
Big data Jobs

Explore top Big data Job opportunities from Top Companies & Startups. All jobs are added by verified employees who can be contacted directly below.
Remote only
4 - 10 yrs
₹30L - ₹60L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

A fast-growing SA technology company is seeking an experienced and highly skilled Senior Hadoop Developer with expertise in Hadoop, Spark, and Python to join its team.


The ideal candidate should have extensive hands-on experience with Hadoop ecosystem technologies, strong programming skills in Python, and a deep understanding of distributed systems, big data processing, and Spark.


Responsibilities:

  • Design, develop, and optimize Hadoop-based solutions for large-scale data processing and analysis.
  • Collaborate with stakeholders to understand business requirements and translate them into technical specifications and Hadoop architecture designs.
  • Write high-quality, scalable, and efficient code in Python and other relevant programming languages for data processing, data ingestion, and data transformations.
  • Implement and maintain data processing workflows using Apache Spark, leveraging its distributed computing capabilities for efficient data analysis and machine learning tasks.
  • Collaborate with data engineers and data scientists to integrate machine learning models and algorithms into Hadoop-based systems.
  • Optimize performance and troubleshoot issues related to Hadoop, Spark, and Python-based applications and workflows.
  • Ensure data security and compliance by implementing appropriate access controls, data encryption, and data governance practices.
  • Monitor Hadoop clusters and Spark applications, proactively identifying and addressing performance bottlenecks, scalability issues, and system failures.
  • Stay updated with the latest advancements and best practices in Hadoop, Spark, Python, and big data technologies, and provide guidance and mentorship to junior team members.
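For a concrete flavor of the Spark-on-Hadoop work described above, here is a minimal PySpark batch job sketch; the paths, schema, and app name are invented for illustration and are not from this posting:

```python
# Minimal PySpark batch ETL sketch: read raw JSON events from HDFS,
# clean and deduplicate them, and write date-partitioned Parquet.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-etl").getOrCreate()

raw = spark.read.json("hdfs:///data/raw/events/")  # hypothetical source path

cleaned = (
    raw.filter(F.col("event_type").isNotNull())           # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))   # derive partition key
       .dropDuplicates(["event_id"])                      # keep re-runs idempotent
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("hdfs:///data/curated/events/"))
```

Partitioning by a derived date column keeps downstream Hive and Spark queries pruned to only the days they need.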


Requirements:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 4+ years of professional experience as a Hadoop Developer, with a focus on Hadoop, Spark, and Python.
  • Strong hands-on experience with Hadoop ecosystem technologies, such as HDFS, MapReduce, Hive, Pig, HBase, and YARN.
  • In-depth knowledge and practical experience with Apache Spark, including Spark SQL, Spark Streaming, and Spark MLlib.
  • Proficiency in programming languages such as Python, Java, or Scala, with a strong emphasis on Python.
  • Solid understanding of distributed systems, parallel computing, and big data processing concepts.
  • Experience with data ingestion, ETL processes, and data integration using tools like Apache Nifi, Kafka, or Sqoop.
  • Familiarity with cloud-based big data platforms, such as AWS EMR or Google Cloud Dataproc, is a plus.
  • Strong problem-solving skills and the ability to work independently as well as part of a collaborative team.
  • Excellent communication skills and the ability to effectively articulate complex technical concepts to both technical and non-technical stakeholders.


Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
2 - 4 yrs
₹5L - ₹16L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:

▪ 2-4 years of overall experience in ETL, data pipelines, data warehouse development, and database design
▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, and YARN/Mesos
▪ Expert in SQL, with 2+ years of advanced SQL work (a windowing sketch follows this list)
▪ Good development skills in Java, Python, or other languages
▪ Experience with EMR and S3
▪ Knowledge of and exposure to BI applications, e.g. Tableau, QlikView
▪ Comfortable working in an agile environment
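The windowing sketch referenced above: a hedged PySpark example that runs window functions (per-customer ranking and a running total) over a tiny invented orders table:

```python
# Window-function sketch via Spark SQL; the orders data is made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("windowing-demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "o1", 120.0), (1, "o2", 80.0), (2, "o3", 200.0)],
    ["customer_id", "order_id", "amount"],
)
orders.createOrReplaceTempView("orders")

spark.sql("""
    SELECT customer_id,
           order_id,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY amount DESC) AS amount_rank,
           SUM(amount) OVER (PARTITION BY customer_id) AS customer_total
    FROM orders
""").show()
```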


at iLink Systems

Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop
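One common way the Kafka and Spark skills above combine in practice is Spark Structured Streaming reading from a topic. A hedged sketch follows, shown in PySpark for brevity even though the role lists Scala; the broker, topic, and sink are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath:

```python
# Consume a Kafka topic with Spark Structured Streaming.
# Requires the spark-sql-kafka-0-10 connector package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "events")                     # placeholder topic
          .load())

# Kafka delivers key/value as binary; cast the value to string before parsing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (parsed.writeStream
         .format("console")    # stand-in sink for demonstration
         .outputMode("append")
         .start())
query.awaitTermination()
```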



at Shiprocket

Posted by Kailuni Lanah
Gurugram
4 - 10 yrs
₹25L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

We are seeking an experienced Senior Data Platform Engineer to join our team. The ideal candidate should have extensive experience with PySpark, Airflow, Presto, Hive, Kafka, and Debezium, and should be passionate about developing scalable and reliable data platforms.

Responsibilities:

  • Design, develop, and maintain our data platform architecture using PySpark, Airflow, Presto, Hive, Kafka, and Debezium.
  • Develop and maintain ETL processes to ingest, transform, and load data from various sources into our data platform.
  • Work closely with data analysts, data scientists, and other stakeholders to understand their requirements and design solutions that meet their needs.
  • Implement and maintain data governance policies and procedures to ensure data quality, privacy, and security.
  • Continuously monitor and optimize the performance of our data platform to ensure scalability, reliability, and cost-effectiveness.
  • Keep up-to-date with the latest trends and technologies in the field of data engineering and share knowledge and best practices with the team.
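To illustrate the kind of orchestration this role involves, here is a minimal Airflow DAG sketch; the DAG id, task names, and logic are placeholders, not Shiprocket's actual pipelines:

```python
# Two-step daily ETL DAG: extract, then transform-and-load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g. pull change events captured by Debezium from Kafka

def transform_load():
    ...  # e.g. submit a PySpark job that writes to the Hive warehouse

with DAG(
    dag_id="example_etl",              # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform_load", python_callable=transform_load)
    t1 >> t2  # extract runs before transform_load
```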

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering or related fields.
  • Strong proficiency in PySpark, Airflow, Presto, Hive, data lakes, and Debezium.
  • Experience with data warehousing, data modeling, and data governance.
  • Experience working with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure).
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Excellent communication and interpersonal skills.

If you are a self-motivated and driven individual with a passion for data engineering and a strong background in PySpark, Airflow, Presto, Hive, data lakes, and Debezium, we encourage you to apply for this exciting opportunity. We offer competitive compensation, comprehensive benefits, and a collaborative work environment that fosters innovation and growth.


at Thoughtworks

Posted by nadeem Shaikh
Pune, Gurugram
5 - 9 yrs
Best in industry
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities:

 

  • You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
  • You will collaborate with Data Scientists in order to design scalable implementations of their models
  • You will pair to write clean and iterative code based on TDD
  • Leverage various continuous delivery practices to deploy, support and operate data pipelines
  • Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
  • Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Create data models and speak to the tradeoffs of different modeling approaches
  • Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
  • Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
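As a toy illustration of the TDD style mentioned above, a pipeline transformation can be written test-first as a small pure function; the names and data are invented:

```python
# Test-first sketch: the pytest test pins the behavior of a small,
# pure deduplication step before it is wired into a pipeline.
def dedupe_events(events):
    """Keep the first occurrence of each event_id, preserving order."""
    seen, result = set(), []
    for event in events:
        if event["event_id"] not in seen:
            seen.add(event["event_id"])
            result.append(event)
    return result

def test_dedupe_events_keeps_first_occurrence():
    events = [{"event_id": 1, "v": "a"},
              {"event_id": 1, "v": "b"},
              {"event_id": 2, "v": "c"}]
    assert dedupe_events(events) == [{"event_id": 1, "v": "a"},
                                     {"event_id": 2, "v": "c"}]
```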

 

Job qualifications:

 

Technical skills:

 

  • You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
  • You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
  • Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
  • You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
  • You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

 

Professional skills:

 

  • You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
  • An interest in coaching, sharing your experience and knowledge with teammates
  • You enjoy influencing others and always advocate for technical excellence while being open to change when needed
  • Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more

 

Other things to know:

 

L&D:

 

There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.

 

About Thoughtworks:

 

Thoughtworks is a global technology consultancy that integrates strategy, design and engineering to drive digital innovation. For 28+ years, our clients have trusted our autonomous teams to build solutions that look past the obvious. Here, computer science grads come together with seasoned technologists, self-taught developers, midlife career changers and more to learn from and challenge each other. Career journeys flourish with the strength of our cultivation culture, which has won numerous awards around the world.

 

Join Thoughtworks and thrive. Together, our extra curiosity, innovation, passion and dedication overcomes ordinary.


at Incedo Inc.

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Gurugram
5 - 8 yrs
₹5L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+13 more

Role: Data Engineer

Total Experience: 5 to 8 Years

Job Location: Gurgaon

Budget: 26-28 LPA


Must have - Technical & Soft Skills:

  • Python: Data Structures, List, Libraries, Data engineering basics
  • SQL: Joins, Groups, Aggregations, Windowing functions, analytic functions etc.
  • Worked with AWS services: S3, EC2, Glue, Data Pipeline, Athena, and Redshift
  • Solid hands-on working experience in Big Data technologies
  • Strong hands-on experience with programming languages like Python and Scala, with Spark
  • Good command of and working experience with Hadoop/MapReduce, HDFS, Hive, HBase, and NoSQL databases
  • Hands-on working experience with a data engineering/analytics platform, AWS preferred
  • Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie
  • Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka/Flink/Spark Streaming)
  • Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation (an Athena sketch follows this list)
  • Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring, etc.)
  • Feature engineering/data processing to be used for model development
  • Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
  • Hands-on working experience in analyzing source system data and data flows, working with structured and unstructured data
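The Athena sketch referenced in the list above: a hedged example of querying S3-resident data with boto3. The region, database, bucket, and query are placeholders:

```python
# Kick off an Athena query over data in S3 and print the execution id.
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution() until finished
```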

at Thoughtworks

Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+9 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+3 more

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role will require managing, processing, and analyzing large amounts of raw information in scalable databases. It will also involve developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have critical thinking and problem-solving skills, be experienced in software development with advanced algorithms, and be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should have some exposure to cloud environments, continuous integration, and agile scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule

• Software development that creates data-driven intelligence in products dealing with Big Data backends

• Exploratory analysis of the data to be able to come up with efficient data structures and algorithms for given requirements

• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development

• Managing data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)

• Creating metrics and evaluating algorithms for better accuracy and recall

• Ensuring efficient access and usage of data through the means of indexing, clustering etc.

• Collaborate with engineering and product development teams.


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains (Computer Science, Information Technology, Information Systems, or a related field) from a top-tier school

• OR Master’s degree or higher in Statistics or Mathematics, with a hands-on background in software development.

• 8 to 10 years of experience in product development, including algorithmic work

• 5+ years of experience working with large data sets or doing large-scale quantitative analysis

• Understanding of SaaS-based products and services.

• Strong algorithmic problem-solving skills

• Able to mentor and manage a team and take responsibility for team deadlines.


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Having worked with Agile project management practices

• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.)

• Strong understanding of SQL and of querying NoSQL databases (e.g. Mongo, Cassandra, Redis); a pymongo indexing sketch follows below
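On the indexing point above, a hedged pymongo sketch shows how a compound index keeps a common query off a full collection scan; the connection string, collection, and field names are invented:

```python
# Compound index on (user_id, ts): filter by user, newest first.
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
events = client["analytics"]["events"]             # placeholder collection

events.create_index([("user_id", ASCENDING), ("ts", DESCENDING)])

# This query can now be served from the index instead of a collection scan.
recent = events.find({"user_id": 42}).sort("ts", DESCENDING).limit(10)
```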

Posted by Aravind Kumar
Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Software Testing (QA)
Test Automation (QA)
Appium
Selenium
Java
+12 more

Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.

• 4+ years of work experience in test planning and automation of enterprise software

• Expertise in programming using Java or Python and other scripting languages.

• Experience with one or more public clouds is expected.

• Comfortable with build processes, CI processes, and managing QA environments, as well as working with build management tools like Git and Jenkins

• Experience with performance and scalability testing tools.

• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.

• Familiarity with system flows and how components such as Elasticsearch, Mongo, Kafka, Hive, Redis, and AWS interact with an application
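A minimal sketch of the kind of backend test automation this role describes, using pytest with requests; the base URL, endpoint, and payload are invented:

```python
# Create-then-fetch round trip against a hypothetical REST service.
import requests

BASE_URL = "http://localhost:8080"  # assumed QA environment

def test_create_and_fetch_item():
    created = requests.post(f"{BASE_URL}/items", json={"name": "widget"})
    assert created.status_code == 201
    item_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/items/{item_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "widget"
```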


Mysore
4 - 6 yrs
₹10L - ₹20L / yr
Data modeling
ETL
Oracle
MS SQLServer
MongoDB
+4 more

Required Skills:


• Minimum of 4-6 years of experience in data modeling (including conceptual, logical, and physical data models)
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, DataStage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL, or another enterprise database, with a focus on building data integration processes
• Candidate should have exposure to a NoSQL technology, preferably MongoDB
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.)
• Understanding of data warehousing concepts and decision support systems.


• Ability to deal with sensitive and confidential material and adhere to worldwide data security practices
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.

Koramangala
3 - 6 yrs
₹6L - ₹10L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+7 more

KEY RESPONSIBILITIES

  • Building a website based on the given requirements and ensure it’s successfully deployed
  • Responsible for designing, planning, and testing new web pages and site features
  • A propensity for brainstorming and coming up with solutions to open-ended problems
  • Work closely with other teams, and project managers, to understand all stakeholders’ requirements and ensure that all specifications and requirements are met in final development
  • Troubleshoot and solve problems related to website functionality
  • Takes ownership of initiatives and drives them to completion.
  • Desire to learn and dive deep into new technologies on the job, especially around modern data storage and streaming open source systems
  • Responsible for creating, optimizing, and managing REST APIs
  • Create website content and enhance website usability and visibility
  • Ensure cross-browser compatibility and testing for mobile responsiveness
  • Ability to integrate payment processing and search functionality software solutions
  • Stay up-to-date with technological advancements and the latest coding practices
  • Collaborate with the team of designers, content managers, and developers to determine site goals, functionality, and layout
  • Monitor website traffic and the overall system’s health with Google Analytics to ensure a high GTmetrix score
  • Build the front-end of applications through appealing visual design
  • Design client-side and server-side architecture
  • Develop server-side logic and APIs that integrate with front-end applications.
  • Architect and design complex database structures and data models.
  • Develop and implement backend systems to support scalable and high-performance web applications.
  • Create automated tests to ensure system stability and performance.
  • Ensure security and data privacy measures are maintained throughout the development process.
  • Maintain an up-to-date changelog for all new, updated, and fixed changes.
  • Ability to document and manage all the software design, requirements, reusable & transferable code, and other technical aspects of the project.
  • Create and convert storyboards and wireframes into high-quality full-stack code
  • Write, execute, and maintain clean, reusable, and scalable code
  • Design and implement low-latency, high-availability, and performant applications
  • Implement security and data protection
  • Ensure code that is platform and device-agnostic

EDUCATION & SKILLS REQUIREMENT

  • B.Tech. / BE / MS degree in Computer Science or Information Technology
  • Expertise in MERN stack (MongoDB, Express.js, React.js, Node.js)
  • Should have prior working experience of at least 3 years as web developer or full stack developer
  • Should have done projects in e-commerce or have preferably worked with companies operating in e-commerce
  • Should have expert-level knowledge in implementing frontend technologies
  • Should have worked in creating backend and have deep understanding of frameworks
  • Experience in the complete product development life cycle
  • Hands-on experience with JavaScript, HTML, CSS, jQuery, JSON, PHP, XML
  • Proficiency in databases, including analytical ones (e.g., MySQL, MongoDB, PostgreSQL, DynamoDB, Redis, Hive, Elastic, etc.)
  • Knowledge of architecting or implementing search APIs
  • Great understanding of data modeling and RESTful APIs
  • Strong knowledge of CS fundamentals, data structures, algorithms, and design patterns
  • Strong analytical, consultative, and communication skills
  • Excellent understanding of Microsoft Office tools: Excel, Word, PowerPoint, etc.
  • Excellent organizational and time management skills
  • Experience with responsive and adaptive design (Web, Mobile & App)
  • Should be a self-starter with the ability to work without supervision
  • Excellent debugging and optimization skills
  • Experience building high throughput/low latency systems.
  • Knowledge of big data systems such as Cassandra, Elastic, Kafka, Kubernetes, and Docker
  • Should be willing to be part of a small team and work in a fast-paced environment
  • Should be highly passionate about building products that create a significant impact.
  • Should have experience in user experience design, website optimization techniques and different PIM tools



at Freestone Infotech Pvt. Ltd.

Posted by Genevieve Mascarenhas
Mumbai, Pune
8 - 13 yrs
₹15L - ₹25L / yr
Java
J2EE
Big Data
Amazon Web Services (AWS)
Spring
+1 more
Position Title: Senior Software Developer

Opportunity to work with a Silicon Valley based security and governance start-up.

About Privacera

Privacera, Inc is a California based start-up company that is looking for Senior Software Engineers to work out of our Mumbai based office. Privacera is a cloud-based product which uses Cloud native services in AWS, Azure and GCP. Privacera is a fast-growing start-up and provides ample opportunity to work on all Cloud services like AWS S3, DynamoDB, Kinesis, RedShift, EMR, Azure ADLS, HDInsight, GCP GCS, GCP PubSub and other services.

https://www.privacera.com

About our Company

Freestone Infotech is a global IT solutions company providing innovative best-in-class turnkey solutions to enterprises worldwide and is a partner to Privacera in Mumbai.

http://freestoneinfotech.com/

We are looking for motivated individuals who have worked on Cloud and Big Data or are keen on working on Cloud or Big Data services. If you want to work in a start-up culture and are ready for the challenge, then join us on our exciting journey.

Experience: 5+ yrs
Experience: 5+ yrs
Core Experience:
• Experience in Core Java, J2EE, Spring/Spring Boot, Hibernate, Spring REST, Linux, JUnit, Maven, Design Patterns.
• Sound knowledge of RDBMS like MySQL/Postgres, including schema design.
• Proficient in general programming, logic, problem solving, data structures & algorithms.
• Exposure to a Linux environment.

Secondary Skills:
• Agile / Scrum development experience preferred.
• Comfortable working with a microservices architecture and familiarity with NoSQL solutions.
• Experience in Test Driven Development.
• Good analytical, grasping and problem-solving skills.
• Excellent written and verbal communication skills.
• Hands-on skills in configuration of popular build tools, like Maven and Gradle.
• Good knowledge of testing frameworks such as JUnit.
• Good knowledge of coding standards, source code organization and packaging/deploying.
• Good knowledge of current and emerging technologies and trends.

Job Responsibilities:
• Design, development and delivery of Java based enterprise-grade applications.
• Ensure best practices, quality and consistency within various design and development phases.
• Develop, test, implement and maintain application software working with established processes.

Education and Experience:
• Bachelor’s / Master’s degree in Computer Science or Information Technology or a related field

at Aidetic

Posted by Suparna Ghosh
Bengaluru (Bangalore)
2 - 3 yrs
₹4L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
Elastic Search
MongoDB
+7 more

Responsibilities:


● Designing, building and maintaining efficient, reusable, and reliable architecture and code
● Participate in the architecture and system design discussions

● Independently perform hands on development/coding and unit testing of the applications

● Collaborate with the development and AI teams and build individual components into complex enterprise web systems

● Work in a team environment with product, frontend design, production operation, QE/QA and cross functional teams to deliver a project throughout the whole software development cycle

● Architect and implement CI/CD strategy for EDP

● Implement high velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred)

● Ensure the best possible performance and quality of high scale web applications and services
● Identify and resolve any performance issues

● Keep up to date with new technology development and implementation

● Participate in code review to make sure standards and best practices are met

● Migrate data from traditional relational database systems, file systems, NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift

● Migrate data from AWS DynamoDB to relational databases such as PostgreSQL (a migration sketch follows this list)

● Migrate data from APIs to AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift

● Work closely with the Data Scientist leads, CTO, Product, Engineering, DevOps and other members of the Ai Science teams

● Collaborate with the product team, share feedback from project implementations and influence the product roadmap.

● Be comfortable in a highly dynamic, agile environment without sacrificing the quality of work products.
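The migration sketch referenced above: a hedged example of the DynamoDB-to-PostgreSQL bullet, where the table names, columns, and credentials are placeholders and a real job would paginate and batch:

```python
# Copy rows from a DynamoDB table into PostgreSQL with boto3 + psycopg2.
import boto3
import psycopg2

dynamo = boto3.resource("dynamodb", region_name="us-east-1")  # placeholder region
table = dynamo.Table("orders")                                # placeholder table

conn = psycopg2.connect("dbname=analytics user=etl password=secret host=localhost")
cur = conn.cursor()

scan = table.scan()  # for large tables, loop on LastEvaluatedKey to paginate
rows = [(item["order_id"], item["status"]) for item in scan["Items"]]

cur.executemany("INSERT INTO orders (order_id, status) VALUES (%s, %s)", rows)
conn.commit()
cur.close()
conn.close()
```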


Position Requirements:


● Bachelor's degree in Computer Science, Software Engineering, MIS or equivalent combination of education and experience

● 5+ years of experience as a data application developer

● AWS Solutions Architect or AWS Developer Certification preferred

● Experience implementing software applications supporting data lakes, data warehouses and data applications on AWS for large enterprises

● Solid Programming experience with Python, Shell scripting and SQL

● Solid experience of AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DataSync, DMS, DynamoDB, Lambda, Step Functions, IAM, KMS, SM etc.

● Solid experience implementing solutions on AWS based data lakes.

● Experience in AWS data lake/data warehouse/business analytics

● Experience in system analysis, design, development, and implementation of data ingestion pipeline in AWS

● Knowledge of ETL/ELT

● End-to-end data solutions (ingest, storage, integration, processing, access) on AWS
● Experience developing business applications using NoSQL/SQL databases.

● Experience working with Object stores(S3) and JSON is must have

● Should have good experience with AWS Services – Glue, Lambda, Step Functions, SQS, DynamoDB, S3, Redshift, RDS, Cloudwatch and ECS.

● Should have hands-on experience with Python, Django

● Great knowledge of Data Science models

● Plus to have knowledge on Snowflake


Nice to have:


● Solid experience in AWS AI solutions such as Rekognition, Comprehend and Transcribe

● Python, NodeJS, .NetCore, C#, Reactjs, RestAPI, Microservices, Postman, GraphQL, Mongo, Linux, Javascript, HTML5, CSS, Django


To apply directly, fill in the form: https://forms.gle/z1Zhz32oHkNmANFV8


at Mobile Programming LLC

Posted by Sukhdeep Singh
Gurugram
4 - 7 yrs
₹10L - ₹15L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
Microservices
+12 more

Job description

  • Engage with the business team and stakeholders at different levels to understand business needs; analyze, document, and prioritize the requirements; and make recommendations on the solution and implementation.
  • Deliver a product that meets business requirements, reliability, scalability, and performance goals
  • Work with the Agile scrum team to create the scrum team strategy roadmap/backlog, and develop a minimum viable product and Agile user stories that drive a highly effective and efficient project development and delivery scrum team.
  • Work on data mapping/transformation, solution design, process diagrams, acceptance criteria, user acceptance testing, and other project artifacts.
  • Work effectively with the technical/development team and help them understand the specifications/requirements for technical development, testing, and implementation.
  • Ensure solutions promote simplicity and efficiency, and conform to enterprise and architecture standards and guidelines.
  • Partner with the support organization to provide training, support, and technical assistance to the operations team and end users as necessary
  • Product/Application Developer
  • Designs and develops software applications based on user requirements in a variety of coding environments such as graphical user interface, database query languages, report writers, and specific development languages
  • Consult on the use and implementation of software products and applications and specialize in the business development environment, including the selection of development tools and methodology

Primary / Mandatory skills:


  • Overall experience: 4 to 6 years of IT development experience
  • Design and code Node.js-based microservices, API web services, and NoSQL technologies (Cassandra/MongoDB)
  • Expert in developing code for Node.js-based microservices in TypeScript
  • Good experience in understanding data transmission through pub/sub mechanisms like Event Hubs and Kafka
  • Good understanding of analytics and clickstream data capture is a HUGE plus
  • Good understanding of frameworks like Java Spring Boot, Python is preferred
  • Good understanding of Microsoft Azure principles and services is preferred
  • Able to write unit test cases
  • Familiarity with performance testing tools such as Akamai SOASTA is preferred
  • Good knowledge of source code control tools like Git, and understanding of CI/CD (Jenkins and Kubernetes)
  • Solid technical background with understanding and/or experience in software development and web technologies
  • Strong analytical skills and the ability to convert consumer insights and performance data into high impact initiatives
  • Experience working within scaled agile development team
  • Excellent written and verbal communication skills with demonstrated ability to present complex technical information in a clear manner to peers, developers, and senior leaders
  • The desire to be continually learning about emerging technologies/industry trends


Posted by Joshua YAP
Remote only
3 - 6 yrs
S$3K - S$9K / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
EKS
+3 more

We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:


  • Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
  • Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
  • Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
  • Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
  • Maintaining a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure through a combination of innovative tools, excellent documentation and teamwork


Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).



Pune
7 - 12 yrs
₹15L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

8+ years of experience, including:

MS SQL Server

Big Data – Hadoop Stack (HDFS, Hive)

Big Data – Basics of Hadoop Cluster Set Up / Job Monitoring / Troubleshooting

Data Pipeline Design – Experience in designing data processing flows / pipelines

Complex Query Writing (Joins, aggregations, analytical functions, etc.)

Exposure to Stored Procedure Development

Basics of Unix Shell Scripting

Excellent Written and Verbal Communication Skills (Regular Client Interaction Expected)

Team-leading experience is required

Ability to work independently along with leading the team, take technical ownership, mentor team members

Good problem-solving skills


Good to Have:

Experience / Exposure to Apache Spark

Knowledge of Programming (any language Java / Scala / Python)


Team Lead Role – Take Technical Ownership

Design & Maintain Data Pipeline Solutions in SQL Server / Big Data technologies as per customer requirement

Mentor and groom Team Members

Perform individual tasks along with the lead role

Regularly interact with the customer and their partners over e-mail, meetings, etc.


Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹20L / yr
Amazon Web Services (AWS)
Amazon EMR
EMR
Spark
PySpark
+9 more

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. The E-commerce of any industry is limiting and poses a huge challenge in terms of the finances spent on physical data structures.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


What we are looking for:

● 3+ years of experience developing data & analytics solutions

● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive & Spark

● Experience with relational SQL

● Experience with scripting languages such as Shell, Python

● Experience with source control tools such as GitHub and related dev process

● Experience with workflow scheduling tools such as Airflow

● In-depth knowledge of scalable cloud

● Has a passion for data solutions

● Strong understanding of data structures and algorithms

● Strong understanding of solution and technical design

● Has a strong problem-solving and analytical mindset

● Experience working with Agile Teams.

● Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders

● Able to quickly pick up new programming languages, technologies, and frameworks

● Bachelor’s Degree in computer science


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations of the US, London, Poland and Bengaluru, we help build your career paths in the cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Python
PySpark
apache airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.

● Working with our engineering team to support custom solutions offered to product development.

● Filling the gap between development, engineering and data ops.

● Creating, maintaining and documenting scripts to support ongoing custom solutions.

● Excellent organizational skills, including attention to precise details

● Strong multitasking skills and ability to work in a fast-paced environment

● 5+ years of experience with Python for developing scripts.

● Know your way around RESTful APIs (able to integrate; not necessarily publish).

● You are familiar with pulling and pushing files from SFTP and AWS S3.

● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.

● Familiarity with SQL programming to query and transform data from relational databases.

● Familiarity working with Linux (and a Linux work environment).

● Excellent written and verbal communication skills

● Extracting, transforming, and loading data into internal databases and Hadoop

● Optimizing our new and existing data pipelines for speed and reliability

● Deploying product build and product improvements

● Documenting and managing multiple repositories of code

● Experience with SQL and NoSQL databases (Cassandra, MySQL)

● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, RedShift, Athena)

● Hands-on experience in Airflow

● Understanding of best practices, common coding patterns, and good practices around storing, partitioning, warehousing, and indexing of data

● Experience in reading data from Kafka topics (both live stream and offline)

● Experience in PySpark and DataFrames (an SFTP/S3 sketch follows the responsibilities below)

Responsibilities:

You’ll be:

● Collaborating across an agile team to continuously design, iterate, and develop big data systems.

● Extracting, transforming, and loading data into internal databases.

● Optimizing our new and existing data pipelines for speed and reliability.

● Deploying new products and product improvements.

● Documenting and managing multiple repositories of code.
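The SFTP/S3 sketch referenced above, using paramiko and boto3; the host, credentials, bucket, and paths are placeholders:

```python
# Pull a file from SFTP, then push it to S3.
import boto3
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))   # placeholder host
transport.connect(username="etl_user", password="secret")  # placeholder creds
sftp = paramiko.SFTPClient.from_transport(transport)

sftp.get("/outbound/daily_feed.csv", "/tmp/daily_feed.csv")  # download from SFTP
sftp.close()
transport.close()

s3 = boto3.client("s3")
s3.upload_file("/tmp/daily_feed.csv", "example-bucket", "raw/daily_feed.csv")
```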


at DeepIntent

Posted by Indrajeet Deshmukh
Pune
3 - 5 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

About DeepIntent:

DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.


What You’ll Do:

  • Establish a formal data practice for the organisation.
  • Build & operate scalable and robust data architectures.
  • Create pipelines for the self-service introduction and usage of new data.
  • Implement DataOps practices.
  • Design, develop, and operate data pipelines which support data scientists and machine learning engineers.
  • Build simple, highly reliable data storage, ingestion, and transformation solutions which are easy to deploy and manage.
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.

Who You Are:

  • Experience in designing, developing and operating configurable data pipelines serving high-volume and high-velocity data.
  • Experience working with public clouds like GCP/AWS.
  • Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
  • Proficient with SQL, Python or a JVM-based language, and Bash.
  • Experience with any of the Apache open source projects such as Spark, Druid, Beam, Airflow, etc. and big data databases like BigQuery, Clickhouse, etc.
  • Good communication skills with the ability to collaborate with both technical and non-technical people.
  • Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious.

 


at DeepIntent

Posted by Indrajeet Deshmukh
Pune
1 - 3 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Senior Software Engineer - Data 

 

Job Description:

We are looking for a tech-savvy Data Engineer to join our growing data team. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The hire must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

 

What You’ll Do:


  • Establish formal data practices for the organization
  • Build & operate scalable and robust data architectures
  • Create pipelines for the self-service introduction and usage of new data
  • Implement data ops practices
  • Design, develop, operate data Pipelines which support data scientists and machine learning Engineers
  • Build simple, highly reliable data storage, ingestion, transformation solutions which are easy to deploy and manage
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, analysts

Who You Are:


  • Experience in designing, developing and operating configurable data pipelines serving high volume and velocity data
  • Experience working with public clouds like GCP/AWS
  • Good understanding of software engineering, data ops, and data architecture, Agile and DevOps methodologies
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
  • Proficient with SQL, Python or JVM based language, Bash
  • Experience with any of Apache open source projects such as Spark, Druid, Beam, Airflow etc. and big data databases like BigQuery, Clickhouse, etc
  • Good communication skills with the ability to collaborate with both technical and non-technical teams
  • Ability to think big, take bets and innovate, learn and be curious.


Bengaluru (Bangalore)
3 - 6 yrs
₹7L - ₹11L / yr
Business Analysis
Digital Marketing
Apache Hive
SQL
Apache Pig
● Develop a deep domain understanding of the customer data platform and marketing automation space, especially key digital marketing and performance metrics and how use cases set up and executed on the company platform move these metrics
● Combine them with knowledge of the customer’s domain and their business objectives to provide insights on how they can be achieved with the company platform
● Execute quantitative analysis of customer data – typically user behavior, marketing campaign and conversion data (a toy pandas example follows this list)
● Derive insights from potentially incomplete data sets by creatively obtaining and vetting additional data from engineering and data analytics teams
● Work with customer success and operations teams to understand the business implications of analysis, and convert them into insights and recommendations that can be presented to senior customer audiences
● Set up business analysis queries and processes to be scalable across a broad set of customers
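The toy pandas example referenced above: computing per-campaign conversion rates from invented user-behavior data:

```python
# Per-campaign conversion summary; columns and values are made up.
import pandas as pd

events = pd.DataFrame({
    "campaign":  ["a", "a", "b", "b", "b"],
    "user_id":   [1, 2, 3, 4, 5],
    "converted": [1, 0, 1, 1, 0],
})

summary = (events.groupby("campaign")
                 .agg(users=("user_id", "nunique"),
                      conversions=("converted", "sum")))
summary["conversion_rate"] = summary["conversions"] / summary["users"]
print(summary)  # campaign a: 0.5, campaign b: ~0.67
```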

Qualifications

● 3+ years of business analysis experience, preferably in digital marketing
● Strong knowledge of SQL analysis is a must
● Working knowledge of Hive and Pig scripts is a big plus
● Must be a self-starter who will be able to work in a quickly evolving start-up environment
● Detail-oriented with an inquisitive approach to data and business
● Possess the strong sense of ownership and urgency required for success in early-stage startups
● Relevant undergraduate degree; additional qualifications are a plus

at LiftOff Software India

Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation, for our main forte lies in helping entrepreneurs realize their dream. We have helped businesses and entrepreneurs launch more than 70 products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architect, design, and configure the data ingestion pipeline for data received from 3rd-party vendors
  • Data loading should be configurable with ease/flexibility for adding new data sources and also refreshing previously loaded data
  • Design & implement a consumer graph that provides an efficient means to query the data via email, phone, and address information, using any one of the fields or a combination (a minimal identity-resolution sketch follows this list)
  • Expose the consumer graph/search capability for consumption by our middleware APIs, which would be shown in the portal
  • Design / review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage
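The identity-resolution sketch referenced above: records sharing an email, phone, or address collapse into one consumer identity via union-find. The data and field names are purely illustrative:

```python
# Union-find over (record, attribute) nodes: records that share any
# attribute value end up in the same consumer identity group.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

records = [
    {"id": "r1", "email": "a@x.com", "phone": "111"},
    {"id": "r2", "email": "a@x.com", "phone": "222"},
    {"id": "r3", "email": "b@y.com", "phone": "222"},
]

for rec in records:
    for key in ("email", "phone", "address"):
        if rec.get(key):
            union(("rec", rec["id"]), (key, rec[key]))

# r1, r2, r3 resolve to one identity via the shared email and phone links.
groups = {rec["id"]: find(("rec", rec["id"])) for rec in records}
print(groups)
```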


Please note that this is a Consultant role.

Candidates who are okay with freelancing/part-time work can apply.


at Virtusa

Posted by Priyanka Sathiyamoorthi
Chennai
11 - 15 yrs
₹15L - ₹33L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

We are looking for a Big Data Engineer with Java for the Chennai location.

Location: Chennai

Experience: 11 to 15 years



Job description

Required Skill:

1. Candidate should have a minimum of 7 years of total experience

2. Candidate should have a minimum of 4 years of experience in Big Data design and development

3. Candidate should have experience in Java, Spark, Hive & Hadoop, Python 

4. Candidate should have experience in any RDBMS.

Roles & Responsibility:

1. To create work plans, monitor and track the work schedule for on time delivery as per the defined quality standards.

2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.

3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.

4. To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalation


Regards,

Priyanka S

7P8R9I9Y4A0N8K8A7S7


at codersbrain

Posted by Aishwarya Hire
Bengaluru (Bangalore)
4 - 6 yrs
₹8L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
  • Design the architecture of our big data platform
  • Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries
  • Design and implement data stores that support the scalable processing and storage of our high-frequency data
  • Maintain our data pipeline
  • Customize and oversee integration tools, warehouses, databases, and analytical systems
  • Configure and provide availability for data-access tools used by all data scientists



at StepChange

Posted by kannan ka
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹25L / yr
Java
Spring Boot
Microservices
Cassandra
MVC Framework
+4 more
Interested candidates, please apply at: https://tally.so/r/31ALpl
 
About StepChange

StepChange is focused on developing state-of-the-art ESG tools for businesses, with the goal of empowering companies with a suite of sophisticated solutions aimed at measuring, managing and reporting their ESG performance across a comprehensive set of metrics.

We believe that equipping companies with the specialized tools they need to achieve their ESG goals will play a vital role in enabling the transition to a more sustainable global economy and tackling the world's most pressing problem 'Climate Change'.

We are a rapidly growing early-stage global team and are backed by leading VCs. Our founders are MIT and IITB alumni with experience at McKinsey, Ola, Accel, and multiple social enterprises.
 

Requirements:

  • 5+ years of experience in backend software technologies like Java and Spring Boot
  • Experience in working with the MySQL database and prior knowledge of DB modeling
  • Strong background in JS (JavaScript)
  • Strong background in responsive and multi-platform design.
  • Experience in JPA 2.6.3 (Java Persistence API) frameworks
  • Prior experience working with Firebase for authentication
  • Prior experience working with any analytics tool integration (GA / Firebase / Mixpanel)
  • Experience in integrating with web service APIs / REST APIs
  • Experience in working with micro-service architecture
  • Experience integrating with backend infrastructure and manipulating data structures
  • Proficient understanding of client-side scripting and JavaScript frameworks and front-end software design patterns
  • Should have worked on Agile projects (SCRUM)
  • Experience in Unit testing/integration testing of backend APIs using Junit or any unit testing framework
  • Prior experience in working with MVC Architecture
  • Prior experience working with containerized applications and CI/CD architecture
  • Prior experience working with cloud platforms, preferably AWS (Amazon web service)
  • Should be able to work in a dynamic and fast-moving environment
  • Working proficiency and communication skills in verbal and written English
  • Startup experience is a huge plus.
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer - Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & Develop high performance and scalable solutions that meet the needs of our customers.

Closely work with the Product Management, Architects and cross functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java, Python including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like- Hive, Spark, Storm, Flink, Beam, Airflow, Nifi etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one or more general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc. (a short illustration follows).
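
As a hedged illustration of the NoSQL data-store expertise above, a short example with the official Elasticsearch Python client; the host, index name and documents are assumptions for the sketch.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Index a document and make it searchable.
es.index(index="leads", id="1", document={"brand": "acme", "score": 0.92})
es.indices.refresh(index="leads")

# Query for high-scoring leads.
hits = es.search(index="leads", query={"range": {"score": {"gte": 0.9}}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"])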

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Pune
0 - 1 yrs
₹10L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
SQL
+6 more
1. Work closely with senior engineers to design, implement and deploy applications that impact the business with an emphasis on mobile, payments, and product website development
2. Design software and make technology choices across the stack (from data storage to application to front-end)
3. Understand a range of tier-1 systems/services that power our product to make scalable changes to critical path code
4. Own the design and delivery of an integral piece of a tier-1 system or application
5. Work closely with product managers, UX designers, and end users and integrate software components into a fully functional system
6. Work on the management and execution of project plans and delivery commitments
7. Take ownership of product/feature end-to-end for all phases from the development to the production
8. Ensure the developed features are scalable and highly available with no quality concerns
9. Work closely with senior engineers for refining and implementation
10. Manage and execute project plans and delivery commitments
11. Create and execute appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts

at Concentric AI

Posted by Gopal Agarwal
Pune
2 - 10 yrs
₹2L - ₹50L / yr
Software Testing (QA)
Test Automation (QA)
Python
Jenkins
Automation
+9 more
• 3-10 years of experience in test automation for distributed, scalable software
• Good QA engineering background with proven automation skills
• Able to understand, design and define approach for automation (Backend/UI/service)
• Design and develop automation scripts for QA testing and tools for quality measurements
• Good to have knowledge of Microservices, API, Web services testing
• Strong in Cloud Engineering skillsets (performance, response time, horizontal scale testing)
• Expertise using automation tools/frameworks (Pytest, Jenkins, Robot, etc.); a minimal Pytest sketch follows this list
• Expert at one of the scripting languages – Python, shell, etc
• High level system admin skills to configure and manage test environments
• Basics of Kubernetes and databases like Cassandra, Elasticsearch, MongoDB, etc
• Must have worked in agile environment with CI/CD knowledge
• Having security testing background is a plus
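
As a hedged illustration of the Pytest skill above, here is a minimal API-level automation sketch; the base URL, endpoints and auth flow are assumptions for the example, not Concentric specifics.

import pytest
import requests

BASE_URL = "https://api.example.internal"  # assumption: service under test


@pytest.fixture(scope="session")
def auth_headers():
    # Assumption: simple bearer-token auth; swap in the real login flow.
    return {"Authorization": "Bearer test-token"}


def test_health_endpoint_returns_200(auth_headers):
    resp = requests.get(f"{BASE_URL}/health", headers=auth_headers, timeout=5)
    assert resp.status_code == 200


@pytest.mark.parametrize("page_size", [1, 50, 100])
def test_list_endpoint_respects_page_size(auth_headers, page_size):
    resp = requests.get(
        f"{BASE_URL}/items",
        params={"limit": page_size},
        headers=auth_headers,
        timeout=5,
    )
    assert resp.status_code == 200
    assert len(resp.json()["items"]) <= page_size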

at Concentric AI

Posted by Gopal Agarwal
Pune
3 - 10 yrs
₹4L - ₹50L / yr
Docker
Kubernetes
DevOps
Python
Jenkins
+9 more
• 3-10 yrs of industry experience
• Energetic self-starter, fast learner, with a desire to work in a startup environment
• Experience working with Public Clouds like AWS
• Operating and Monitoring cloud infrastructure on AWS
• Primary focus on building, implementing and managing operational support
• Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure (a small monitoring example follows this list)
• Expert at one of the scripting languages – Python, shell, etc
• Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
• Handling load monitoring, capacity planning, services monitoring
• Proven experience with CI/CD pipelines and handling database-upgrade-related issues
• Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
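
A hedged sketch of the AWS monitoring/automation work above: a small boto3 script that lists CloudWatch alarms currently in ALARM state and the EC2 instances they point at. The region and alarm layout are assumptions.

import boto3


def alarms_in_alarm_state(region="ap-south-1"):
    # Assumes AWS credentials are available in the environment.
    cw = boto3.client("cloudwatch", region_name=region)
    paginator = cw.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            dims = {d["Name"]: d["Value"] for d in alarm.get("Dimensions", [])}
            yield alarm["AlarmName"], dims.get("InstanceId", "n/a")


if __name__ == "__main__":
    for name, instance in alarms_in_alarm_state():
        print(f"{name}: {instance}")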
Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark:

• At least 3 to 4 years of relevant experience as a Big Data Engineer, with a minimum of 1 year of hands-on experience in the Spark framework.
• Minimum 4 years of application development experience using a programming language such as Scala, Java, or Python.
• Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, or Impala.
• Strong programming experience building applications/platforms using Scala, Java, or Python.
• Experienced in implementing Spark RDD transformations and actions to implement business analysis (a short PySpark sketch follows).
• An efficient interpersonal communicator with sound analytical, problem-solving, and management capabilities.
• Strives to keep the slope of the learning curve high; able to quickly adapt to new environments and technologies.
• Good knowledge of the agile methodology of software development.
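
A minimal, hedged PySpark sketch of the RDD transformations and actions mentioned above (a word count over a hypothetical HDFS path; the input location is an assumption):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/events/*.log")   # assumption: input path
counts = (
    lines.flatMap(lambda line: line.split())       # transformation
         .map(lambda word: (word, 1))              # transformation
         .reduceByKey(lambda a, b: a + b)          # transformation
)
for word, n in counts.take(10):                    # action
    print(word, n)
spark.stop()
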
Hyderabad
7 - 12 yrs
₹12L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Skills

• At least 7 years of proficient experience with Hadoop.
• At least 2 years of hands-on experience with AWS - EMR/S3 and other AWS services and dashboards.
• At least 2 years of solid experience with the Spark framework.
• Good understanding of the Hadoop ecosystem, including Hive, MapReduce, Spark and Zeppelin.
• Responsible for troubleshooting and recommendations for Spark and MR jobs; should be able to use existing logs to debug issues.
• Responsible for implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning and troubleshooting.
• Triage production issues as they occur with other operational teams.
• Hands-on ability to troubleshoot incidents, formulate theories, test hypotheses and narrow down possibilities to find the root cause.

at Classplus

Posted by Peoples Office
Noida
8 - 10 yrs
₹35L - ₹55L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+16 more

About us

 

Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.

 

Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured “Series-D” funding.

 

Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!

 

 

What will you do?

 

· Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective

 

· Create standardized tooling and templates for development teams to create CI/CD pipelines

 

· Ensure infrastructure is created and maintained using terraform

 

· Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.

 

· Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation

 

· Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs

 

 

You should apply, if you

 

 

1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)

 

2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.

 

3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning

 

4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform or Microsoft Azure), and have experience in creating and managing infrastructure completely through a tool like Terraform

 

5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s

 

6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)

 

7. Bring that extra zing: Have the ability to program/script, and strong fundamentals in Linux and networking.

 

8. Know your toys: Have a good understanding of Microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self hosted environments, that’s a plus

 

 

Being Part of the Clan

 

At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!

 

It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️

 

Are you a go-getter with the chops to nail what you do? Then this is the place for you.

Posted by Phani Kumar
Hyderabad
7 - 15 yrs
₹20L - ₹35L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
React.js
+6 more

Senior Software Developer

Location: Hyderabad (Work From Office, Hybrid)

Permanent Position

 

Technical skills in most of the following areas:

  • Node.js, ReactJS and Redux, Saga, GraphQL, REST APIs, HTML, CSS/CSS3, JavaScript, TypeScript, and the Serverless.com framework (or knowledge of/experience with AWS SAM, Lambda, S3, CloudWatch and DynamoDB); knowledge of or experience with Cassandra and MySQL databases.
  • Should be able to communicate vision with enthusiasm
  • Demonstrated ability to recognize problems, recommend solutions, and collaboratively implement changes
  • Excellent interpersonal and communication skills, strong analytical skills, and the ability to interface effectively with all levels within the organization, including executive and senior management teams.
  • Should be capable to undertake innovative & critical thinking that challenges the status quo with ideas that create significant business value.
  • Independent thinker and researcher

at Concentric AI

Posted by Gopal Agarwal
Pune
4 - 10 yrs
₹10L - ₹45L / yr
Python
Shell Scripting
DevOps
Amazon Web Services (AWS)
Infrastructure architecture
+7 more
About us:

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.

There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.

Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.

That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.

Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/

Title: Cloud DevOps Engineer 

Role: Individual Contributor (4-8 yrs)  

      

Requirements: 

  • Energetic self-starter, a fast learner, with a desire to work in a startup environment  
  • Experience working with Public Clouds like AWS 
  • Operating and Monitoring cloud infrastructure on AWS. 
  • Primary focus on building, implementing and managing operational support 
  • Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure. 
  • Expert at one of the scripting languages – Python, shell, etc  
  • Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc 
  • Handling load monitoring, capacity planning, and services monitoring. 
  • Proven experience with CI/CD pipelines and handling database-upgrade-related issues.
  • Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc

at ZeMoSo Technologies

Posted by HR Team
Remote only
4 - 6 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+8 more

What will you do?

 

You will help build cutting-edge products in various verticals.

You will have to understand the solution domain and understand/architect the data flow.

You will be accountable for the data models and data pipelines driving the solution.

You will also be researching, and iterating for better solutions and this would involve staying up to speed with the latest technologies in the data space.

 

Skills Required:

Should have a clear understanding of one or more of the below technologies –

 

• Database: PostgreSQL, MySQL etc.

 

• BI Reporting: QlikView, Qliksense, SSRS, Tableau & Power BI.

 

• Cloud – One of AWS, Azure, GCP

 

• Big Data – Spark SQL, Scala, PySpark, Redshift, Hive, HDFS, Cloudera (a short Spark SQL sketch follows)
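
As one hedged illustration of the Spark SQL / PySpark skills listed above: load a Parquet dataset, register it as a view, and aggregate with SQL. The S3 path and column names (order_date, amount) are assumptions for the example.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/orders/")  # assumed path
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")
daily.show()
spark.stop()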

About us:

Zemoso Technologies is a Software Product Market Fit Studio that brings silicon valley style rapid prototyping and rapid application builds to Entrepreneurs and Corporate innovation. We offer Innovation as a service and work on ideas from scratch and take it to the Product Market Fit stage using Design Thinking->Lean Execution->Agile Methodology.

We were featured as one of Deloitte's Fastest 50 growing tech companies from India thrice (2016, 2018 and 2019). We were also featured in Deloitte Technology Fast 500 Asia Pacific both in 2016 and 2018.

We are located in Hyderabad, India, and Dallas, US. We have recently incorporated another office in Waterloo, Canada.

Our founders have had past successes - founded a decision management company acquired by SAP AG (now part of Hana Big data stack & NetWeaver BPM), the early engineering team of Zoho (a leading billion $ SaaS player) & some Private Equity experience.


Marquee customers along with some exciting start-ups are part of our clientele.


at RaRa Now

Posted by N SHUBHANGINI
Remote only
3 - 5 yrs
₹7L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

About RARA NOW :

  • RaRa Now is revolutionizing instant delivery for e-commerce in Indonesia through data-driven logistics.

  • RaRa Now is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimization technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on "one-to-one" deliveries, the company has developed proprietary, real-time batching tech to do "many-to-many" deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan, and many more.

  • We are a distributed team with the company headquartered in Singapore, core operations in Indonesia, and a technology team based out of India.

Future of eCommerce Logistics :

  • Data driven logistics company that is bringing in same-day delivery revolution in Indonesia

  • Revolutionizing delivery as an experience

  • Empowering D2C Sellers with logistics as the core technology

About the Role :

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Prior experience working on BigQuery, Redshift or other data warehouses (a short loading sketch follows this list)
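
A hedged sketch of one warehouse-loading step implied above: a Redshift COPY from S3 issued through psycopg2. The cluster endpoint, credentials, IAM role, and table/bucket names are all illustrative assumptions.

import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="..."
)
copy_sql = """
    COPY deliveries
    FROM 's3://example-bucket/deliveries/2023-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""
with conn, conn.cursor() as cur:   # commits on success
    cur.execute(copy_sql)
conn.close()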
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem solving: resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software development concepts: understands and is experienced with the use of a wide range of programming concepts, and is also aware of and has applied a range of algorithms.
• Commercial & risk awareness: able to understand & evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:

• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but learning is essential.
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: understand how to work in a DevOps and agile way / versioning / automation / defect management – mandatory
• Agile methodology: knowledge of Jira
Chennai, Hyderabad
5 - 10 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Bigdata with cloud:

 

Experience : 5-10 years

 

Location : Hyderabad/Chennai

 

Notice period : 15-20 days Max

 

1.  Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight

2.  Experience in developing lambda functions with AWS Lambda (a short Lambda + Athena sketch follows this list)

3.  Expertise with Spark/PySpark – candidate should be hands-on with PySpark code and able to do transformations with Spark

4.  Should be able to code in Python and Scala.

5.  Snowflake experience will be a plus
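
A hedged sketch of point 2 above: a Lambda handler that starts an Athena query over a Glue catalog table. The database, table and results bucket are illustrative assumptions.

import boto3

athena = boto3.client("athena")


def lambda_handler(event, context):
    # Kick off an Athena query; results land in the assumed S3 location.
    resp = athena.start_query_execution(
        QueryString="SELECT COUNT(*) FROM sales_db.orders",  # assumed table
        QueryExecutionContext={"Database": "sales_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return {"query_execution_id": resp["QueryExecutionId"]}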

Remote only
5 - 8 yrs
₹10L - ₹25L / yr
DevOps
Kubernetes
Docker
SAS
Apache Hive
+2 more

Must Have skills :

Experience in Linux Administration

Experience in building, deploying, and monitoring distributed apps using container systems (Docker) and container orchestration (Kubernetes, EKS)

Ability to read and understand code (Java / Python / R / Scala)

Experience with AWS and related tools

 

Nice to have skills:

Experience in SAS Viya administration

Experience managing large Big Data clusters

Experience in Big Data tools like Hue, Hive, Spark, Jupyter, SAS and R-Studio

Mumbai
5 - 6 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
Data modeling
ETL
Agile/Scrum
+7 more

Roles and Responsibilities

Seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. Must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques.

• Extensive experience in providing AWS Cloud solutions to various business use cases.
• Creating star schema data models, performing ETLs and validating results with business representatives (a short sketch follows below).
• Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues.

Job Description:

This position is responsible for the successful delivery of business intelligence information to the entire organization and is experienced in BI development and implementations, data architecture and data warehousing.
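
As a hedged sketch of the star-schema modelling step above, the snippet below issues fact-table DDL to Redshift through the boto3 redshift-data API; the cluster, database, user and columns are illustrative assumptions.

import boto3

rsd = boto3.client("redshift-data")

ddl = """
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id     BIGINT,
    date_key    INT,            -- foreign key to dim_date
    product_key INT,            -- foreign key to dim_product
    amount      DECIMAL(12, 2)
);
"""
resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=ddl,
)
print(resp["Id"])  # statement id, usable to poll describe_statement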

Requisite Qualification

Essential: AWS Certified Database - Specialty, or AWS Certified Data Analytics
Preferred: any other Data Engineer certification

Requisite Experience

Essential: 4-7 yrs of experience
Preferred: 2+ yrs of experience in ETL & data pipelines

Skills Required

• AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch etc.
• Big Data: Databricks, Spark, Glue and Athena
• Expertise in Lake Formation, Python programming, Spark, shell scripting
• Minimum Bachelor's degree with 5+ years of experience in designing, building, and maintaining AWS data components
• 3+ years of experience in data component configuration, related roles and access setup
• Expertise in Python programming
• Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.)
• Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD
• Hands-on ETL development experience, preferably using SSIS
• SQL Server experience required
• Strong analytical skills to solve and model complex business requirements
• Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques

Preferred Skills

• Experience working in the SCRUM environment.
• Experience in administration (Windows/Unix/Network/Database/Hadoop) is a plus.
• Experience in SQL Server, SSIS, SSAS, SSRS.
• Comfortable with creating data models and visualization using Power BI.
• Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools.
• Ability to collaborate on a team with infrastructure, BI report development and business analyst resources, and clearly communicate solutions to both technical and non-technical team members.

Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹25L / yr
Cassandra
Technical Architecture
Debugging
Communication Skills

Cassandra Architecture - 7+ Yrs - Bangalore

• Strong knowledge of Cassandra architecture, including read/write paths, hinted handoffs, read repairs, compaction, cluster/replication strategies, client drivers, caching and GC tuning.
• Experience in writing queries and performance tuning.
• Experience in handling real-time Cassandra clusters, debugging and resolving issues.
• Experience in implementing keyspaces, tables, indexes, security, data models & access administration (a short illustration follows).
• Knowledge of Cassandra backup and recovery.
• Good communication skills.
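
A hedged illustration of the keyspace/table/query work described above, using the DataStax Python driver; the contact point, replication settings and schema are assumptions, not part of the role description.

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # assumption: local contact point
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS metrics
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS metrics.events (
        device_id text, ts timestamp, value double,
        PRIMARY KEY (device_id, ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

rows = session.execute(
    "SELECT ts, value FROM metrics.events WHERE device_id=%s LIMIT 10",
    ("sensor-1",),
)
for row in rows:
    print(row.ts, row.value)
cluster.shutdown()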

Gurugram
8 - 12 yrs
₹20L - ₹30L / yr
Data Analytics
Marketing analytics
SQL
Media Analytics
Digital Analytics
+4 more
Role - Analytics Associate Director

About our Client :-

Our Client is a global data and measurement-driven media agency whose mission is to make brands more valuable to the world. Clients include Google, Flipkart, NBCUniversal, L'Oréal and the Financial Times. The agency is more than 2,000 people strong, manages $4.5B in annualized media spend, and deploys campaigns in 121 markets via 22 offices in APAC, EMEA and the Americas.

About the role :-

Accountable for quantifying and measuring the success of our paid media campaigns and for delivering insights that enable us to innovate the work we deliver at MFG. Leading multi-product projects, developing best practices, being the main point of contact for other teams and direct line management for multiple team members.

Some of the things we’d like you to do -

● Build a deep understanding of marketing plans and their objectives to help Account teams (Activation, Planning, etc) build comprehensive measurement, and test & learn plans
● Play an instrumental role in evolving and designing new, innovative measurement tools. Managing the process through to delivery and take ownership of global roll out
● Recruit, manage and mentor analytical resource(s), ensuring the efficient flow of work through the team, the timely delivery of high-quality outputs and their continuing development as professionals
● Lead the creation of clear, robust and thought-provoking campaign reviews and insights
● Work with Account teams (Activation, Planning, etc) to help define the correct questions to understand correct metrics for quantifying campaign performance
● To help deliver “best in class” analytical capabilities across the agency with the wider Analytics team, including the use of new methods, techniques, tools and systems
● Develop innovative marketing campaigns and assist clients to define objectives
● Develop deep understanding of marketing platform testing and targeting abilities, and act in a consultative capacity in their implementation
● Provide hands-on leadership, mentorship, and coaching in the expert delivery of data strategies, AdTech solutions, audiences solutions and data management solutions to our clients
● Leading stakeholder management on certain areas of the client portfolio
● Coordination and communication with 3rd party vendors to critically assess new/bespoke measurement solutions. Includes development and management of contracts and SOWs.

A bit about yourself -

● 8+ years of experience in a data & insight role; practical experience on how analytical techniques/models are used in marketing. Previous agency, media, or consultancy background is desirable.
● A proven track record in working with a diverse array of clients to solve complex problems and delivering demonstrable business success. Including (but not limited to) the development of compelling and sophisticated data strategies and AdTech / martech strategies to enable
marketing objectives.
● Ideally you have worked with Ad Platforms, DMPs, CDPs, Clean Rooms, Measurement Platforms, Business Intelligence Tools, Data Warehousing and Big Data Solutions to some degree
● 3+ years of management experience and ability to delegate effectively
● Proficiency with systems such as SQL, Social Analytics tools, Python, and ‘R’
● Understand measurement for both Direct Response and Brand Awareness campaigns desired
● Excellent at building and presenting data in a visually engaging and insightful manner that cuts through the noise
● Strong organizational and project management skills including team resourcing
● Strong understanding of what data points can be collected and analyzed in a digital campaign, and how each data point should be analyzed
● Established and professional communication, presentation, and motivational skills
Mumbai, Navi Mumbai, Bengaluru (Bangalore), Pune
5 - 9 yrs
₹10L - ₹35L / yr
Java
Spring Boot
Apache Kafka
RabbitMQ
Cassandra
+3 more

Job Title - Senior Java Developers

Job Description - Backend Engineer - Lead (Java)

Mumbai, India | Engineering Team | Full-time

 

Are you passionate enough to be a crucial part of a highly analytical and scalable user engagement platform?

Are you ready to learn new technologies and willing to step out of your comfort zone to explore and learn new skills?

 

If so, this is an opportunity for you to join a high-functioning team and make your mark on our organisation!

 

The Impact you will create:

  • Build campaign generation services which can send app notifications at a speed of 10 million a minute
  • Dashboards to show Real time key performance indicators to clients
  • Develop complex user segmentation engines which create segments on terabytes of data within a few seconds
  • Building highly available & horizontally scalable platform services for ever-growing data
  • Use cloud based services like AWS Lambda for blazing fast throughput & auto scalability
  • Work on complex analytics on terabytes of data like building Cohorts, Funnels, User path analysis, Recency Frequency & Monetary analysis at blazing speed
  • You will build backend services and APIs to create scalable engineering systems.
  • As an individual contributor, you will tackle some of our broadest technical challenges that requires deep technical knowledge, hands-on software development and seamless collaboration with all functions.
  • You will envision and develop features that are highly reliable and fault tolerant to deliver a superior customer experience.
  • Collaborate with various highly-functional teams in the company to meet deliverables throughout the software development lifecycle.
  • Identify and improvise areas of improvement through data insights and research.

 

What we look for?

  • 5-9 years of experience in backend development and must have worked on Java/shell/Perl/python scripting.
  • Solid understanding of engineering best practices, continuous integration, and incremental delivery.
  • Strong analytical skills, debugging and troubleshooting skills, product line analysis.
  • Follower of agile methodology (Sprint planning, working on JIRA, retrospective etc).
  • Proficiency in usage of tools like Docker, Maven, Jenkins and knowledge on frameworks in Java like spring, spring boot, hibernate, JPA.
  • Ability to design application modules using various concepts like object oriented, multi-threading, synchronization, caching, fault tolerance, sockets, various IPCs, database interfaces etc.
  • Hands-on experience with Redis, MySQL, streaming technologies like Kafka producers/consumers, and NoSQL databases like MongoDB/Cassandra (a short Python illustration of the Kafka pattern follows this list).
  • Knowledge of versioning tools like Git and deployment processes like CI/CD.
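
The posting is Java-centric; purely as a hedged illustration of the Kafka producer/consumer pattern above, here is the same idea in Python via kafka-python (broker address and topic are assumptions).

import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",        # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("campaign-events", {"user_id": 42, "event": "push_sent"})
producer.flush()

consumer = KafkaConsumer(
    "campaign-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.value)
    break  # read one message and stop, for the sake of the demo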

What’s in it for you?

 

  • Immense growth, continuous learning and deliver the best to the top-notch brands
  • Work with some of the most innovative brains
  • Opportunity to explore your entrepreneurial mind-set
  • Open culture where your creative bug gets activated.

 

If this sounds like a company you would like to be a part of, and a role you would thrive in, please don’t hold back from applying! We need your unique perspective for our continued innovation and success!

So let’s converse! Our inquisitive nature is all keen to know more about you.

Skills

Java, MongoDB, Redis, Cassandra, Kafka, RabbitMQ


 

Mumbai
10 - 12 yrs
₹20L - ₹28L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Microservices
+13 more

About the role:

An exciting opportunity to work with an established company in Asia and our global clients. The team needs to expand its capabilities in all things back end, and we especially want to hear from you if you have experience with Java and its associated frameworks. This is a great opportunity for a leader to shine and further their career on a global stage.

 

What YOU do - job description

  • Understanding business requirements and business process of the client’s request
  • Delivery and deployment of features on the server
  • Fix defects requested by clients
  • Writing analysis documents and related documents of the implementation
  • Providing unit testing and integration testing
  • Self-starting on new technologies related to business requirements
  • Work with business and system analysts to design and develop technical requirements.
  • Develop accurate and efficient programs with unit tests according to the requirements.
  • Maintain current knowledge of the standard language, coding conventions, and operations requirements.
  • Build and deploy code to several environments (i.e. Development, SIT, UAT).
  • Support, investigate and analyze the root causes of reported issues/defects/problems of the developed application.
  • Consult with users, analyze requirements and recommend technical specifications.


What YOU Bring - must have qualifications/ skills

  • Bachelor’s Degree in Computer Science, IT or other related fields.
  • 10+ years of technical experience 
  • Experience in Spring Boot or Spring MVC framework is a plus
  • Strong problem-solving skills, good attitude and teamwork.
  • Experience and knowledge in Web service development based on J2EE framework including
    • Java 8 and Java 11
    • Spring Boot, SQL (MySQL, PostGres etc)
    • NoSQL databases (Redis, Cassandra, couchbase etc)
    • Queues (Kafka, PubSub etc), Architecture: SOA
    • Microservices
  • Experience in developing applications with Docker, Kubernetes and cloud-like Google GCP and AWS.
  • Using enterprise-level databases (e.g., Oracle, MSSQL, Postgresql)
  • Using NoSQL databases (e.g., Redis, Cassandra)
  • Well-versed with Git.
  • Strong knowledge of OOP software and REST/SOAP/gRPC web services design and implementation.
  • Fluent in deploying and troubleshooting applications in the Linux OS environment.
  • A full understanding of what the clean code principles mean and the implications of adhering to them.

 

 Good to have qualifications/ skills

  • Experience in developing applications in at least 2 more programming languages, e.g. Node.JS, React Native, Angular, React.JS, Ruby, Golang would be an advantage, including Servlet, Java beans, EJB, JMS, JavaMail, Web Services, HTML, XML, UML etc.
  • Using enterprise-level database (e.g. Oracle, MSSQL) Eclipse, Netbeans or Jetbrain IDE.
  • Basic knowledge of computer networks relating to building web applications (i.e. frequently used protocols in TCP/IP stack such as FTP, SMTP, DNS etc.).
  • Experience in developing software in an Agile process.

 

About Bluebik Tech Center
Bluebik Technology Center (India) Private Limited (Bluebik Tech Center) is a subsidiary of Bluebik Group Public Limited Company, a top leading strategic and digital transformation consulting firm with a net worth of USD180M in Thailand. Striving to serve the high demands of digital transformation from our multi-national clients, Bluebik Tech Center has been established to be the source of state-of-the-art technology development and to produce world-class IT professionals in the world and share innovative technology wisdom. We are keen on innovation and R&D and have unique dynamic training programs to upskill and reskill our talents.

With rapidly expanding business across the globe, we are looking for a candidate who is passionate about building and developing new software products and enhancements by excelling at large-scale applications and frameworks with outstanding analytical and communication skills

Agency job
via Zyoin Web Private Limited by Vishali Vashnavi
Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹50L / yr
Java
J2EE
PostgreSQL
MySQL
MongoDB
+19 more
Requirements:
• B. E. /B. Tech. in Computer Science or MCA from a reputed university.
• 3.5 plus years of experience in software development, with emphasis on JAVA/J2EE Server side
programming.
• Hands on experience in core Java, multithreading, RMI, socket programming, JDBC, NIO, webservices and design patterns.
and design patterns.
• Knowledge of distributed system, distributed caching, messaging frameworks, ESB etc.
• Experience in Linux operating system and PostgreSQL/MySQL/MongoDB/Cassandra database.
• Additionally, knowledge of HBase, Hadoop and Hive is desirable.
• Familiarity with message queue systems and AMQP and Kafka is desirable.
• Experience as a participant in agile methodologies.
• Excellent written and verbal communication skills and presentation skills.
• This is not a full-stack requirement; we are looking for a purely backend expert.
Posted by Ratnakumari Modhalavalasa
Visakhapatnam
3 - 5 yrs
₹2L - ₹4L / yr
Hadoop
Apache Sqoop
Apache Hive
Apache Spark
Apache Pig
+9 more
Position : Data Engineer

Duration : Full Time

Location : Vishakhapatnam, Bangalore, Chennai

years of experience : 3+ years

Job Description :

- 3+ Years of working as a Data Engineer with thorough understanding of data frameworks that collect, manage, transform and store data that can derive business insights.

- Strong communications (written and verbal) along with being a good team player.

- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)

- 2+ years of strong experience with SQL and Python (Data Engineering focused).

- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is an added advantage and preferred (a short BigQuery sketch follows this list).

- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
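
A hedged sketch of the GCP/BigQuery experience mentioned above, using the official Python client; the project, dataset and table are illustrative.

from google.cloud import bigquery

client = bigquery.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

query = """
    SELECT status, COUNT(*) AS n
    FROM `example-project.sales.orders`
    GROUP BY status
"""
for row in client.query(query).result():
    print(row.status, row.n)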

at Shopalyst Technologies

Posted by Hiring Manager
Thiruvananthapuram
1 - 4 yrs
Best in industry
Java
Spring Boot
RESTful APIs
NOSQL Databases
Cassandra
+5 more

About Shopalyst

Shopalyst offers a Discovery Commerce platform for digital marketers. Combining data, AI and deep integrations with digital media and e-commerce platforms, Shopalyst connects people with products they love. More than 500 marquee brands leverage our SaaS platform for data driven marketing and sales in 30 countries across Asia, Europe and Americas. We have offices in Fremont CA, Bangalore, and Trivandrum. Our company is backed by Kalaari Capital.

About the Role: Software Engineer (Backend/Java)

We are currently looking for people to join our Engineering team where internet scale, reliability, security, high performance and self-management drives almost every design decision that we take. This role will be based out of Thiruvananthapuram, Kerala.

We are looking for Software Engineer(s) to help us build functional products and applications. Software Engineer’s responsibilities include participating in software design, writing clean and efficient code adhering to coding standards, guidelines & best practices for various applications, running tests to improve system functionality, performance & security and documenting design and code.

Responsibilities

• Core server-side technology skills; an expert in one or more of:
  • Deep expertise in Java
  • Implementation of REST-based APIs
  • Expertise in Spring/Play
  • Cassandra or other comparable NoSQL data storage

Requirements

• 1-4 years of software engineering experience.
• Managing API Authentication, Authorisation and Auditing
• Python or similar scripts
• Knowledge of Kotlin
• Design of large distributed cache systems
• Proficient understanding of code versioning tools (such as Git, Mercurial or SVN)
• Elasticsearch
• SOLR Search
• Bachelor's in Engineering (Required)

We understand that not all applicants will have skills that match the exact job description. We value diverse experiences in the relevant industry and encourage everyone who meets the required qualifications to apply. If you lack the desired experience, but do have the knowledge and confidence to leave a mark, go ahead and apply.

Additional Notes

At Shopalyst, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business. www.shopalyst.com


at Astegic

Posted by Nikita Pasricha
Remote only
5 - 7 yrs
₹8L - ₹15L / yr
Data engineering
SQL
Relational Database (RDBMS)
Big Data
Scala
+14 more

WHAT YOU WILL DO:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop and AWS 'big data' technologies (EC2, EMR, S3, Athena).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

REQUIRED SKILLS & QUALIFICATIONS:

• 5+ years of experience in a Data Engineer role.
• Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
• Experience building and optimizing 'big data' data pipelines, architectures and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Build processes supporting data transformation, data structures, metadata, dependency and workload management.
• A successful history of manipulating, processing and extracting value from large disconnected datasets.
• Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores (a short Structured Streaming sketch follows).
• Strong project management and organizational skills.
• Experience supporting and working with cross-functional teams in a dynamic environment.
• Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
• Experience with AWS cloud services: EC2, EMR, S3, Athena.
• Experience with Linux.
• Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.

PREFERRED SKILLS & QUALIFICATIONS:

• Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
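
A hedged Structured Streaming sketch for the stream-processing item above; the broker, topic and checkpoint location are assumptions, and the job needs the spark-sql-kafka connector package on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                     # assumed topic
    .load()
)
counts = events.groupBy("key").count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/")
    .start()
)
query.awaitTermination()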

Posted by Ashish Dhyani
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
2 - 8 yrs
₹15L - ₹28L / yr
Data Visualization
Data management
Big Data
Data-flow analysis
Data Science
+6 more
SKIDS Health (https://skids.health) is a highly innovative startup developing pediatric evaluations, clinics and care modules to help Indian parents and children. Our approach leverages first-of-their-kind schools-parents monitoring technologies to evaluate, monitor and guide children's health and wellbeing. Based in Delhi NCR and Bengaluru, SKIDS is founded by serial entrepreneurs with expertise in medical science, machine learning and education technology. SKIDS is backed by marquee investors in Asia and India. We are actively expanding our team and looking for passionate and motivated professionals who are keen to join us in our next phase of growth.
Data Engineer
Location: Gurgaon, India
The Opportunity
We are looking for a data engineer who enjoys solving challenging problems. We are excited about applicants who are creative, meticulous, and looking to learn broadly in a startup. The ideal candidate is dedicated to excellence in the workplace, enjoys collaborating with others, and thrives in a dynamic, fast-paced environment.
You would be expected to:
• Design and implement data pipelines using emerging technologies and tools (see the sketch after this list)
• Implement data and compute solutions in cloud platforms such as AWS
• Engineer data storage solutions for large and noisy datasets
• Work with the team to find optimal, scalable solutions for the business
• Collaborate efficiently with other data engineers, scientists, and technicians
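
One common way to express the pipelines above is an Airflow DAG; Airflow itself and all names below are assumptions for illustration, not something the posting specifies.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw screening data")      # placeholder step


def transform():
    print("clean and aggregate records")  # placeholder step


with DAG(
    dag_id="child_health_pipeline",       # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) \
        >> PythonOperator(task_id="transform", python_callable=transform)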
You would have:
• a degree in computer science or related technical field
• 2-4 years of experience in a similar role
• proficiency in relevant languages and tech (Python, Linux, Docker etc)
• good working knowledge of AWS based cloud architecture
• familiarity with storage design and best practices
• familiarity with standard security protocols and practices
Our company encourages diversity and is an equal opportunity employer
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹15L / yr
Java
MySQL
PostgreSQL
NOSQL Databases
MongoDB
+12 more

Responsibilities:

  • Lead simultaneous development for multiple business verticals.
  • Design & develop highly scalable, reliable, secure, and fault-tolerant systems.
  • Ensure that exceptional standards are maintained in all aspects of engineering.
  • Collaborate with other engineering teams to learn and share best practices.
  • Take ownership of technical performance metrics and strive actively to improve them.
  • Mentor junior members of the team and contribute to code reviews.

 

Requirements:

  • A passion to solve tough engineering/data challenges.
  • Be well versed with cloud computing platforms AWS/GCP
  • Experience with SQL technologies (MySQL, PostgreSQL)
  • Experience working with NoSQL technologies (MongoDB, ElasticSearch)
  • Excellent Programming skills in Python/Java/GoLang
  • Big Data streaming services (Kinesis, Kafka, RabbitMQ)
  • Distributed cache systems (Redis, Memcache); a short caching sketch follows this list
  • Advanced data solutions (BigQuery, RedShift, DynamoDB, Cassandra)
  • Automated testing frameworks and CI/CD pipelines
  • Infrastructure orchestration (Docker/Kubernetes/Nginx)
  • Cloud-native tech like Lambda, ASG, CDN, ELB, SNS/SQS, S3, Route53, SES
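
A hedged sketch of the distributed-cache item above, using redis-py to cache a computed value with a TTL; the key layout and the stand-in "database read" are illustrative.

import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}:profile"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    profile = {"id": user_id, "tier": "gold"}  # stand-in for a DB read
    r.setex(key, 300, json.dumps(profile))     # cache for 5 minutes
    return profile


print(get_user_profile(42))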