Big data Jobs in Hyderabad

37+ Big data Jobs in Hyderabad | Big data Job openings in Hyderabad

Apply to 37+ Big data Jobs in Hyderabad on CutShort.io. Explore the latest Big data Job opportunities across top companies like Google, Amazon & Adobe.

PloPdo
Chandan Nadkarni
Posted by Chandan Nadkarni
Hyderabad
3 - 12 yrs
₹22L - ₹25L / yr
Cassandra
Data modeling

Responsibilities -

  • Collaborate with the development team to understand data requirements and identify potential scalability issues.
  • Design, develop, and implement scalable data pipelines and ETL processes to ingest, process, and analyze large volumes of data from various sources.
  • Optimize data models and database schemas to improve query performance and reduce latency.
  • Monitor and troubleshoot the performance of our Cassandra database on Azure Cosmos DB, identifying bottlenecks and implementing optimizations as needed.
  • Work with cross-functional teams to ensure data quality, integrity, and security.
  • Stay up to date with emerging technologies and best practices in data engineering and distributed systems.
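The data-modeling and partition-design work described above can be illustrated with a small, self-contained sketch (pure Python, no Cassandra driver assumed; the sensor/day-bucket schema is invented for illustration):

```python
from datetime import datetime, timezone

# Hypothetical sketch: derive a (sensor_id, day_bucket) partition key so that
# time-series writes spread across bounded partitions instead of one
# ever-growing row -- a common Cassandra query-first modeling technique.
def partition_key(sensor_id: str, ts: datetime) -> tuple:
    day_bucket = ts.strftime("%Y-%m-%d")  # one partition per sensor per day
    return (sensor_id, day_bucket)

def group_by_partition(events):
    """Group (sensor_id, timestamp, value) events by their partition key."""
    partitions = {}
    for sensor_id, ts, value in events:
        partitions.setdefault(partition_key(sensor_id, ts), []).append(value)
    return partitions

events = [
    ("s1", datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc), 10.5),
    ("s1", datetime(2024, 5, 1, 17, 30, tzinfo=timezone.utc), 11.2),
    ("s1", datetime(2024, 5, 2, 9, 0, tzinfo=timezone.utc), 9.8),
]
parts = group_by_partition(events)
```

Bucketing by day keeps each partition bounded in size, which is the usual advice for time-series workloads on Cassandra and Cosmos DB alike.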


Qualifications & Requirements -

  • Proven experience as a Data Engineer or similar role, with a focus on designing and optimizing large-scale data systems.
  • Strong proficiency in working with NoSQL databases, particularly Cassandra.
  • Experience with cloud-based data platforms, preferably Azure Cosmos DB.
  • Solid understanding of distributed systems, data modeling, data warehouse design, and ETL processes.
  • Detailed understanding of Software Development Life Cycle (SDLC) is required.
  • Good to have: knowledge of a visualization tool such as Power BI or Tableau.
  • Good to have: knowledge of the SAP landscape (SAP ECC, SLT, BW, HANA, etc.).
  • Good to have: experience on a data migration project.
  • Knowledge of Supply Chain domain would be a plus.
  • Familiarity with software architecture (data structures, data schemas, etc.)
  • Familiarity with Python programming language is a plus.
  • The ability to work in a dynamic, fast-paced work environment.
  • A passion for data and information with strong analytical, problem solving, and organizational skills.
  • Self-motivated with the ability to work under minimal direction.
  • Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.


Read more
TensorGo Software Private Limited
Deepika Agarwal
Posted by Deepika Agarwal
Hyderabad
7 - 12 yrs
₹15L - ₹15L / yr
Engineering Management
Java
NodeJS (Node.js)
Microservices
Big Data
+4 more

Role & Responsibilities

  1. Create innovative architectures based on business requirements.
  2. Design and develop cloud-based solutions for global enterprises.
  3. Coach and nurture engineering teams through feedback, design reviews, and best practice input.
  4. Lead cross-team projects, ensuring resolution of technical blockers.
  5. Collaborate with internal engineering teams, global technology firms, and the open-source community.
  6. Lead initiatives to learn and apply modern and advanced technologies.
  7. Oversee the launch of innovative products in high-volume production environments.
  8. Develop and maintain high-quality software applications using JS frameworks (React, Node.js, NPM, etc.).
  9. Utilize design patterns for backend technologies and ensure strong coding skills.
  10. Deploy and manage applications on AWS cloud services, including ECS (Fargate), Lambda, and load balancers. Work with Docker to containerize services.
  11. Implement and follow CI/CD practices using GitLab for automated build, test, and deployment processes.
  12. Collaborate with cross-functional teams to design technical solutions, ensuring adherence to Microservice Design patterns and Architecture.
  13. Apply expertise in Authentication & Authorization protocols (e.g., JWT, OAuth), including certificate handling, to ensure robust application security.
  14. Utilize databases such as Postgres, MySQL, Mongo and DynamoDB for efficient data storage and retrieval.
  15. Demonstrate familiarity with Big Data technologies, including but not limited to:

- Apache Kafka for distributed event streaming.

- Apache Spark for large-scale data processing.

- Containers for scalable and portable deployments.
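As an illustration of the distributed event streaming mentioned above: keyed partitioning is the mechanism that gives Kafka per-key ordering. The sketch below mimics the idea in pure Python (Kafka's real default partitioner uses murmur2 hashing, not MD5; this is only conceptual):

```python
import hashlib

# Conceptual sketch of keyed partitioning: records with the same key always
# land on the same partition, so consumers see them in order per key.
def partition_for(key: str, num_partitions: int) -> int:
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

p1 = partition_for("order-42", 12)
p2 = partition_for("order-42", 12)  # same key -> same partition, every time
```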


Technical Skills:

  1. 7+ years of hands-on development experience with JS frameworks, specifically MERN.
  2. Strong coding skills in backend technologies using various design patterns.
  3. Strong UI development skills using React.
  4. Expert in containerization using Docker.
  5. Knowledge of cloud platforms, specifically OCI, and familiarity with serverless technology, services like ECS, Lambda, and load balancers.
  6. Proficiency in CI/CD practices using GitLab or Bamboo.
  7. Strong knowledge of Microservice Design patterns and Architecture.
  8. Expertise in authentication and authorization protocols such as JWT and OAuth, including certificate handling.
  9. Experience working with high-volume streaming media data.
  10. Experience working with databases such as Postgres, MySQL, and DynamoDB.
  11. Familiarity with Big Data technologies related to Kafka, PySpark, Apache Spark, Containers, etc.
  12. Experience with container Orchestration tools like Kubernetes.
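The JWT expertise listed above comes down to signing and verifying a `header.payload.signature` string. A minimal HS256 sketch using only the standard library (for illustration only; production code should use a vetted JWT library and also validate claims such as `exp`):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # constant-time comparison to avoid timing attacks
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user-1"}, b"secret")
ok = verify_jwt(token, b"secret")
bad = verify_jwt(token, b"wrong")
```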


Read more
Hyderabad
8 - 10 yrs
₹13L - ₹15L / yr
SQL server
Oracle
Cassandra
Terraform
Shell Scripting
+3 more

Role: Oracle DBA Developer


Location: Hyderabad


Required Experience: 8 + Years


Skills: DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL Server, Cassandra, Oracle SQL/PLSQL, MySQL/Oracle/MSSQL/Mongo/Cassandra, security measure configuration




Roles and Responsibilities:


 


1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra


2. DBA experience in an SRE environment will be an advantage.


3. Experience in automation/building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).


4. Analyze and implement best practices for cloud databases and their components.


5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins, etc.) that provide safe self-service capabilities to the engineering teams.


6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.


7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.


8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)


9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.


10. Recommend query and schema changes to optimize the performance of database queries.
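A first-pass heuristic for the query-optimization work in item 10 might look like this sketch (the statistics and thresholds below are invented for illustration; real numbers would come from views such as Oracle's v$sql or PostgreSQL's pg_stat_statements):

```python
# Invented query statistics for illustration: a high scanned/returned ratio
# combined with high latency often signals a full scan that wants an index.
query_stats = [
    {"sql_id": "q1", "avg_ms": 950.0, "rows_scanned": 2_000_000, "rows_returned": 12},
    {"sql_id": "q2", "avg_ms": 4.0,   "rows_scanned": 40,        "rows_returned": 40},
]

def index_candidates(stats, latency_ms=100.0, scan_ratio=1000):
    """Flag queries whose latency and scan ratio both exceed the thresholds."""
    flagged = []
    for s in stats:
        ratio = s["rows_scanned"] / max(s["rows_returned"], 1)
        if s["avg_ms"] > latency_ms and ratio > scan_ratio:
            flagged.append(s["sql_id"])
    return flagged

flagged = index_candidates(query_stats)
```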


11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.


12. Have experience with cloud database such as SQL server, Oracle, Cassandra


13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)


14. Have excellent written and verbal English communication skills.


15. Plan, manage, and scale data stores to ensure a business's complex data requirements are met and its data can be accessed in a fast, reliable, and safe manner.


16. Ensure the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructure.


17. Protect data through rigorous testing of backup and recovery processes and frequent auditing of well-regulated security procedures.


18. Use software and tooling to automate manual tasks so engineers can move fast without the concern of losing data during their experiments.


19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which to automate.


20. Bachelor's Degree in a technical discipline required.


21. DBA Certifications required: Oracle, SQLServer, Cassandra (2 or more)


22. Cloud and DevOps certifications will be an advantage.


 


Must have Skills:


 


  • Oracle DBA with development

  • SQL

  • DevOps tools

  • Cassandra






Read more
Quadratic Insights
Praveen Kondaveeti
Posted by Praveen Kondaveeti
Hyderabad
7 - 10 yrs
₹15L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

About Quadratyx:

We are a global product-centric insight & automation services company. We help the world’s organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing, and Advanced Analytics. Quadratyx boasts more extensive experience in data sciences & analytics than most other companies in India.

We firmly believe in Excellence Everywhere.


Job Description

Purpose of the Job/ Role:

• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.


Key Requisites:

• Expertise in Data structures and algorithms.

• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.

• Scaling of cloud-based infrastructure.

• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.

• Leading and mentoring a team of data engineers.

• Hands-on experience in test-driven development (TDD).

• Expertise in NoSQL databases such as MongoDB and Cassandra (MongoDB preferred), and strong knowledge of relational databases.

• Good knowledge of Kafka and Spark Streaming internal architecture.

• Good knowledge of any Application Servers.

• Extensive knowledge of big data platforms such as Hadoop and Hortonworks.

• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, and Azure.


Skills/ Competencies Required

Technical Skills

• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python, or Java.

• Clear end-to-end experience in designing, programming, and implementing large software systems.

• Passion and analytical abilities to solve complex problems.


Soft Skills

• Always speaking your mind freely.

• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.

• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.


Academic Qualifications & Experience Required

Required Educational Qualification & Relevant Experience

• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.

• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).

Read more
Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark responsibilities:

  • At least 3 to 4 years of relevant experience as a Big Data Engineer.
  • Minimum 1 year of relevant hands-on experience with the Spark framework.
  • Minimum 4 years of application development experience using a programming language such as Scala, Java, or Python.
  • Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, or Impala.
  • Strong programming experience building applications/platforms using Scala, Java, or Python.
  • Experienced in implementing Spark RDD transformations and actions to implement business analysis.
  • An efficient interpersonal communicator with sound analytical, problem-solving, and management capabilities.
  • Strives to keep the slope of the learning curve high; able to quickly adapt to new environments and technologies.
  • Good knowledge of the agile methodology of software development.
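The Spark RDD transformations and actions mentioned above (flatMap, map, reduceByKey, collect) can be mirrored in plain Python to show the shape of a word count; PySpark itself is not assumed available here:

```python
from functools import reduce
from itertools import groupby

# Plain-Python analogue of a Spark RDD word count:
#   rdd.flatMap(split).map(lambda w: (w, 1)).reduceByKey(add).collect()
lines = ["spark makes big data simple", "big data big wins"]

pairs = [(word, 1) for line in lines for word in line.split()]  # flatMap + map
pairs.sort(key=lambda kv: kv[0])                                # "shuffle" by key
counts = {
    key: reduce(lambda a, b: a + b, (v for _, v in group))      # reduceByKey
    for key, group in groupby(pairs, key=lambda kv: kv[0])
}
```

In real Spark the shuffle and reduce happen in parallel across executors; the point here is only the transformation/action pattern.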
Read more
Hyderabad
7 - 12 yrs
₹12L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Skills

  • Proficient experience of at least 7 years with Hadoop.
  • Hands-on experience of at least 2 years with AWS - EMR/S3 and other AWS services and dashboards.
  • Good experience of at least 2 years with the Spark framework.
  • Good understanding of the Hadoop ecosystem, including Hive, MapReduce, Spark, and Zeppelin.
  • Responsible for troubleshooting and recommendations for Spark and MR jobs; should be able to use existing logs to debug issues.
  • Responsible for implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning, and troubleshooting; triage production issues with other operational teams when they occur.
  • Hands-on experience troubleshooting incidents: formulating theories, testing hypotheses, and narrowing down possibilities to find the root cause.
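The log-driven debugging described above usually starts with a scan for failure signatures. A toy sketch (the log lines are invented; real triage would read actual YARN/Spark executor logs):

```python
import re

# Invented YARN/Spark-style log lines for illustration. Exit code 137
# typically means the container was OOM-killed.
log_lines = [
    "24/05/01 10:02:11 INFO TaskSetManager: Finished task 3.0 in stage 1.0",
    "24/05/01 10:02:15 ERROR YarnScheduler: Lost executor 7: Container killed on request. Exit code is 137",
    "24/05/01 10:02:16 WARN TaskSetManager: Lost task 5.0 in stage 1.0",
]

def failed_executors(lines):
    """Return executor ids that appear in ERROR 'Lost executor' lines."""
    pattern = re.compile(r"ERROR .*Lost executor (\d+)")
    return [m.group(1) for line in lines if (m := pattern.search(line))]

lost = failed_executors(log_lines)
```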
Read more
Chennai, Hyderabad
5 - 10 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Big Data with cloud:

 

Experience : 5-10 years

 

Location : Hyderabad/Chennai

 

Notice period : 15-20 days Max

 

1.  Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight

2.  Experience in developing lambda functions with AWS Lambda

3.  Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark

4.  Should be able to code in Python and Scala.

5.  Snowflake experience will be a plus
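For point 2 above, an AWS Lambda function reduces to a handler; the S3-style event shape below is assumed for illustration, and the handler can be exercised locally without AWS:

```python
import json

# Minimal AWS Lambda handler sketch. The S3-like event shape is an assumption
# for illustration; a real Glue/Athena pipeline would be triggered by actual
# S3 or EventBridge events and would do real processing per object key.
def handler(event, context):
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}

# Local invocation with a fabricated event, as one would do in a unit test.
result = handler(
    {"Records": [{"s3": {"object": {"key": "raw/2024/05/01/data.parquet"}}}]},
    None,
)
```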

Read more
Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
+9 more
  1. Data Engineer

 Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON

Mandatory Requirements  

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting language - Python & PySpark

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS 
  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies 
  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 
  • Develop automated data quality check to make sure right data enters the platform and verifying the results of the calculations 
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 
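The automated data-quality check in the responsibilities above can be as simple as rule-based row validation; the column names and rules here are invented for illustration:

```python
# Hypothetical validation rules: each column maps to a predicate that a row
# must satisfy before it is allowed into the platform.
RULES = {
    "customer_id": lambda v: v is not None,
    "loan_amount": lambda v: v is not None and 0 < v < 10_000_000,
}

def quality_check(rows):
    """Split rows into (good, bad) according to RULES."""
    good, bad = [], []
    for row in rows:
        if all(rule(row.get(col)) for col, rule in RULES.items()):
            good.append(row)
        else:
            bad.append(row)
    return good, bad

good, bad = quality_check([
    {"customer_id": "c1", "loan_amount": 25_000},
    {"customer_id": None, "loan_amount": 25_000},   # missing id -> rejected
    {"customer_id": "c3", "loan_amount": -5},       # out of range -> rejected
])
```

In a real pipeline the rejected rows would be routed to a quarantine location and surfaced in monitoring rather than silently dropped.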

QUALIFICATIONS 

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of one language: Java, Scala, Python, or C# 
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with
  • Data mining/programming tools (e.g. SAS, SQL, R, Python)
  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.

  

Familiarity and experience in the following is a plus:  

  • AWS certification
  • Spark Streaming 
  • Kafka Streaming / Kafka Connect 
  • ELK Stack 
  • Cassandra / MongoDB 
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Read more
Impetus

at Impetus

3 recruiters
Agency job
via Impetus by Gangadhar TM
Bengaluru (Bangalore), Pune, Hyderabad, Indore, Noida, Gurugram
10 - 16 yrs
₹30L - ₹50L / yr
Big Data
Data Warehouse (DWH)
Product Management

Job Title: Product Manager

 

Job Description

Bachelor’s or master’s degree in computer science or equivalent experience.
Worked as Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the Cloud (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
 

*Data Ingestion, Processing, and Orchestration knowledge

 

Roles & Responsibilities


Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Market and competition analysis activities.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective

 

Mandatory Skills

Product Management, DWH, Big Data

Read more
Product and Service based company
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snow flake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting language - Python & PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies 

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality check to make sure right data enters the platform and verifying the results of the calculations 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.

 

QUALIFICATIONS

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of one language: Java, Scala, Python, or C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with

  • Data mining/programming tools (e.g. SAS, SQL, R, Python)

  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

  • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Read more
Bengaluru (Bangalore), Pune, Hyderabad
4 - 6 yrs
₹6L - ₹22L / yr
Apache HBase
Apache Hive
Apache Spark
Go Programming (Golang)
Ruby on Rails (ROR)
+5 more
We urgently require a Hadoop Developer for a reputed MNC.

Location: Bangalore/Pune/Hyderabad/Nagpur

4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume, and/or Mahout
- Experience with NoSQL databases - HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background in configuration management/ticketing systems like Maven/Ant/JIRA
- Knowledge of any data integration and/or EDW tools is a plus
- Good to have knowledge of Python/Perl/Shell

 

Please note - HBase, Hive, and Spark are a must.

Read more
Impetus Technologies

at Impetus Technologies

1 recruiter
Gangadhar T.M
Posted by Gangadhar T.M
Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida
10 - 17 yrs
₹25L - ₹50L / yr
Product Management
Big Data
Data Warehouse (DWH)
ETL
Hi All,
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, strong stakeholder management, and presentation skills.
Read more
Picture the future
Agency job
via Jobdost by Sathish Kumar
Hyderabad
4 - 7 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 
  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies 
  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 
  • Develop automated data quality check to make sure right data enters the platform and verifying the results of the calculations 
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 

 

QUALIFICATIONS

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of one language: Java, Scala, Python, or C# 
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with
  • Data mining/programming tools (e.g. SAS, SQL, R, Python)
  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.


Mandatory Requirements 

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting language - Python & PySpark

 

Read more
Hyderabad
4 - 7 yrs
₹12L - ₹28L / yr
Python
Spark
Big Data
Hadoop
Apache Hive
Must have:

  • At least 4 to 7 years of relevant experience as Big Data Engineer
  • Hands-on experience in Scala or Python
  • Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, and Impala.
  • Strong programming experience in building applications/platform using Scala or Python.
  • Experienced in implementing Spark RDD Transformations, actions to implement business analysis


We specialize in productizing solutions built on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether services, products, or solutions.
Read more
Silverlabs India Private Limited
Ruchi  Sharma
Posted by Ruchi Sharma
Remote, Hyderabad
3 - 7 yrs
₹10L - ₹30L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
Amazon Web Services (AWS)
+2 more
Back-end Developer
At Rizzle, we are building the World's #1 Short Videos Platform and are working towards building a team that is deeply motivated to make that happen. Rizzle is a community for people to talk, react, perform, or create a new show. Talk about life, relationships, ideas big and small, anything at all - Rizzle is the short video platform you've been waiting for!

We're passionate about connecting people in interactive ways, the way life should be. We're obsessed with building positive communities and providing people the right tools to keep interactions positive, and are dual-homed in San Francisco and Hyderabad (India).

Responsibilities:
• Work on an agile engineering team, writing maintainable and scalable code for software components and influencing team decisions.
• Collaborate with stakeholders to imagine, design, develop, test, and launch software.
• Independently clarify technical requirements, assess development estimates, and apply a broad range of design approaches.
• Review code of other team members and provide constructive direction.
• Drive continuous improvement of software quality and maintainability of products/features.
• Continuously learn technology trends, tools, and approaches, and share this knowledge with your team.
• Mentor and lead developers by cultivating curiosity and deep technical understanding.

Requirements:
• Experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems.
• A Bachelor's degree in Computer Science or an equivalent combination of technical education and work experience.
• 3 to 6 years of software development experience.
• Experience designing highly interactive web applications with performance, scalability, accessibility, usability, design, and security in mind. If you don't have all of these, that's OK.
• Strong coding skills.
• Solid software development background, including design patterns, data structures, and test-driven development.
• Experience with distributed systems, algorithms, and relational and NoSQL databases.
• Familiarity with building complex web applications.
• Software development experience building highly scalable applications.
• Any previous experience working with Node.js, Redis, FFMPEG, MongoDB, ElasticSearch, Cassandra, Kafka, or AWS is a plus.

Why choose us?
• Enjoy a start-up culture where you learn and grow along with the organization. We know when to work hard and play hard!
• We value your time off, so we have an unlimited leaves policy.
• We also keep you covered with health insurance.
• You wear what you are most comfortable in (yes, you heard it right); lunch is on us (every day!), and you also enjoy the liberty to choose your working hours.
Join us if you want to work with a flat and collaborative team and contribute to the team's learnings. We want to learn from you too!

What you can expect in the interview process:
• Initial screening with HR
• Technical Interview I
• Technical Interview II
• HR Discussion
Read more
Hyderabad
6 - 8 yrs
₹8L - ₹15L / yr
Big Data
Apache Kafka
Kibana
Elastic Search
Logstash
Passionate data engineer with the ability to manage data coming from different sources.
Should design and operate data pipelines.
Build and manage an analytics platform using Elasticsearch, Redshift, and MongoDB.
Strong programming fundamentals in data structures and algorithms.
Read more
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹2.5L - ₹4L / yr
SQL
Data engineering
Big Data
Python
● Hands-on work experience as a Python Developer
● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn & upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Write effective, scalable code
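The SQL and object-relational mapping (ORM) skills listed above can be sketched with the standard library's sqlite3 module - a toy mapping for illustration, not a real ORM like Django's:

```python
import sqlite3

# Toy object-relational mapping: a User object is stored to and loaded from
# a relational table, using parameterized queries throughout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

class User:
    def __init__(self, id, name):
        self.id, self.name = id, name

def save(user):
    conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", (user.id, user.name))

def find(user_id):
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return User(*row) if row else None

save(User(1, "Asha"))
found = find(1)
```

A real ORM adds sessions, migrations, and relationship mapping on top of exactly this round trip between rows and objects.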
Read more
Hyderabad
8 - 10 yrs
₹20L - ₹35L / yr
DevOps
Docker
Kubernetes
Terraform
Amazon Web Services (AWS)
+18 more
Sr Cloud & DevOps Engineer

We are looking for a self-motivated and goal-oriented candidate to lead in architecting, developing, deploying, and maintaining first-class, highly scalable, highly available SaaS platforms.

This is a very hands-on role.  You will have a significant impact on Wenable's success.

Technical Requirements:

    8+ years SaaS and Cloud Architecture and Development with frameworks such as:
        - AWS, Google Cloud, Azure, and/or others
        - Kafka, RabbitMQ, Redis, MongoDB, Cassandra, ElasticSearch
        - Docker, Kubernetes, Helm, Terraform, Mesos, VMs, and/or similar orchestration, scaling, and deployment frameworks
        - ProtoBufs, JSON modeling
        - CI/CD utilities like Jenkins, CircleCI, etc.
        - Log aggregation systems like Graylog or ELK
        - Additional development tools typically used in orchestration and automation, like Python, Shell, etc.
        - Strong security best practices background
        - Strong software development background is a plus

Leadership Requirements:
    
    - Strong written and verbal skills.  This role will entail significant coordination both internally and externally.
    - Ability to lead projects of blended teams, on/offshore, of various sizes.
    - Ability to report to executive and leadership teams.
    - Must be data driven, and objective/goal oriented.
Read more
Fragma Data Systems

at Fragma Data Systems

8 recruiters
Evelyn Charles
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹3L - ₹3.5L / yr
SQL
Data engineering
Data Engineer
Python
Big Data
+1 more
Strong Programmer with expertise in Python and SQL
 
● Hands-on Work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django,
Flask or Pyramid)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn & upgrade to Big Data and cloud technologies
like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Write effective, scalable code
Read more
UAE Client
Remote, Bengaluru (Bangalore), Hyderabad
6 - 10 yrs
₹15L - ₹22L / yr
Informatica
Big Data
SQL
Hadoop
Apache Spark
+1 more

Skills- Informatica with Big Data Management

 

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience in Hadoop, Spark, etc.
Read more
DataMetica

at DataMetica

1 video
7 recruiters
Nikita Aher
Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role : Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location : India-Pune, Hyderabad

Experience : 7 - 12 Years

Management Level: 7

Joining Time: Immediate Joiners are preferred


  • Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Experience in Solution Design and Solution Architecture for building and implementing Big Data projects on-premises and on cloud
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture and delivery planning activities for data-intensive and high-throughput platforms and applications
  • Ability to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind
  • Develop, construct, test and maintain architectures, and run sprints for development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types, i.e. design, development, implementation and migration of functional analytics models/business logic across architecture approaches
  • Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics code using any cloud platform


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

Read more
DataMetica

at DataMetica

1 video
7 recruiters
Nikita Aher
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java
+1 more

Summary
Our Kafka developer has a combination of technical skills, communication skills and business knowledge. The developer should be able to work on multiple medium to large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and Enterprise Data Warehousing (preferably GCP BigQuery or an equivalent cloud EDW), and will also be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement and maintenance of data applications both as an individual contributor and as a lead.
  • Leading in the identification, isolation, resolution and communication of problems within the production environment.
  • Leading developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and a cloud Enterprise Data Warehouse: Google BigQuery (preferred), AWS Redshift or Snowflake (optional)
  • Design and recommend the best approach for data movement from different sources to the cloud EDW using Apache/Confluent Kafka
  • Performs independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at various levels, from senior management to detailed developer to business SME, for project definition.
  • Works on multiple platforms and multiple projects concurrently.
  • Performs code and unit testing for complex scope modules, and projects
  • Provide expertise and hands on experience working on Kafka connect using schema registry in a very high volume environment (~900 Million messages)
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams and Kafka Control Center.
  • Provide expertise and hands on experience working on AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands on experience working on Kafka connectors such as MQ connectors, Elastic Search connectors, JDBC connectors, File stream connector,  JMS source connectors, Tasks, Workers, converters, Transforms.
  • Provide expertise and hands on experience on custom connectors using the Kafka core concepts and API.
  • Working knowledge on Kafka Rest proxy.
  • Ensure optimum performance, high availability and stability of solutions.
  • Create topics, setup redundancy cluster, deploy monitoring tools, alerts and has good knowledge of best practices.
  • Create stubs for producers, consumers and consumer groups for helping onboard applications from different languages/platforms.  Leverage Hadoop ecosystem knowledge to design, and develop capabilities to deliver our solutions using Spark, Scala, Python, Hive, Kafka and other things in the Hadoop ecosystem. 
  • Use automation tools like provisioning using Jenkins, Udeploy or relevant technologies
  • Ability to perform data related benchmarking, performance analysis and tuning.
  • Strong skills in In-memory applications, Database Design, Data Integration.
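The connector duties listed above revolve around Kafka Connect configurations. As a hedged sketch, the snippet below builds the typical shape of a JDBC source connector config in Python and serializes it to JSON; the connector name, JDBC URL and topic prefix are hypothetical, and a real config would be POSTed to the Connect REST API rather than printed.

```python
import json

# Hedged sketch: the shape of a Kafka Connect JDBC source connector
# configuration. Connector name, connection URL, column and topic prefix
# are hypothetical examples, not a working deployment.
connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db-host:5432/shop",
        # Incremental mode: only rows with a growing id column are re-read.
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "topic.prefix": "pg-",
        "tasks.max": "1",
        # Converters decide the wire format; Avro with a schema registry
        # is common in high-volume deployments like the one described above.
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
    },
}

payload = json.dumps(connector, indent=2)
print(payload)
```

Swapping the converter entries for `JsonConverter` or `StringConverter` changes the serialized message format without touching the source logic, which is why converter choice is listed as a separate skill.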
Read more
DataMetica

at DataMetica

1 video
7 recruiters
Sumangali Desai
Posted by Sumangali Desai
Pune, Hyderabad
7 - 12 yrs
₹7L - ₹20L / yr
Apache Spark
Big Data
Spark
Scala
Hadoop
+3 more
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead who has a passion for cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience : 7+ years
Location : Pune / Hyderabad
Skills :
  • Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Participate and contribute in Solution Design and Solution Architecture for implementing Big Data Projects on-premise and on cloud
  • Technical Hands on experience in design, coding, development and managing Large Hadoop implementation
  • Proficient in SQL, Hive, Pig, Spark SQL, Shell Scripting, Kafka, Flume and Sqoop on large Big Data and Data Warehousing projects, with a Java, Python or Scala based Hadoop programming background
  • Proficient with various development methodologies like waterfall, agile/scrum and iterative
  • Good Interpersonal skills and excellent communication skills for US and UK based clients

About Us!
A global Leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle –
Data warehouse Assessment & Migration Planning Product
Raven –
Automated Workload Conversion Product
Pelican -
Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over these years are the key factors in achieving our success.

Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com
Read more
Surplus Hand
Agency job
via SurplusHand by Anju John
Remote, Hyderabad
3 - 5 yrs
₹10L - ₹14L / yr
Apache Hadoop
Apache Hive
PySpark
Big Data
Java
+3 more
Tech Skills:
• Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
• Good hands-on experience with Spark (Spark with Java / PySpark)
• Hive
• Strong SQL skills (Spark SQL / HiveQL)
• Application design, software development and automated testing
Environment Experience:
• Experience with implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, Automated testing, and Junit.
• Demonstrated experience with Agile or other rapid application development methods
• Cloud development (AWS/Azure/GCP)
• Unix / Shell scripting
• Web services , open API development, and REST concepts
Read more
Persistent Systems

at Persistent Systems

1 video
1 recruiter
Agency job
via Milestone Hr Consultancy by Haina khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
+1 more
Greetings!

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune / Nagpur, Goa, Hyderabad / Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Creating spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in spark job performance tuning and optimizations.
  • Should have experience in processing data using Kafka/Python.
  • The individual should have experience and understanding in configuring Kafka topics to optimize performance.
  • Should be proficient in writing SQL queries to process data in Data Warehouse.
  • Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience on AWS services like EMR.
Read more
SpringML

at SpringML

1 video
2 recruiters
Kayal Vizhi
Posted by Kayal Vizhi
Hyderabad
4 - 11 yrs
₹8L - ₹20L / yr
Big Data
Hadoop
Apache Spark
Spark
Data Structures
+3 more

SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. Your primary role will be to design and build data pipelines. You will be focused on helping client projects with data integration, data prep and implementing machine learning on datasets. In this role, you will work on some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.

RESPONSIBILITIES:

 

  • Ability to work as a member of a team assigned to design and implement data integration solutions.
  • Build Data pipelines using standard frameworks in Hadoop, Apache Beam and other open-source solutions.
  • Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
  • Propose design solutions and recommend best practices for large scale data analysis

 

SKILLS:

 

  • B.Tech degree in computer science, mathematics or other relevant fields.
  • 4+ years of experience in ETL, Data Warehousing, Visualization and building data pipelines.
  • Strong programming skills – experience and expertise in one of the following: Java, Python, Scala, C.
  • Proficient in big data / distributed computing frameworks such as Apache Spark and Kafka.
  • Experience with Agile implementation methodologies
Read more
MNC

at MNC

Agency job
via Fragma Data Systems by geeti gaurav mohanty
Bengaluru (Bangalore), Hyderabad
3 - 6 yrs
₹10L - ₹15L / yr
Big Data
Spark
ETL
Apache
Hadoop
+2 more
Desired Skill, Experience, Qualifications, and Certifications:
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift and Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & Streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java, Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in Requirements gathering, Design & Development
• Working with cross-functional teams to meet strategic goals.
• Experience in high volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman (API testing) and Apache Airflow or similar scheduler experience
• Nice-to-have experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
Read more
SpringML

at SpringML

1 video
4 recruiters
Sai Raj Sampath
Posted by Sai Raj Sampath
Remote, Hyderabad
4 - 9 yrs
₹12L - ₹20L / yr
Big Data
Data engineering
TensorFlow
Apache Spark
Java
+2 more
REQUIRED SKILLS:

• Total of 4+ years of experience in development, architecting/designing and implementing Software solutions for enterprises.

• Must have strong programming experience in either Python or Java/J2EE.

• Minimum of 4+ years’ experience working with various cloud platforms, preferably Google Cloud Platform.

• Experience in Architecting and Designing solutions leveraging Google Cloud products such as Cloud BigQuery, Cloud DataFlow, Cloud Pub/Sub, Cloud BigTable and Tensorflow will be highly preferred.

• Presentation skills with a high degree of comfort speaking with management and developers

• The ability to work in a fast-paced, work environment

• Excellent communication, listening, and influencing skills

RESPONSIBILITIES:

• Lead teams to implement and deliver software solutions for Enterprises by understanding their requirements.

• Communicate efficiently and document the Architectural/Design decisions to customer stakeholders/subject matter experts.

• Opportunity to learn new products quickly and rapidly comprehend new technical areas – technical/functional and apply detailed and critical thinking to customer solutions.

• Implementing and optimizing cloud solutions for customers.

• Migration of Workloads from on-prem/other public clouds to Google Cloud Platform.

• Provide solutions to team members for complex scenarios.

• Promote good design and programming practices with various teams and subject matter experts.

• Ability to work on any product on the Google cloud platform.

• Must be hands-on and be able to write code as required.

• Ability to lead junior engineers and conduct code reviews



QUALIFICATION:

• Minimum B.Tech/B.E Engineering graduate
Read more
INSOFE

at INSOFE

1 recruiter
Nitika Bist
Posted by Nitika Bist
Hyderabad, Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹18L / yr
Big Data
Data engineering
Apache Hive
Apache Spark
Hadoop
+4 more
Roles & Responsibilities:
  • Total Experience of 7-10 years and should be interested in teaching and research
  • 3+ years’ experience in data engineering which includes data ingestion, preparation, provisioning, automated testing, and quality checks.
  • 3+ Hands-on experience in Big Data cloud platforms like AWS and GCP, Data Lakes and Data Warehouses
  • 3+ years of Big Data and Analytics technologies. Experience in SQL and in writing code on the Spark engine using Python, Scala or Java. Experience in Spark, Scala
  • Experience in designing, building, and maintaining ETL systems
  • Experience in data pipeline and workflow management tools like Airflow
  • Application development background, along with knowledge of analytics libraries, open-source Natural Language Processing, and statistical and big data computing libraries
  • Familiarity with Visualization and Reporting Tools like Tableau, Kibana.
  • Should be good at storytelling in Technology
Please note that candidates should be interested in teaching and research work.

Qualification: B.Tech / BE / M.Sc / MBA / B.Sc, Having Certifications in Big Data Technologies and Cloud platforms like AWS, Azure and GCP will be preferred
Primary Skills: Big Data + Python + Spark + Hive + Cloud Computing
Secondary Skills: NoSQL+ SQL + ETL + Scala + Tableau
Selection Process: 1 Hackathon, 1 Technical round and 1 HR round
Benefit: Free of cost training on Data Science from top notch professors
Read more
Dremio

at Dremio

4 recruiters
Kiran B
Posted by Kiran B
Hyderabad, Bengaluru (Bangalore)
15 - 20 yrs
Best in industry
Java
Data Structures
Algorithms
Multithreading
Problem solving
+7 more

About the Role

The Dremio India team owns the DataLake Engine along with the Cloud Infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides various opportunities to learn, deliver, and grow in your career. We are looking for technical leaders with passion and experience in architecting and delivering high-quality distributed systems at massive scale.

Responsibilities & ownership

  • Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
  • Lead and mentor others about concurrency, parallelization to deliver scalability, performance and resource optimization in a multithreaded and distributed environment
  • Propose and promote strategic company-wide tech investments taking care of business goals, customer requirements, and industry standards
  • Lead the team to solve complex, unknown and ambiguous problems, and customer issues cutting across team and module boundaries with technical expertise, and influence others
  • Review and influence designs of other team members 
  • Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
  • Partner with other leaders to nurture innovation and engineering excellence in the team
  • Drive priorities with others to facilitate timely accomplishments of business objectives
  • Perform RCA of customer issues and drive investments to avoid similar issues in future
  • Collaborate with Product Management, Support, and field teams to ensure that customers are successful with Dremio
  • Proactively suggest learning opportunities about new technology and skills, and be a role model for constant learning and growth

Requirements

  • B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
  • Fluency in Java/C++ with 15+ years of experience developing production-level software
  • Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models and their use in developing distributed and scalable systems
  • 8+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
  • Subject Matter Expert in one or more of query processing or optimization, distributed systems, concurrency, micro service based architectures, data replication, networking, storage systems
  • Experience in taking company-wide initiatives, convincing stakeholders, and delivering them
  • Expert in solving complex, unknown and ambiguous problems spanning across teams and taking initiative in planning and delivering them with high quality
  • Ability to anticipate and propose plan/design changes based on changing requirements 
  • Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
  • Passion for learning and delivering using latest technologies
  • Hands-on experience of working projects on AWS, Azure, and GCP 
  • Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure,  and GCP) 
  • Understanding of distributed file systems such as  S3, ADLS or HDFS
  • Excellent communication skills and affinity for collaboration and teamwork

 

Read more
Milestone Hr Consultancy

at Milestone Hr Consultancy

2 recruiters
Jyoti Sharma
Posted by Jyoti Sharma
Remote, Hyderabad
3 - 8 yrs
₹6L - ₹16L / yr
Python
Django
Data engineering
Apache Hive
Apache Spark
We are currently looking for passionate Data Engineers to join our team and mission. In this role, you will help doctors from across the world improve care and save lives by helping extract insights and predict risk. Our Data Engineers ensure that data are ingested and prepared, ready for insights and intelligence to be derived from them. We’re looking for smart individuals to join our incredibly talented team, that is on a mission to transform healthcare.

As a Data Engineer you will be engaged in some or all of the following activities:
• Implement, test and deploy distributed data ingestion, data processing and feature engineering systems computing on large volumes of healthcare data, using a variety of open source and proprietary technologies.
• Design data architectures and schemas optimized for analytics and machine learning.
• Implement telemetry to monitor the performance and operations of data pipelines.
• Develop tools and libraries to implement and manage data processing pipelines, including ingestion, cleaning, transformation, and feature computation.
• Work with large data sets, and integrate diverse data sources, data types and data structures.
• Work with Data Scientists, Machine Learning Engineers and Visualization Engineers to understand data requirements, and translate them into production-ready data pipelines.
• Write and automate unit, functional, integration and performance tests in a Continuous Integration environment.
• Take initiative to find solutions to technical challenges for healthcare data.

You are a great match if you have some or all of the following skills and qualifications:
• Strong understanding of database design and feature engineering to support Machine Learning and analytics.
• At least 3 years of industry experience building, testing and deploying large-scale, distributed data processing systems.
• Proficiency in working with multiple data processing tools and query languages (Python, Spark, SQL, etc.).
• Excellent understanding of distributed computing concepts and Big Data technologies (Spark, Hive, etc.).
• Proficiency in performance tuning and optimization of data processing pipelines.
• Attention to detail and focus on software quality, with experience in software testing.
• Strong cross-discipline communication skills and teamwork.
• Demonstrated clear and thorough logical and analytical thinking, as well as problem solving skills.
• Bachelor or Masters in Computer Science or a related field.

Skills: Apache Spark, Python, Hive, SQL
Responsibility: Sr. Data Engineer
Read more
Qentelli

at Qentelli

5 recruiters
Pratap Garlapati
Posted by Pratap Garlapati
Remote only
4 - 13 yrs
₹5L - ₹15L / yr
PostgreSQL
Relational Database (RDBMS)
Data modeling
Software Development
Big Data

Qentelli is seeking a Solution Architect to untangle and redesign a huge, aging monolithic legacy system. The interesting part is that the new system should be commissioned module by module, with the legacy system phasing off accordingly. So your design will have a cutting-edge future state and a transition state to get there. The implementation now is all Microsoft tech stack and will continue to be on a newer Microsoft tech stack. There is also a critical component of API management to be introduced into the solution. Performance and scalability will be at the center of your solution architecture. Data modelling is one thing that is of super high importance to know.

 

You’ll have a distributed team with onshore in the US and offshore in India. As a Solution Architect, you should be able to wear multiple hats of working with client on solutioning and getting it implemented by engineering and infrastructure teams that are both onshore and offshore. Right candidate will be awesome at fleshing out and documenting every finer detail of the solution, elaborate at communicating with your teams, disciplined at getting it implemented and passionate for client success.

 

TECHNOLOGIES YOU’LL NEED TO KNOW

Greetings from Qentelli Solutions Private Limited!

 

We are hiring for PostgreSQL Developer

Experience: 4 to 12 years

Job Location: Hyderabad

 

Job Description:

  • Experience in RDBMS (PostgreSQL preferred), Database Backend development, Data Modelling, Performance Tuning, exposure to NoSQL DB, Kubernetes or Cloud (AWS/Azure/GCS)

 

Skillset for Developer-II:

  • Experience with any Big Data tools (NiFi, Kafka, Spark, Sqoop, Storm, Snowflake), database backend development, Python, NoSQL DB, API exposure, cloud or Kubernetes exposure

 

Skillset for API Developer:

  • API Development with extensive knowledge on any RDBMS (preferred PostgreSQL), exposure to cloud or Kubernetes
Read more
Indium Software

at Indium Software

16 recruiters
Mohamed Aslam
Posted by Mohamed Aslam
Hyderabad
3 - 7 yrs
₹7L - ₹13L / yr
Python
Spark
SQL
PySpark
HiveQL
+2 more

Indium Software is a niche technology solutions company with deep expertise in Digital, QA and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.

With 1,000+ associates globally, Indium operates through offices in the US, UK and India.

Visit www.indiumsoftware.com to know more.

Job Title: Analytics Data Engineer

What will you do:
The Data Engineer must be an expert in SQL development, providing support to the Data and Analytics team in database design, data flow and analysis activities. The Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing. The Data Engineer defines and builds the data pipelines that will enable faster, better, data-informed decision-making within the business.

We ask:

Extensive experience with SQL and a strong ability to process and analyse complex data

The candidate should also have the ability to design, build, and maintain the business’s ETL pipeline and data warehouse. The candidate will also demonstrate expertise in data modelling and query performance tuning on SQL Server.
Proficiency with analytics, especially funnel analysis, and experience with analytical tools like Mixpanel, Amplitude, ThoughtSpot, Google Analytics, and similar tools.

Should work on tools and frameworks required for building efficient and scalable data pipelines
Excellent at communicating and articulating ideas, with an ability to influence others and to continuously drive towards a better solution.
Experience working in Python, Hive queries, Spark, PySpark, Spark SQL, Presto

  • Relate Metrics to product
  • Programmatic Thinking
  • Edge cases
  • Good Communication
  • Product functionality understanding
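Funnel analysis of the kind mentioned above can be sketched in plain SQL. The example below runs a hypothetical two-step (view, then purchase) funnel against an in-memory SQLite table; event names and data are invented, and a warehouse engine like Presto or Spark SQL would accept essentially the same query.

```python
import sqlite3

# Hedged sketch of a two-step funnel (view -> purchase) in plain SQL,
# run against an in-memory SQLite table with invented events.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event TEXT, ts INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", "view", 1), ("u1", "purchase", 2),
     ("u2", "view", 1), ("u3", "purchase", 5)],
)

# A purchase only counts toward the funnel if the same user viewed first;
# u3 purchased without viewing and is excluded.
row = conn.execute("""
    SELECT COUNT(DISTINCT v.user_id) AS viewers,
           COUNT(DISTINCT p.user_id) AS converted
    FROM events v
    LEFT JOIN events p
      ON p.user_id = v.user_id
     AND p.event = 'purchase'
     AND p.ts > v.ts
    WHERE v.event = 'view'
""").fetchone()
print(row)  # (2, 1): two viewers, one converted
```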

Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous and hands-on role to make an impact; you will be joining at an exciting time of growth!

Flexible work hours, an attractive pay package and perks.
An inclusive work environment that lets you work in the way that works best for you!

Read more
Helical IT Solutions

at Helical IT Solutions

4 recruiters
Niyotee Gupta
Posted by Niyotee Gupta
Hyderabad
1 - 5 yrs
₹3L - ₹8L / yr
ETL
Big Data
TAC
PL/SQL
Relational Database (RDBMS)
+1 more

ETL Developer – Talend

Job Duties:

  • The ETL Developer is responsible for the design and development of ETL jobs that follow standards and best practices and are maintainable, modular, and reusable.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • The ETL Developer will analyze and review complex object and data models and the metadata repository in order to structure the processes and data for better management and more efficient access.
  • Work on multiple projects, delegating work to junior analysts to deliver projects on time.
  • Train and mentor junior analysts, building their proficiency in the ETL process.
  • Prepare mapping documents to extract, transform, and load data, ensuring compatibility with all tables and requirement specifications.
  • Experience in ETL system design and development with Talend / Pentaho PDI is essential.
  • Create data quality rules in Talend.
  • Tune Talend / Pentaho jobs for performance optimization.
  • Write relational (SQL) and multidimensional (MDX) database queries.
  • Functional knowledge of Talend Administration Center / Pentaho Data Integrator, job servers and load-balancing setup, and all their administrative functions.
  • Develop, maintain, and enhance unit test suites to verify the accuracy of ETL processes, dimensional data, OLAP cubes, and various forms of BI content including reports, dashboards, and analytical models.
  • Exposure to MapReduce components of Talend / Pentaho PDI.
  • Comprehensive understanding of and working knowledge in data warehouse loading, tuning, and maintenance.
  • Working knowledge of relational database theory and dimensional database models.
  • Creating and deploying Talend / Pentaho custom components is an added advantage.
  • Java knowledge is nice to have.
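The mapping-document duty above boils down to extract-transform-load logic. The sketch below shows that logic in plain Python; the field names and transform rules are invented for illustration, and a real Talend or Pentaho job would express the same mapping graphically in its designer rather than in code.

```python
# Tiny extract-transform-load sketch: apply a mapping spec to source rows.
# Field names and the trim/cast rules here are illustrative only.
source_rows = [
    {"cust_id": "101", "name": " alice ", "amount": "250.50"},
    {"cust_id": "102", "name": "bob", "amount": "99.90"},
]

# The "mapping document": target column -> (source column, transform).
mapping = {
    "customer_id": ("cust_id", int),
    "customer_name": ("name", lambda v: v.strip().title()),
    "order_amount": ("amount", float),
}

def transform(row, mapping):
    """Apply each mapped transform; a real job would also route rejects."""
    return {target: fn(row[src]) for target, (src, fn) in mapping.items()}

loaded = [transform(r, mapping) for r in source_rows]
print(loaded[0])  # {'customer_id': 101, 'customer_name': 'Alice', 'order_amount': 250.5}
```

Keeping the mapping as data, separate from the transform engine, mirrors how ETL tools separate the mapping document from the job that executes it.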

Skills and Qualifications:

  • BE, B.Tech / MS Degree in Computer Science, Engineering or a related subject.
  • 3+ years of experience.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • Ability to work independently.
  • Ability to handle a team.
  • Good written and oral communication skills.
Hyderabad
5 - 13 yrs
₹15L - ₹26L / yr
Java
Agile/Scrum
ITIL
DevOps
Big Data
+2 more
The Full Stack Developer belongs to a self-organizing, cross-functional development team and is able to convert sprint backlog items into a shippable product. He collectively owns end-to-end development responsibility for a given Agile team / POD. He will design, code, and test the user stories committed for a sprint, works independently under limited supervision, and possesses the skills to effectively deal with issues and challenges within his field of specialization to develop application solutions.

Primary Responsibilities:
  • Lead an agile team within a Release Team / Value Stream or IT Support Team.
  • Accountable for team delivery.
  • Develop and automate business solutions by creating new and modifying existing software applications.
  • Develop innovations, strategies, processes, and best practices.
  • Technically hands-on and excellent in design, coding, and testing.
  • Collectively responsible for end-to-end product quality.
  • Creation of high/low-level application design.
  • Participates and contributes in sprint ceremonies.
  • Promote and develop a culture of collaboration, accountability & quality.
  • Provides technical support to the team; helps the team resolve technical issues.
  • Works closely with business teams, onshore partners, and deployment and infrastructure teams.
  • <Others – If any>

Required Qualifications:
  • 8 - 13 years of experience working across multiple layers of technology.
  • Excellent verbal, written and interpersonal communication skills.
  • Demonstrated capability to create high/low-level designs.
  • Engineering practices:
    Agile: 2+ years' working experience in an agile team; understanding of various agile methodologies such as Scrum and Kanban; working experience of Test-Driven Development.
    ITIL/ITSM: good understanding of IT support / production support.
    Data / information security: working knowledge of common security vulnerabilities, their causes, and implementations to fix them; security scanning methodologies and tools (e.g. HP Fortify, Whitehat, Webinspect).
  • Good in data structures, algorithms, and design patterns.
  • Demonstrates excellent problem-solving skills.
  • Good design thinking and an approach to solving business problems by applying suitable technologies (cost-efficient, high-performance, resilient, and scalable).
  • Common technical skills:
    Database: 2+ years' working experience with databases (SQL or PL/SQL); exposure to Big Data and NoSQL/flat databases.
    API / web services: 1+ year working experience with web services / APIs, REST architecture, etc.
    DevOps: working experience setting up or maintaining CI/CD pipelines (test, build, deployment, and monitoring automation); 2+ years' working experience of software configuration management and packaging; experience using automated deployment and release management tools such as XL Deploy, XL Release, and Jenkins; 2+ years' working knowledge of build tools such as Maven/Gradle.
    Cloud: working experience with or good knowledge of cloud platforms (e.g. OpenShift, Azure, AWS); capable of demonstrating how to develop a sample cloud-based application / microservices architecture.
    Open source: hands-on knowledge of open-source adoption and use cases; real implementation experience with one or more open-source technologies (MySQL, JBoss Platform, Apache Camel); contributing to one or more technical forums related to an open-source technology is good to have.
  • Product / project / program related tech stack:
    Front End – <Desired Technologies and Tools>
    Back End – <Desired Technologies and Tools>
    Middleware – <Desired Technologies and Tools>
    Testing – <Desired Technologies and Tools>
    DevOps – <Desired Tools>
    Others – <Desired Technologies and Tools>
    Certifications – <Desired Certifications>
    Development methodology / engineering practices – Agile (Scrum / Kanban / SAFe)

Preferred Qualifications:
  • Excellent verbal, written and interpersonal communication skills.
  • Ability to work collaboratively in a global team with a positive team spirit.
  • Knowledge of the US healthcare domain.
  • Knowledge or certification – SAFe.
  • Knowledge or certification – ITIL.
  • Work experience in product engineering.
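The Test-Driven Development practice mentioned above can be sketched minimally in Python: write the tests first, then the smallest code that makes them pass. The function and test names below are invented for illustration (loosely themed on the healthcare domain named in the posting).

```python
import unittest

def claim_total(line_items):
    """Sum claim line amounts; reject negative amounts."""
    if any(amount < 0 for amount in line_items):
        raise ValueError("negative line amount")
    return sum(line_items)

class ClaimTotalTest(unittest.TestCase):
    # In TDD these tests would exist before claim_total did.
    def test_sums_line_items(self):
        self.assertEqual(claim_total([100.0, 25.5]), 125.5)

    def test_rejects_negative_amounts(self):
        with self.assertRaises(ValueError):
            claim_total([100.0, -5.0])

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.TestLoader().loadTestsFromTestCase(ClaimTotalTest)
unittest.TextTestRunner().run(suite)
```

In a CI/CD pipeline of the kind described above, a Jenkins stage would typically run this same suite on every commit before the build is packaged.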
OpexAI

Jasmine Shaik
Posted by Jasmine Shaik
Hyderabad
0 - 1 yrs
₹1L - ₹1L / yr
Business Intelligence (BI)
Python
Big Data
Looking for candidates with Big Data, Business Intelligence, Python, and R skills.
UpX Academy

Suchit Majumdar
Posted by Suchit Majumdar
Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹4L - ₹12L / yr
Spark
Hadoop
MongoDB
Python
Scala
+3 more
Looking for a technically sound and excellent trainer in big data technologies. This is an opportunity to gain visibility and recognition in the industry: host regular sessions on big-data technologies and get paid to learn.