Data engineering jobs

50+ Data engineering Jobs in India

Apply to 50+ Data engineering Jobs on CutShort.io. Find your next job, effortlessly. Browse Data engineering Jobs and apply today!

Hashone Careers

Posted by Madhavan I
Mumbai
5 - 8 yrs
₹12L - ₹24L / yr
Data engineering
Python
SQL

Job Description

Location: Mumbai (with short/medium-term travel opportunities within India and abroad)

Experience: 5–8 years

Job Type: Full-time

About the Role

We are looking for experienced data engineers who can independently build, optimize, and manage scalable data pipelines and platforms. In this role, you’ll work closely with clients and internal teams to deliver robust data solutions that power analytics, AI/ML, and operational systems. You’ll also help mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

  • Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
  • Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Collaborate with cross-functional stakeholders to understand business requirements and translate them into technical data solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, and production-ready code with proper documentation and testing.
  • Contribute to architectural discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.


Must-Have Skills

  • Strong programming skills in Python and advanced SQL expertise.
  • Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Experience with distributed data processing frameworks (e.g., Apache Spark, Flink, or similar).
  • Exposure to Java is mandatory.
  • Experience building pipelines using orchestration tools like Airflow or similar (see the sketch after this list).
  • Familiarity with CI/CD pipelines and version control tools like Git.
  • Ability to debug, optimize, and scale data pipelines in real-world settings.
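
To illustrate the orchestration requirement above, here is a minimal sketch of an Airflow DAG using the TaskFlow API (Airflow 2.4+); the pipeline name, schedule, and sample data are hypothetical placeholders, not part of this role's actual stack.

```python
# Hypothetical daily ETL DAG, sketched with Airflow's TaskFlow API.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_pipeline():
    @task
    def extract() -> list:
        # A real pipeline would pull from a source system here.
        return [{"order_id": 1, "amount": 250.0}, {"order_id": 2, "amount": 99.5}]

    @task
    def transform(rows: list) -> float:
        # Aggregate the day's order amounts.
        return sum(r["amount"] for r in rows)

    @task
    def load(total: float) -> None:
        # A real task would write to a warehouse table instead of printing.
        print(f"daily_total={total}")

    load(transform(extract()))


daily_sales_pipeline()
```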


Good to Have

  • Experience working on any major cloud platform (AWS preferred; GCP or Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms is a plus.
  • Experience with Snowflake is preferred.
  • Understanding of data governance, data quality frameworks, and observability.
  • Certification in AWS (e.g., Data Analytics, Solutions Architect) or Databricks is a plus.


Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements as needed.



CGI Inc

Posted by Shruthi BT
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai
8 - 15 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
Data engineering
BigQuery

Google Data Engineer - SSE


Position Description

Google Cloud Data Engineer

Notice Period: Immediate to 30 days serving

Job Description:

We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.

Key Responsibilities:


• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.

• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and BigTable.

• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.

• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.

• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.

• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.

• Optimize query performance and data storage across structured and unstructured datasets.

• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
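
As an illustration of the streaming pattern in the last bullet, here is a hedged sketch that consumes Pub/Sub messages and streams them into BigQuery with the google-cloud-pubsub and google-cloud-bigquery clients; the project, subscription, and table IDs are hypothetical placeholders.

```python
# Hypothetical Pub/Sub -> BigQuery streaming sketch.
import concurrent.futures
import json

from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "my-project"             # hypothetical
SUBSCRIPTION = "orders-sub"           # hypothetical
TABLE_ID = "my-project.sales.orders"  # hypothetical

bq = bigquery.Client(project=PROJECT_ID)
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)


def handle(message):
    row = json.loads(message.data)
    # Streaming insert; production code would batch rows and route failures
    # to a dead-letter topic instead of simply nack-ing.
    errors = bq.insert_rows_json(TABLE_ID, [row])
    message.ack() if not errors else message.nack()


future = subscriber.subscribe(sub_path, callback=handle)
try:
    future.result(timeout=60)  # stream for up to a minute in this sketch
except concurrent.futures.TimeoutError:
    future.cancel()
```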


Required Skills & Qualifications:


• 8-15 years of experience

• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.

• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.

• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).

• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.

• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.

• Familiarity with data formats such as Avro, ORC, Parquet.

• Experience handling large-scale data migrations and implementing data lake architectures.

• Expertise in data modeling, data warehousing, and distributed data processing frameworks.

• GCP Data Engineering certification or equivalent.


Good to Have:


• Experience in BigQuery, Presto, or equivalent.

• Exposure to Hadoop, Spark, Oozie, HBase.

• Understanding of cloud database migration strategies.

• Knowledge of GCP data governance and security best practices.

Deqode

Posted by Shraddha Katare
Pune, Gurugram, Jaipur, Bhopal
5 - 8 yrs
₹10L - ₹18L / yr
Data engineering
Databricks
Data Structures
Python
PySpark

Job Description -

Position: Senior Data Engineer (Azure)

Experience - 6+ Years

Mode - Hybrid

Location - Gurgaon, Pune, Jaipur, Bangalore, Bhopal


Key Responsibilities:

  • Data Processing on Azure: Azure Data Factory, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Service, Data Pipeline.
  • Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.).
  • Design and implement scalable data models and migration strategies.
  • Work on distributed big data batch or streaming pipelines (Kafka or similar).
  • Develop data integration and transformation solutions for structured and unstructured data.
  • Collaborate with cross-functional teams for performance tuning and optimization.
  • Monitor data workflows and ensure compliance with data governance and quality standards.
  • Contribute to continuous improvement through automation and DevOps practices.

Required Skills & Experience:

  • 6–10 years of experience as a Data Engineer.
  • Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory.
  • Experience in Data Modelling, Data Migration, and Data Warehousing.
  • Good understanding of database structure principles and schema design.
  • Hands-on experience using MS SQL Server, Oracle, or similar RDBMS platforms.
  • Experience in DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) – good to have.
  • Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub).
  • Familiarity with visualization tools like Power BI or Tableau.
  • Strong analytical, problem-solving, and debugging skills.
  • Self-motivated, detail-oriented, and capable of managing priorities effectively.


CoffeeBeans

Posted by Ariba Khan
Mumbai, Hyderabad
4 - 6 yrs
Up to ₹27L / yr (varies)
Python
SQL
Java
Data engineering

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.


Key Responsibilities

  • Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
  • Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, production-ready code with proper documentation and testing.
  • Contribute to architecture discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.

Must-Have Skills

  • Strong programming skills in Python (experience with Java is a plus).
  • Advanced SQL expertise with the ability to work on complex queries and optimizations.
  • Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Experience with distributed processing frameworks like Apache Spark, Flink, or similar (a minimal Spark sketch follows this list).
  • Experience with Snowflake (preferred).
  • Hands-on experience building pipelines using orchestration tools such as Airflow or similar.
  • Familiarity with CI/CD, version control (Git), and modern development practices.
  • Ability to debug, optimize, and scale data pipelines in real-world environments.
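
To make the distributed-processing bullet above concrete, here is a minimal PySpark batch ETL sketch; the input path, column names, and output location are hypothetical assumptions, not an actual client pipeline.

```python
# Hypothetical batch ETL: read raw JSON orders, aggregate, write Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Extract: raw order events (hypothetical path and columns).
orders = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: keep valid rows and aggregate revenue per day.
daily = (
    orders
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count(F.lit(1)).alias("orders"))
)

# Load: partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)

spark.stop()
```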

Good to Have

  • Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms.
  • Understanding of data governance, data quality frameworks, and observability.
  • Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.

Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with excellent attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements as needed.
The Blue Owls Solutions

Posted by Apoorvo Chakraborty
Remote only
4 - 8 yrs
₹20L - ₹25L / yr
Microsoft Fabric
Data engineering
Data Analytics
Databricks

Role Summary

We’re seeking a seasoned Azure Data Engineer / Consultant to lead design, build, and operationalize cloud-native data solutions. You’ll partner with external clients—from discovery through Go-Live—to ingest, model, transform, and serve high-volume data using Synapse, Fabric, PySpark, and SQL. You’ll also establish best practices around CI/CD, security, and data quality within a medallion architecture.



Required Skills (at least 4+ years of relevant experience - this is a senior-level role)

  • Ability to communicate effectively with customers
  • Experience managing the delivery of an end-to-end enterprise project
  • Fabric (or Synapse/ADF/Databricks) data engineering background
  • Data warehousing experience
  • Basic understanding of Agentic AI


Preferred Skills & Certifications:

  • DP-600, DP-700 or equivalent certification.
  • Experience with Azure Purview, Data Catalogs, or metadata management tools.
  • Familiarity with orchestration frameworks (ADF, Synapse Pipelines), Spark optimization (broadcast joins, partition pruning), and data-quality frameworks (Great Expectations, custom).
  • Prior consulting or client-facing engagements in enterprise environments.
  • Knowledge of BI tools (Power BI).


What We Offer:

  • Opportunity to own and shape cutting-edge data solutions for diverse industries.
  • Collaborative, outcome-driven culture with career growth and skill-building opportunities.
  • Flexible hours and remote-first work model.
  • Competitive compensation and benefits package.



Loyalty Juggernaut Inc

Posted by Shraddha Dhavle
Hyderabad
3 - 5 yrs
₹5L - ₹15L / yr
ETL
ETL architecture
Python
Data engineering

At Loyalty Juggernaut, we’re on a mission to revolutionize customer loyalty through AI-driven SaaS solutions. We are THE JUGGERNAUTS, driving innovation and impact in the loyalty ecosystem with GRAVTY®, our SaaS Product that empowers multinational enterprises to build deeper customer connections. Designed for scalability and personalization, GRAVTY® delivers cutting-edge loyalty solutions that transform customer engagement across diverse industries including Airlines, Airport, Retail, Hospitality, Banking, F&B, Telecom, Insurance and Ecosystem.


Our Impact:

  • 400+ million members connected through our platform.
  • Trusted by 100+ global brands/partners, driving loyalty and brand devotion worldwide.


Proud to be a Three-Time Champion for Best Technology Innovation in Loyalty!!


Explore more about us at www.lji.io.


What you will OWN:

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using SQL and AWS ‘big data’ technologies.
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements, automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with stakeholders, including the Technical Architects, Developers, Product Owners, and Executives, to assist with data-related technical issues and support their data infrastructure needs.
  • Create tools for data management and data analytics that can assist them in building and optimizing our product to become an innovative industry leader.


You would make a GREAT FIT if you:

  • Have 2 to 5 years of relevant backend development experience, with solid expertise in Python.
  • Possess strong skills in Data Structures and Algorithms, and can write optimized, maintainable code.
  • Are familiar with database systems, and can comfortably work with PostgreSQL, as well as NoSQL solutions like MongoDB or DynamoDB.
  • Have hands-on experience using cloud data warehouses like AWS Redshift, GBQ, etc.
  • Have experience with AWS cloud services (EC2, EMR, RDS, Redshift, and AWS Batch), which would be an added advantage.
  • Have a solid understanding of ETL processes and tools and can build or modify ETL pipelines effectively.
  • Have experience managing or building data pipelines and architectures at scale.
  • Understand the nuances of data ingestion, transformation, storage, and analytics workflows.
  • Communicate clearly and work collaboratively across engineering and product teams.


Why Choose US?

  • This opportunity offers a dynamic and supportive work environment where you'll have the chance to not just collaborate with talented technocrats but also work with globally recognized brands, gain exposure, and carve your own career path.
  • You will get to innovate and dabble in the future of technology: Enterprise Cloud Computing, Blockchain, Machine Learning, AI, Mobile, Digital Wallets, and much more.


Remote only
3 - 6 yrs
₹15L - ₹30L / yr
PySpark
Scala
Databricks
Data engineering

Job Title: Senior Data Engineer

No. of Positions: 3

Employment Type: Full-Time, Permanent

Location: Remote (Pan India)

Shift Timings: 10:00 AM – 7:00 PM IST

Experience Required: Minimum 3+ Years

Mandatory Skills: Scala & PySpark


Role Overview

We are looking for an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and architectures. The ideal candidate should have hands-on experience working with Big Data technologies, distributed systems, and ETL pipelines. You will work closely with cross-functional teams including Data Analysts, Data Scientists, and Software Engineers to ensure efficient data flow and reliable data infrastructure.


Key Responsibilities

  • Design and build scalable data pipelines for extraction, transformation, and loading (ETL) from various data sources.
  • Enhance internal processes by automating tasks, optimizing data workflows, and improving infrastructure performance.
  • Collaborate with Product, Engineering, Data, and Business teams to understand data needs and provide solutions.
  • Work closely with machine learning and analytics teams to support advanced data modeling and innovation.
  • Ensure systems are highly reliable, maintainable, and optimized for performance.


Required Qualifications & Skills

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 3+ years of hands-on experience in Data Engineering.
  • Strong experience with Apache Spark, with solid understanding of distributed data processing.
  • Proficiency in Scala and PySpark is mandatory.
  • Strong SQL skills and experience working with relational and non-relational data.
  • Experience with cloud-based data platforms (preferably Databricks).
  • Good understanding of Delta Lake architecture, Parquet, JSON, CSV, and related data file formats.
  • Comfortable working in Linux/macOS environments with scripting capabilities.
  • Ability to work in an Agile environment and deliver independently.
  • Good communication and collaboration skills.
  • Knowledge of Machine Learning concepts is an added advantage.


Reporting

  • This role will report to the CEO or a designated Team Lead.



Benefits & Work Environment

  • Remote work flexibility across India.
  • Encouraging and diverse work culture.
  • Paid leaves, holidays, performance incentives, and learning opportunities.
  • Supportive environment that promotes personal and professional growth.


Technology Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Delhi
10 - 15 yrs
₹105L - ₹140L / yr
Data engineering
Apache Spark
Apache
Apache Kafka
Java

MANDATORY:

  • High-calibre Data Architect / Data Engineering Manager / Director profile
  • Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a leadership role
  • Must have 7+ YOE in hands-on tech development with Java (highly preferred) or Python, Node.JS, GoLang
  • Must have strong experience with large-scale data technologies and tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto, etc.
  • Strong expertise in HLD and LLD, to design scalable, maintainable data architectures
  • Must have managed a team of at least 5+ Data Engineers (the leadership role should be evident in the CV)
  • Product companies preferred, ideally high-scale and data-heavy


PREFERRED:

  • Should be from Tier-1 colleges, IIT preferred
  • Candidates should have spent a minimum of 3 years in each company
  • Must have recent 4+ YOE with high-growth product startups, and should have implemented Data Engineering systems from an early stage in the company


ROLES & RESPONSIBILITIES:

  • Lead and mentor a team of data engineers, ensuring high performance and career growth.
  • Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
  • Drive the development and implementation of data governance frameworks and best practices.
  • Work closely with cross-functional teams to define and execute a data roadmap.
  • Optimize data processing workflows for performance and cost efficiency.
  • Ensure data security, compliance, and quality across all data platforms.
  • Foster a culture of innovation and technical excellence within the data team.


IDEAL CANDIDATE:

  • 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
  • Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
  • Proficiency in SQL, Python, and Scala for data processing and analytics.
  • Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
  • Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
  • Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
  • Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
  • Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
  • Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
  • Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
  • Proven ability to drive technical strategy and align it with business objectives.
  • Strong leadership, communication, and stakeholder management skills.


PREFERRED QUALIFICATIONS:

  • Experience in machine learning infrastructure or MLOps is a plus.
  • Exposure to real-time data processing and analytics.
  • Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
  • Prior experience in a SaaS or high-growth tech company.
Wissen Technology

Posted by Robin Silverster
Bengaluru (Bangalore)
5 - 11 yrs
₹10L - ₹35L / yr
Python
Spark
Apache Kafka
Snowflake schema
Databricks

Required Skills:

  • 8+ years as a practitioner in data engineering or a related field.
  • Strong programming proficiency in Python.
  • Experience with data processing frameworks like Apache Spark or Hadoop.
  • Experience working on Databricks.
  • Familiarity with cloud platforms (AWS, Azure) and their data services.
  • Experience with data warehousing concepts and technologies.
  • Experience with message queues and streaming platforms (e.g., Kafka).
  • Excellent communication and collaboration skills.
  • Ability to work independently and as part of a geographically distributed team.

Wissen Technology

Posted by Gagandeep Kaur
Bengaluru (Bangalore), Mumbai, Pune
4 - 7 yrs
Best in industry
Python
PySpark
Pandas
Airflow
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time.

Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives.

We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.

Experience: 4-7 years

Notice Period: Immediate- 15 days

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python and Pandas.
  • Implement and manage workflows using Airflow.
  • Utilize Azure Cloud Services for data storage and processing.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle (a minimal check is sketched after this list).
  • Optimize and scale data infrastructure to meet business needs.
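
As a small illustration of the data-quality responsibility above, here is a hedged Pandas sketch; the column names and metrics are hypothetical and not Wissen's actual framework.

```python
# Hypothetical batch-level quality report built with Pandas.
import pandas as pd


def basic_quality_report(df: pd.DataFrame, key: str = "order_id") -> dict:
    """Return simple integrity metrics for a batch of records."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_ratio": df.isna().mean().round(3).to_dict(),
    }


if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "amount": [250.0, None, 99.5]})
    report = basic_quality_report(sample)
    assert report["duplicate_keys"] == 1  # the repeated key is caught
    print(report)
```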

Qualifications and Required Skills:

  • Proficiency in Python (Must Have).
  • Strong experience with Pandas (Must Have).
  • Expertise in Airflow (Must Have).
  • Experience with Azure Cloud Services.
  • Good communication skills.

Good to Have Skills:

  • Experience with Pyspark.
  • Knowledge of Kubernetes.

Wissen Technology

Posted by Bipasha Rath
Pune, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹40L / yr
Python
Pandas
Data engineering

Experience: 7+ years


Must-Have:

  • Python (Pandas, PySpark)
  • Data engineering & workflow optimization
  • Delta Tables, Parquet

Good-to-Have:

  • Databricks
  • Apache Spark, DBT, Airflow
  • Advanced Pandas optimizations
  • PyTest/DBT testing frameworks


Interested candidates can reply with the details below.


Total experience -

Relevant experience in Python, Pandas, data engineering, workflow optimization, Delta Tables -

Current CTC -

Expected CTC -

Notice period / LWD -

Current location -

Desired location -



Wissen Technology

Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Pune, Mumbai
7 - 12 yrs
Best in industry
Python
Pandas
PySpark
SQL
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time.

Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives.

We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables and Parquet, and be proficient in Pandas and PySpark.

Experience: 7+ years

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python (Pandas, PySpark).
  • Optimize data workflows and ensure efficient data processing.
  • Work with Delta Tables and Parquet for data storage and management.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Implement best practices for data engineering and workflow optimization.

Qualifications and Required Skills:

  • Proficiency in Python, specifically with Pandas and PySpark.
  • Strong experience in data engineering and workflow optimization.
  • Knowledge of Delta Tables and Parquet.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a team environment.
  • Strong communication skills.

Good to Have Skills:

  • Experience with Databricks.
  • Knowledge of Apache Spark, DBT, and Airflow.
  • Advanced Pandas optimizations.
  • Familiarity with PyTest/DBT testing frameworks.

Nyx Wolves
Remote only
5 - 7 yrs
₹11L - ₹13L / yr
SQL
Data modeling
Web performance optimization
Data engineering

Now Hiring: Tableau Developer (Banking Domain) 🚀

We’re looking for a Tableau pro with 6+ years of experience to design and optimize dashboards for Banking & Financial Services.


🔹 Design & optimize interactive Tableau dashboards for large banking datasets

🔹 Translate KPIs into scalable reporting solutions

🔹 Ensure compliance with regulations like KYC, AML, Basel III, PCI-DSS

🔹 Collaborate with business analysts, data engineers, and banking experts

🔹 Bring deep knowledge of SQL, data modeling, and performance optimization


🌍 Location: Remote

📊 Domain Expertise: Banking / Financial Services


✨ Preferred experience with cloud data platforms (AWS, Azure, GCP) & certifications in Tableau are a big plus!


Bring your data visualization skills to transform banking intelligence & compliance reporting.


Wissen Technology

Posted by Rithika SharonM
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
Python
SQL
Data engineering
Power BI
Tableau

Data Engineer

Experience: 4–6 years

Key Responsibilities

  • Design, build, and maintain scalable data pipelines and workflows.
  • Manage and optimize cloud-native data platforms on Azure with Databricks and Apache Spark (1–2 years).
  • Implement CI/CD workflows and monitor data pipelines for performance, reliability, and accuracy.
  • Work with relational databases (Sybase, DB2, Snowflake, PostgreSQL, SQL Server) and ensure efficient SQL query performance.
  • Apply data warehousing concepts including dimensional modelling, star schema, data vault modelling, Kimball and Inmon methodologies, and data lake design.
  • Develop and maintain ETL/ELT pipelines using open-source frameworks such as Apache Spark and Apache Airflow.
  • Integrate and process data streams from message queues and streaming platforms (Kafka, RabbitMQ).
  • Collaborate with cross-functional teams in a geographically distributed setup.
  • Leverage Jupyter notebooks for data exploration, analysis, and visualization.

Required Skills

  • 4+ years of experience in data engineering or a related field.
  • Strong programming skills in Python with experience in Pandas, NumPy, Flask.
  • Hands-on experience with pipeline monitoring and CI/CD workflows.
  • Proficiency in SQL and relational databases.
  • Familiarity with Git for version control.
  • Strong communication and collaboration skills with ability to work independently.


Wissen Technology

Posted by Ritika Mehra
Mumbai
8 - 15 yrs
Best in industry
Python
Data engineering
SQL
Amazon Web Services (AWS)
Data Analytics

Job Title: Data Engineering Support Engineer / Manager

Experience range: 8+ years

Location: Mumbai

Knowledge, Skills and Abilities

- Python, SQL 

- Familiarity with data engineering 

- Experience with AWS data and analytics services or similar cloud vendor services 

- Strong problem solving and communication skills 

- Ability to organise and prioritise work effectively

Key Responsibilities 

- Incident and user management for the data and analytics platform

- Development and maintenance of a Data Quality framework (including anomaly detection)

- Implementation of Python & SQL hotfixes and working with data engineers on more complex issues

- Diagnostic tools implementation and automation of operational processes


Key Relationships  

- Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment 

- Support research analysts and traders with issue resolution 



Wissen Technology

Posted by Annie Varghese
Pune, Bengaluru (Bangalore), Mumbai
7 - 13 yrs
Best in industry
Azure
Synapse
Data Lake
Data engineering
Leadership

Overview

We are seeking an Azure Solutions Lead who will be responsible for managing and maintaining the overall architecture, design, application management and migrations, and security of a growing cloud infrastructure that supports the company’s core business and infrastructure systems and services. In this role, you will protect our critical information, systems, and assets, build new solutions, implement and configure new applications and hardware, provide training, and optimize/monitor cloud systems. You must be passionate about applying technical skills that create operational efficiencies and offer solutions to support business operations and the strategy roadmap.

Responsibilities:

  • Work in tandem with our Architecture, Applications, and Security teams.
  • Identify and implement the most optimal and secure Azure cloud-based solutions for the company.
  • Design and implement end-to-end Azure data solutions, including data ingestion, storage, processing, and visualization.
  • Architect data platforms using Azure services such as Azure Data Fabric, Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, and One Lake etc.
  • Develop and maintain data pipelines for efficient data movement and transformation.
  • Design data models and schemas to support business requirements and analytical insights.
  • Collaborate with stakeholders to understand business needs and translate them into technical solutions.
  • Provide technical leadership and guidance to the data engineering team.
  • Stay updated on emerging Azure technologies and best practices in data architecture.
  • Stay current with industry trends, making recommendations where applicable to help keep the environment operating at its optimum while minimizing waste and maximizing investment.
  • Create and update documentation to facilitate cross-training and troubleshooting.
  • Work with the Security and Architecture teams to refine and deploy security best practices to identify, detect, protect, respond, and recover from threats to assets and information.

Qualifications:

  • Overall 7+ years of IT Experience & minimum 2 years as an Azure Data Lead
  • Strong expertise in all aspects of Azure services with a focus on data engineering & BI reporting.
  • Proficiency in Azure Data Factory (ADF), Data Factory, Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI and other Azure data tools.
  • Experience in data modelling, data warehousing, and business intelligence concepts.
  • Proven track record of designing and implementing scalable and robust data solutions.
  • Excellent communication skills with strong teamwork, analytical and troubleshooting skills, and attentiveness to detail.
  • Self-starter, ability to work independently and within a team. 

NOTE: It is mandatory to attend one technical round face to face.

Remote only
12 - 16 yrs
₹20L - ₹35L / yr
Scala
Apache Spark
Big Data
Data engineering
Databricks

What You’ll Be Doing:

● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.

● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.


Qualifications:

● Bachelor's degree in Engineering, Computer Science, or relevant field.

● 10+ years of relevant and recent experience in a Data Engineer role.

● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.

● Comfortable working in a Linux shell environment and writing scripts as needed.

● Comfortable working in an Agile environment

● Machine Learning knowledge is a plus.

● Must be capable of working independently and delivering stable, efficient and reliable software.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment.


REPORTING: This position will report to our CEO or any other Lead as assigned by Management.


EMPLOYMENT TYPE: Full-Time, Permanent

LOCATION: Remote


SHIFT TIMINGS: 2:00 PM – 11:00 PM IST

Remote, Bengaluru (Bangalore), Pune, Chennai, Nagpur
5 - 15 yrs
₹20L - ₹30L / yr
Databricks
PySpark
Apache Spark
CI/CD
Data engineering


Technical Architect (Databricks)

  • 10+ Years Data Engineering Experience with expertise in Databricks
  • 3+ years of consulting experience
  • Completed Data Engineering Professional certification & required classes
  • Minimum 2-3 projects delivered with hands-on experience in Databricks
  • Completed Apache Spark Programming with Databricks, Data Engineering with Databricks, Optimizing Apache Spark™ on Databricks
  • Experience in Spark and/or Hadoop, Flink, Presto, other popular big data engines
  • Familiarity with Databricks multi-hop pipeline architecture

Sr. Data Engineer (Databricks)

 

  • 5+ Years Data Engineering Experience with expertise in Databricks
  • Completed Data Engineering Associate certification & required classes
  • Minimum 1 project delivered with hands-on experience in development on Databricks
  • Completed Apache Spark Programming with Databricks, Data Engineering with Databricks, Optimizing Apache Spark™ on Databricks
  • SQL delivery experience, and familiarity with BigQuery, Synapse, or Redshift
  • Proficient in Python; knowledge of additional Databricks programming languages (Scala)


IT Industry

Agency job
Remote only
10 - 17 yrs
₹25L - ₹40L / yr
Data engineering
Scala
Apache Spark
PostgreSQL

We’re Hiring: Senior Data Engineer | Remote (Pan India)

Are you passionate about building scalable data pipelines and optimizing data architecture? We’re looking for an experienced Senior Data Engineer (10+ years) to join our team and play a key role in shaping next-gen data systems.


What you’ll do:

✅ Design & develop robust data pipelines (ETL) using the latest Big Data tech

✅ Optimize infrastructure & automate processes for scalability

✅ Collaborate with cross-functional teams (Product, Data, Design, ML)

✅ Work with modern tools: Apache Spark, Databricks, SQL, Python/Scala/Java

Scala is mandatory


What we’re looking for:

🔹 Strong expertise in Big Data, Spark & distributed systems

🔹 Hands-on with SQL, relational DBs (Postgres/MySQL), Linux scripting

🔹 Experience with Delta Tables, Parquet, CSV, JSON

🔹 Cloud & Databricks exposure

🔹 Bonus: Machine Learning knowledge


📍 Location: Remote (Pan India)

Shift: 2:00 pm – 11:00 pm IST

💼 Type: Full-time, Permanent

MindCrew Technologies

Agency job
Pune
8 - 12 yrs
₹10L - ₹15L / yr
Data engineering
Data modeling
Snowflake schema
ETL
ETL architecture

Job Title: Lead Data Engineer

📍 Location: Pune

🧾 Experience: 10+ Years

💰 Budget: Up to 1.7 LPM


Responsibilities

  • Collaborate with Data & ETL teams to review, optimize, and scale data architectures within Snowflake.
  • Design, develop, and maintain efficient ETL/ELT pipelines and robust data models.
  • Optimize SQL queries for performance and cost efficiency.
  • Ensure data quality, reliability, and security across pipelines and datasets.
  • Implement Snowflake best practices for performance, scaling, and governance.
  • Participate in code reviews, knowledge sharing, and mentoring within the data engineering team.
  • Support BI and analytics initiatives by enabling high-quality, well-modeled datasets.


MindCrew Technologies

Agency job via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Pune
10 - 14 yrs
₹10L - ₹15L / yr
Snowflake
ETL
SQL
Snowflake schema
Data modeling

Exp: 10+ Years

CTC: 1.7 LPM

Location: Pune

Snowflake Expertise Profile


Should hold 10+ years of experience, with a core understanding of cloud data warehouse principles and extensive experience in designing, building, optimizing, and maintaining robust and scalable data solutions on the Snowflake platform.

Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring and security handling.


Responsibilities:

* Collaborate with the Data and ETL team to review code, understand the current architecture, and help improve it based on Snowflake offerings and experience.

* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL and BI.

* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake.

* Ensure data quality, integrity, and security within the Snowflake environment.

* Participate in code reviews and contribute to the team's development standards.

Education:

* Bachelor’s degree in Computer Science, Data Science, Information Technology, or equivalent.

* Relevant Snowflake certifications are a plus (e.g., Snowflake Certified Pro / Architect / Advanced).

Wissen Technology

Posted by Akansha Sharma
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
Generative AI
MLOps
Data engineering
Big Data


Notice Period: 0–15 days max

Only candidates currently based in Karnataka should apply.

Interview: 4 rounds, face-to-face


Job Title: AI Specialist

Company Overview: We are the Technology Center of Excellence for Long Arc Capital, which provides growth capital to businesses with a sustainable competitive advantage and a strong management team with whom we can partner to build a category leader. We focus on North American and European companies where technology is transforming traditional business models in the Financial Services, Business Services, Technology, Media and Telecommunications sectors.

As part of our mission to leverage AI for business innovation, we are establishing an AI COE to develop Generative AI (GenAI) and Agentic AI solutions that enhance decision-making, automation, and user experiences.

Job Overview: We are seeking dynamic and talented individuals to join our AI COE. This team will focus on developing advanced AI models, integrating them into our cloud-based platform, and delivering impactful solutions that drive efficiency, innovation, and customer value.

Key Responsibilities:

  • As a Full Stack AI Engineer, research, design, and develop AI solutions for text, image, audio, and video generation.
  • Build and deploy Agentic AI systems for autonomous decision-making across business outcomes, enhancing associate productivity.
  • Work with domain experts to design and fine-tune AI solutions tailored to portfolio-specific challenges.
  • Partner with data engineers across portfolio companies to preprocess large datasets, ensure high-quality input for training AI models, and develop scalable and efficient AI pipelines using frameworks like TensorFlow, PyTorch, and Hugging Face.
  • Implement MLOps best practices for AI model deployment, versioning, and monitoring using tools like MLflow and Kubernetes.
  • Ensure AI solutions adhere to ethical standards, comply with regulations (e.g., GDPR, CCPA), and mitigate biases.
  • Design intuitive and user-friendly interfaces for AI-driven applications, collaborating with UX designers and frontend developers.
  • Stay up to date with the latest AI research and tools and evaluate their applicability to our business needs.

Key Qualifications:

Technical Expertise:

  • Proficiency in full stack application development (specifically using Angular, React).
  • Expertise in backend technologies (Django, Flask) and cloud platforms (AWS SageMaker / Azure AI Studio).
  • Proficiency in deep learning frameworks (TensorFlow, PyTorch, JAX).
  • Proficiency with Large Language Models (LLMs) and generative AI tools (e.g., OpenAI APIs, LangChain, Stable Diffusion).
  • Solid understanding of data engineering workflows, including ETL processes and distributed computing tools (Apache Spark, Kafka).
  • Experience with data pipelines, big data processing, and database management (SQL, NoSQL).
  • Knowledge of containerization (Docker) and orchestration (Kubernetes) for scalable AI deployment.
  • Familiarity with CI/CD pipelines and automation tools (Terraform, Jenkins).
  • Good understanding of AI ethics, bias mitigation, and compliance standards.
  • Excellent problem-solving abilities and innovative thinking.
  • Strong collaboration and communication skills, with the ability to work in cross-functional teams.
  • Proven ability to work in a fast-paced and dynamic environment.

Preferred Qualifications:

  • Advanced studies in Artificial Intelligence or a related field.
  • Experience with reinforcement learning, multi-agent systems, or autonomous decision-making.

Digitide
Bengaluru (Bangalore)
6 - 9 yrs
₹5L - ₹15L / yr
Windows Azure
Data engineering
Databricks
Data Factory

1. Design, develop, and maintain data pipelines using Azure Data Factory.

2. Create and manage data models in PostgreSQL, ensuring efficient data storage and retrieval.

3. Optimize query and database performance in PostgreSQL, including indexing, query tuning, and performance monitoring (see the sketch after this list).

4. Strong knowledge of data modeling and mapping from various sources to the data model.

5. Develop and maintain logging mechanisms in Azure Data Factory to monitor and troubleshoot data pipelines.

6. Strong knowledge of Key Vault, Azure Data Lake, and PostgreSQL.

7. Manage file handling within Azure Data Factory, including reading, writing, and transforming data from various file formats.

8. Strong SQL query skills with the ability to handle multiple scenarios and optimize query performance.

9. Excellent problem-solving skills and the ability to handle complex data scenarios.

10. Collaborate with business stakeholders, data architects, and POs to understand and meet their data requirements.

11. Ensure data quality and integrity through validation and quality checks.

12. Power BI knowledge, including creating and configuring semantic models and reports, is preferred.
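
As an illustration of the indexing and query-tuning work in point 3, here is a minimal sketch using psycopg2; the DSN, table, and column names are hypothetical.

```python
# Hypothetical PostgreSQL tuning check: add an index for a frequent
# predicate, then confirm with EXPLAIN ANALYZE that it is used.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_created_at "
        "ON orders (created_at)"
    )
    cur.execute(
        "EXPLAIN ANALYZE SELECT count(*) FROM orders "
        "WHERE created_at >= now() - interval '1 day'"
    )
    for (plan_line,) in cur.fetchall():
        print(plan_line)

conn.close()
```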


Sounds and Spaces

Posted by Soundsand Spaces
Qatar
2 - 3 yrs
$25.5K - $30.8K / yr
Mechanical engineering
Data engineering
Civil Engineering
Engineering technologist
AutoCAD

Duties

  • Perform site surveys and analysis on all noise and vibration requirements
  • Develop acoustic system design concepts and reports to achieve project and product performance requirements
  • Troubleshoot noise issues and provide solutions
  • Collaborate with the sales team to provide acoustic solutions through site visits and measurements
  • Study project specifications and propose suitable products and solutions
  • Prepare project BOQs and technical submittals
  • Develop and improve the company's acoustic products


Experience

  • Good knowledge and understanding of acoustic testing and measurement techniques
  • Good experience with acoustic software
  • Good experience with AutoCAD and other modelling software



Wissen Technology

Posted by Annie Varghese
Bengaluru (Bangalore), Mumbai, Pune
7 - 16 yrs
Best in industry
Fabric
Data engineering
Python
SQL

Key Responsibilities

● Design & Development

○ Architect and implement data ingestion pipelines using Microsoft Fabric Data Factory (Dataflows) and OneLake sources

○ Build and optimize Lakehouse and Warehouse solutions leveraging Delta Lake, Spark Notebooks, and SQL Endpoints

○ Define and enforce Medallion (Bronze–Silver–Gold) architecture patterns for raw, enriched, and curated datasets

● Data Modeling & Transformation

○ Develop scalable transformation logic in Spark (PySpark/Scala) and Fabric SQL to support reporting and analytics

○ Implement slowly changing dimensions (SCD Type 2), change-data-capture (CDC) feeds, and time-windowed aggregations (a minimal SCD2 sketch follows this section)

● Performance Tuning & Optimization

○ Monitor and optimize data pipelines for throughput, cost efficiency, and reliability

○ Apply partitioning, indexing, caching, and parallelism best practices in Fabric Lakehouses and Warehouse compute

● Data Quality & Governance

○ Integrate Microsoft Purview for metadata cataloging, lineage tracking, and data discovery

○ Develop automated quality checks, anomaly detection rules, and alerts for data reliability

● CI/CD & Automation

○ Implement infrastructure-as-code (ARM templates or Terraform) for Fabric workspaces, pipelines, and artifacts

○ Set up Git-based version control, CI/CD pipelines (e.g. Azure DevOps) for seamless deployment across environments

● Collaboration & Support

○ Partner with data scientists, BI developers, and business analysts to understand requirements and deliver data solutions

○ Provide production support, troubleshoot pipeline failures, and drive root-cause analysis
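
To make the SCD Type 2 responsibility above concrete, here is a hedged two-step sketch using the Delta Lake Python API on Spark; the table paths, columns, and flow are illustrative assumptions, not this team's actual implementation.

```python
# Hypothetical SCD Type 2 upsert on a Delta customer dimension.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

dim = DeltaTable.forPath(spark, "Tables/dim_customer")   # hypothetical path
updates = spark.read.parquet("Files/staged_customers/")  # hypothetical path

# Step 1: expire current rows whose tracked attribute changed.
(
    dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",
        set={"is_current": "false", "valid_to": "current_timestamp()"},
    )
    .execute()
)

# Step 2: append incoming rows as the new current versions. A production
# job would first anti-join to keep only changed or brand-new keys.
new_rows = (
    updates
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
)
new_rows.write.format("delta").mode("append").save("Tables/dim_customer")
```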

Required Qualifications

● 5+ years of professional experience in data engineering roles, with at least 1 year working hands-on in Microsoft Fabric

● Strong proficiency in:

○ Languages: SQL (T-SQL), Python, and/or Scala

○ Fabric Components: Data Factory Dataflows, OneLake, Spark Notebooks, Lakehouse, Warehouse

○ Data Storage: Delta Lake, Parquet, CSV, JSON formats

● Deep understanding of data modeling principles (star schemas, snowflake schemas, normalized vs. denormalized)

● Experience with CI/CD and infrastructure-as-code for data platforms (ARM templates, Terraform, Git)

● Familiarity with data governance tools, especially Microsoft Purview

● Excellent problem-solving skills and ability to communicate complex technical concepts clearly


NOTE: Candidates should be willing to attend one technical round face to face at any of the branch locations (Pune / Mumbai / Bangalore).

Codnatives
Bengaluru (Bangalore), Pune
5 - 9 yrs
₹5L - ₹14L / yr
Data engineering
Amazon Web Services (AWS)
Amazon Redshift

  • 5+ years of experience in SQL and NoSQL database development and optimization.
  • Strong hands-on experience with Amazon Redshift, MySQL, MongoDB, and Flyway.
  • In-depth understanding of data warehousing principles and performance tuning techniques.
  • Strong hands-on experience in building complex aggregation pipelines in NoSQL databases such as MongoDB.
  • Proficient in Python or Scala for data processing and automation.
  • 3+ years of experience working with AWS-managed database services.
  • 3+ years of experience with Power BI or similar BI/reporting platforms.

NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹22L / yr
Data engineering
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Google Dataform
BigQuery

Job Title : Data Engineer – GCP + Spark + DBT

Location : Bengaluru (On-site at Client Location | 3 Days WFO)

Experience : 8 to 12 Years

Level : Associate Architect

Type : Full-time


Job Overview :

We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.


Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.


Key Responsibilities :

  • Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark.
  • Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
  • Implement and maintain CI/CD for data engineering projects with Git-based version control.
  • Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
  • Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
  • Participate in Agile sprints, backlog grooming, and Jira-based project tracking.

Must-Have Skills :

  • Strong experience with DBT, Google Dataform, and BigQuery
  • Hands-on expertise with PySpark/Spark SQL
  • Proficient in GCP for data engineering workflows
  • Solid knowledge of SQL optimization, Git, and CI/CD pipelines
  • Agile team experience and strong problem-solving abilities

Nice-to-Have Skills :

  • Familiarity with Databricks, Delta Lake, or Kafka
  • Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
  • Knowledge of MDM patterns, Terraform, or IaC is a plus
NeoGenCode Technologies Pvt Ltd

Posted by Akshay Patil
Pune
6 - 10 yrs
₹12L - ₹23L / yr
Machine Learning (ML)
Deep Learning
Natural Language Processing (NLP)
Computer Vision
Data engineering

Job Title : AI Architect

Location : Pune (On-site | 3 Days WFO)

Experience : 6+ Years

Shift : US or flexible shifts


Job Summary :

We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.

The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).


Key Responsibilities :

  • Define AI strategy and identify business use cases
  • Design scalable AI/ML architectures
  • Collaborate on data preparation, model development & deployment
  • Ensure data quality, governance, and ethical AI practices
  • Integrate AI into existing systems and monitor performance

Must-Have Skills :

  • Machine Learning, Deep Learning, NLP, Computer Vision
  • Data Engineering, Model Deployment (CI/CD, MLOps)
  • Python Programming, Cloud (AWS/Azure/GCP)
  • Distributed Systems, Data Governance
  • Strong communication & stakeholder collaboration

Good to Have :

  • AI certifications (Azure/GCP/AWS)
  • Experience in big data and analytics
NeoGenCode Technologies Pvt Ltd

Posted by Akshay Patil
Gurugram
6 - 10 yrs
₹12L - ₹22L / yr
Data engineering
Azure Data Factory (ADF)
Azure Cloud Services
SQL
Data modeling

Job Title : Senior Data Engineer

Experience : 6 to 10 Years

Location : Gurgaon (Hybrid – 3 days office / 2 days WFH)

Notice Period : Immediate to 30 days (Buyout option available)


About the Role :

We are looking for an experienced Senior Data Engineer to join our Digital IT team in Gurgaon.

This role involves building scalable data pipelines, managing data architecture, and ensuring smooth data flow across the organization while maintaining high standards of security and compliance.


Mandatory Skills :

Azure Data Factory (ADF), Azure Cloud Services, SQL, Data Modelling, CI/CD tools, Git, Data Governance, RDBMS & NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch), Data Lake migration.


Key Responsibilities :

  • Design and develop secure, scalable end-to-end data pipelines using Azure Data Factory (ADF) and Azure services.
  • Build and optimize data architectures (including Medallion Architecture).
  • Collaborate with cross-functional teams on cybersecurity, data privacy (e.g., GDPR), and governance.
  • Manage structured/unstructured data migration to Data Lake.
  • Ensure CI/CD integration for data workflows and version control using Git.
  • Identify and integrate data sources (internal/external) in line with business needs.
  • Proactively highlight gaps and risks related to data compliance and integrity.
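
For context on the Medallion Architecture mentioned above, here is a minimal bronze-to-silver transform of the sort an ADF pipeline might orchestrate on a Spark compute layer. The storage account, container, and column names are hypothetical, and credentials/configuration are assumed to be set up separately.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw files landed in the lake by an ADF copy activity (hypothetical path).
bronze = spark.read.json(
    "abfss://lake@mystorageacct.dfs.core.windows.net/bronze/customers/"
)

# Silver: cleansed, deduplicated, conformed records.
silver = (
    bronze.dropDuplicates(["customer_id"])
    .filter(F.col("customer_id").isNotNull())
    .withColumn("ingested_at", F.current_timestamp())
)

silver.write.mode("overwrite").parquet(
    "abfss://lake@mystorageacct.dfs.core.windows.net/silver/customers/"
)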

Required Skills :

  • Azure Data Factory (ADF) - Mandatory
  • Strong SQL and Data Modelling expertise.
  • Hands-on with Azure Cloud Services and data architecture.
  • Experience with CI/CD tools and version control (Git).
  • Good understanding of Data Governance practices.
  • Exposure to ETL/ELT pipelines and Data Lake migration.
  • Working knowledge of RDBMS and NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch).
  • Understanding of RESTful APIs, deployment on cloud/on-prem infrastructure.
  • Strong problem-solving, communication, and collaboration skills.

Additional Info :

  • Work Mode : Hybrid (No remote); relocation to Gurgaon required for non-NCR candidates.
  • Communication : Above-average verbal and written English skills.

Perks & Benefits :

  • 5-day work week
  • Global exposure and leadership collaboration.
  • Health insurance, employee-friendly policies, training and development.
Read more
Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹10L - ₹25L / yr
Microsoft Windows Azure
Data engineering
Python
Apache Kafka

Role Overview:

We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.

The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.

Key Responsibilities:

  • Design and develop scalable real-time data streaming solutions using Apache Kafka and Python.
  • Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
  • Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
  • Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
  • Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
  • Mentor junior engineers, perform code reviews, and promote engineering best practices.
  • Stay current with evolving technologies in cloud, big data, and healthcare data standards.
  • Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
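
As a small illustration of the real-time streaming work this role centres on, below is a minimal Kafka consumer sketch using the kafka-python client. Topic, broker, and field names are hypothetical; a production version would add schema validation, error handling, and HIPAA-safe logging.

import json
from kafka import KafkaConsumer

# Subscribe to a (hypothetical) stream of healthcare events.
consumer = KafkaConsumer(
    "patient-events",
    bootstrap_servers=["broker1:9092"],
    group_id="healthcare-etl",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Route or transform the event before landing it in Databricks/ADLS.
    print(event.get("event_type"), message.offset)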

Required Skills & Qualifications:

  • 4+ years of hands-on experience in data engineering roles.
  • Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
  • Proficient in Python for data processing and automation.
  • Experience with Azure Databricks (or readiness to ramp up quickly).
  • Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
  • Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
  • Familiarity with containerization tools like Docker and orchestration using Kubernetes.
  • Exposure to CI/CD pipelines for data applications.
  • Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
  • Excellent problem-solving abilities and a proactive mindset.
  • Strong communication and interpersonal skills to work in cross-functional teams.


Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Delhi
10 - 15 yrs
₹90L - ₹120L / yr
Data engineering
Java
Must have recent 4+ YOE with high-growth Product startups, and...
Mandatory (Experience 4) - Must have strong experience in large...
Top college only
+1 more

Strong Data Architect, Lead Data Engineer, Engineering Manager / Director Profile

Mandatory (Experience 1) - Must have 10+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role

Mandatory (Experience 2) - Must have 7+ YOE in hands-on Tech development with Java (Highly preferred) or Python, Node.JS, GoLang

Mandatory (Experience 3) - Must have recent 4+ YOE with high-growth Product startups, and should have implemented Data Engineering systems from an early stage in the Company

Mandatory (Experience 4) - Must have strong experience in large data technologies, tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto etc.

Mandatory (Experience 5) - Strong expertise in HLD and LLD, to design scalable, maintainable data architectures.

Mandatory (Team Management) - Must have managed a team of at least 5 Data Engineers (the leadership role should be evident in the CV)

Mandatory (Education) - Must be from a Tier-1 college, IIT preferred

Mandatory (Company) - B2B product companies with high data traffic
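
To ground the tooling list above (Airflow in particular), here is a minimal Airflow 2.x DAG sketch with two dependent tasks. The DAG and task names are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")

def load():
    print("write to warehouse")

with DAG(
    dag_id="daily_ingest",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # extract must finish before load runs.
    PythonOperator(task_id="extract", python_callable=extract) >> PythonOperator(
        task_id="load", python_callable=load
    )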

Preferred Companies

MoEngage, Whatfix, Netcore Cloud, Clevertap, Hevo Data, Snowflake, Chargebee, Fractor.ai, Databricks, Dataweave, Wingman, Postman, Zoho, HighRadius, Freshworks, Mindtickle

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Bengaluru (Bangalore)
5 - 8 yrs
₹4L - ₹25L / yr
Data engineering
Python
Spark

🛠️ Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Python and Apache Spark (PySpark or Scala APIs)
  • Develop and optimize ETL processes for batch and real-time data ingestion
  • Collaborate with data scientists, analysts, and DevOps teams to support data-driven solutions
  • Ensure data quality, integrity, and governance across all stages of the data lifecycle
  • Implement data validation, monitoring, and alerting mechanisms for production pipelines
  • Work with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Kafka, and Delta Lake
  • Participate in code reviews, performance tuning, and documentation
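
As an illustration of the pipeline and data-quality responsibilities above, below is a minimal PySpark batch ETL sketch with a simple validation gate. Bucket paths and column names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

# Hypothetical raw zone in S3.
raw = spark.read.option("header", True).csv("s3a://my-bucket/raw/events/")

cleaned = (
    raw.filter(F.col("user_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
)

# Basic data-quality gate before publishing downstream.
bad_rows = cleaned.filter(F.col("event_ts").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed timestamp parsing")

cleaned.write.mode("append").partitionBy("event_date").parquet(
    "s3a://my-bucket/curated/events/"
)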


🎓 Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 3–6 years of experience in data engineering with a focus on Python and Spark
  • Experience with distributed computing and handling large-scale datasets (10TB+)
  • Familiarity with data security, PII handling, and compliance standards is a plus


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Hyderabad
4 - 8 yrs
₹10L - ₹24L / yr
Python
Data engineering
Amazon Web Services (AWS)
RESTful APIs
Microservices
+9 more

Job Title : Python Data Engineer

Experience : 4+ Years

Location : Bangalore / Hyderabad (On-site)


Job Summary :

We are seeking a skilled Python Data Engineer to work on cloud-native data platforms and backend services.

The role involves building scalable APIs, working with diverse data systems, and deploying containerized services using modern cloud infrastructure.


Mandatory Skills : Python, AWS, RESTful APIs, Microservices, SQL/PostgreSQL/NoSQL, Docker, Kubernetes, CI/CD (Jenkins/GitLab CI/AWS CodePipeline)


Key Responsibilities :

  • Design, develop, and maintain backend systems using Python.
  • Build and manage RESTful APIs and microservices architectures.
  • Work extensively with AWS cloud services for deployment and data storage.
  • Implement and manage SQL, PostgreSQL, and NoSQL databases.
  • Containerize applications using Docker and orchestrate with Kubernetes.
  • Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
  • Collaborate with teams to ensure scalable and reliable software delivery.
  • Troubleshoot and optimize application performance.
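
To illustrate the API and microservices work described above, here is a minimal REST service sketch using FastAPI. The service name, endpoints, and in-memory store are hypothetical; a real implementation would back this with PostgreSQL/NoSQL storage and run behind the CI/CD and Kubernetes setup the posting mentions.

from typing import Dict

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name

class Order(BaseModel):
    order_id: int
    amount: float

_ORDERS: Dict[int, Order] = {}  # in-memory stand-in for PostgreSQL/NoSQL storage

@app.post("/orders")
def create_order(order: Order) -> Order:
    _ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]

Run locally with uvicorn main:app --reload (assuming the file is saved as main.py); containerising it with Docker and deploying via Kubernetes would follow from there.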


Must-Have Skills :

  • 4+ years of hands-on experience in Python backend development.
  • Strong experience with AWS cloud infrastructure.
  • Proficiency in building microservices and APIs.
  • Good knowledge of relational and NoSQL databases.
  • Experience with Docker and Kubernetes.
  • Familiarity with CI/CD tools and DevOps processes.
  • Strong problem-solving and collaboration skills.
Read more
NeoGenCode Technologies Pvt Ltd
Pune
8 - 15 yrs
₹5L - ₹24L / yr
Data engineering
Snow flake schema
SQL
ETL
ELT
+5 more

Job Title : Data Engineer – Snowflake Expert

Location : Pune (Onsite)

Experience : 10+ Years

Employment Type : Contractual

Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.


Job Summary :

We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.

The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.

Responsibilities :

  • Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
  • Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
  • Ensure high data quality, security, and adherence to governance frameworks.
  • Conduct code reviews and align development with best practices.
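
For context on the Snowpipe/Streams/Tasks skills listed above, the sketch below uses the snowflake-connector-python package to set up a simple Streams-plus-Tasks change-capture pattern. Account, warehouse, and table names are hypothetical placeholders.

import snowflake.connector

# Hypothetical credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Change-data capture on a raw table via a Stream.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE RAW.ORDERS")

# A Task that periodically merges captured changes into a curated table.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO CURATED.ORDERS
      SELECT * FROM orders_stream WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks are created suspended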

Qualifications :

  • Bachelor’s in Computer Science, Data Science, IT, or related field.
  • Snowflake certifications (Pro/Architect) preferred.
Read more
Hunarstreet Technologies pvt ltd

Hunarstreet Technologies pvt ltd

Agency job
via Hunarstreet Technologies pvt ltd by Sakshi Patankar
Remote only
10 - 20 yrs
₹15L - ₹30L / yr
Data engineering
databricks
Python
Scala
Spark
+14 more

What You’ll Be Doing:

● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.

● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

● Work with machine learning, data, and analytics experts to drive innovation, accuracy, and greater functionality in our data system.

Qualifications:

● Bachelor's degree in Engineering, Computer Science, or relevant field.

● 10+ years of relevant and recent experience in a Data Engineer role.

● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Strong coding skills in Scala, Python, Java, and/or other languages, with the ability to switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.

● Comfortable working in a Linux shell environment and writing scripts as needed.

● Comfortable working in an Agile environment

● Machine Learning knowledge is a plus.

● Must be capable of working independently and delivering stable, efficient and reliable software.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment
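
As context for the Spark and Delta-format expectations above, here is a minimal Delta Lake sketch (read, write, and MERGE-based upsert) as it might run on a Databricks cluster, where Delta is available by default. Paths and the join key are hypothetical.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_demo").getOrCreate()

# Land raw JSON as a Delta table (hypothetical paths).
events = spark.read.json("/mnt/raw/events/")
events.write.format("delta").mode("overwrite").save("/mnt/lake/events/")

# Delta supports in-place upserts via MERGE.
target = DeltaTable.forPath(spark, "/mnt/lake/events/")
late_arrivals = spark.read.json("/mnt/raw/events_late/")
(
    target.alias("t")
    .merge(late_arrivals.alias("u"), "t.event_id = u.event_id")  # hypothetical key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)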


EMPLOYMENT TYPE: Full-Time, Permanent

LOCATION: Remote (Pan India)

SHIFT TIMINGS: 2:00 PM - 11:00 PM IST

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Hanisha Pralayakaveri
Posted by Hanisha Pralayakaveri
Bengaluru (Bangalore), Mumbai
5 - 9 yrs
Best in industry
Python
Amazon Web Services (AWS)
PySpark
Data engineering

Job Description: Data Engineer 

Position Overview:

Role Overview

We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

 

Key Responsibilities

· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.

· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).

· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.

· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.

· Ensure data quality and consistency by implementing validation and governance practices.

· Work on data security best practices in compliance with organizational policies and regulations.

· Automate repetitive data engineering tasks using Python scripts and frameworks.

· Leverage CI/CD pipelines for deployment of data workflows on AWS.
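
To make the AWS services above concrete, here is a minimal boto3 sketch that stages a file in S3 and triggers a Glue ETL job. The bucket, key, and job name are hypothetical, and AWS credentials are assumed to be configured in the environment.

import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Stage a validated extract in S3 (hypothetical bucket and key).
s3.upload_file("daily_extract.csv", "my-data-lake", "staging/daily_extract.csv")

# Kick off a Glue ETL job that transforms the staged file (hypothetical job name).
run = glue.start_job_run(
    JobName="transform-daily-extract",
    Arguments={"--source_key": "staging/daily_extract.csv"},
)
print("started Glue run:", run["JobRunId"])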

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
10 - 15 yrs
₹120L - ₹200L / yr
Mandatory (Experience 1) - Must have 10+ YOE in Data...
Data engineering
Mandatory (Experience 2) - Must have 7+ YOE in hands-on...
Mandatory (Experience 3) - Must have recent 4+ YOE with...
Mandatory (Education) - Only IIT colleges Mandatory (Company) - B2B...
+1 more

Ideal Candidate

10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.

Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.

Proficiency in SQL, Python, and Scala for data processing and analytics.

Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.

Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice

Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.

Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery

Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).

Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.

Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.

Proven ability to drive technical strategy and align it with business objectives.

Strong leadership, communication, and stakeholder management skills.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
8 - 13 yrs
₹80L - ₹100L / yr
Data engineering
Java
Mandatory (Experience 4) - Must have strong experience in large...
Mandatory (Experience 5) - Strong expertise in HLD and LLD,...

Ideal Candidate

10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.

Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.

Proficiency in SQL, Python, and Scala for data processing and analytics.

Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.

Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice

Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.

Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery

Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).

Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.

Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.

Proven ability to drive technical strategy and align it with business objectives.

Strong leadership, communication, and stakeholder management skills.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
8 - 13 yrs
₹70L - ₹100L / yr
Mandatory (Experience 1) - Must have 5+ YOE in Data...
Mandatory (Experience 3) - Must have recent 4+ YOE with...
Mandatory (Team Management) - Must have managed a team of...
Data engineering
Java

Ideal Candidate

10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.

Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.

Proficiency in SQL, Python, and Scala for data processing and analytics.

Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.

Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice

Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.

Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery

Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).

Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.

Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.

Proven ability to drive technical strategy and align it with business objectives.

Strong leadership, communication, and stakeholder management skills

Read more
ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+3 more

Work Mode: Hybrid


Candidates must hold a B.Tech, BE, M.Tech, or ME degree - Mandatory



Must-Have Skills:

● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note that the salary bracket will vary according to the candidate's experience:

- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA

- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA

- Experience more than 8 yrs - Salary up to 40 LPA

Read more
Big4

Big4

Agency job
via Black Turtle by Kajol Teli
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹5L - ₹20L / yr
Google Cloud Platform (GCP)
Data engineering

Hiring for a Big 4 company

GCP Data engineer

GCP - Mandatory

3-7 Years

Gurgaon location

Only candidates serving their notice period or immediate joiners can apply

Notice period - less than 30 Days

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Seema Srivastava
Posted by Seema Srivastava
Mumbai
5 - 10 yrs
Best in industry
Python
SQL
Databases
Data engineering
Amazon Web Services (AWS)

Job Description: Data Engineer 

Position Overview:

Role Overview

We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

 

Key Responsibilities

· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.

· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).

· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.

· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.

· Ensure data quality and consistency by implementing validation and governance practices.

· Work on data security best practices in compliance with organizational policies and regulations.

· Automate repetitive data engineering tasks using Python scripts and frameworks.

· Leverage CI/CD pipelines for deployment of data workflows on AWS.

 

Required Skills and Qualifications

· Professional Experience: 5+ years of experience in data engineering or a related field.

· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.

· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:

· AWS Glue for ETL/ELT.

· S3 for storage.

· Redshift or Athena for data warehousing and querying.

· Lambda for serverless compute.

· Kinesis or SNS/SQS for data streaming.

· IAM Roles for security.

· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.

· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.

· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.

· Version Control: Proficient with Git-based workflows.

· Problem Solving: Excellent analytical and debugging skills.
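
As an illustration of the Athena querying mentioned above, the sketch below submits a query via boto3, polls for completion, and reads the results. The database, table, and results bucket are hypothetical.

import time

import boto3

athena = boto3.client("athena")

# Submit a query against a hypothetical Glue-catalogued table.
submission = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS n FROM events GROUP BY region",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = submission["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])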

 

Optional Skills

· Knowledge of data modeling and data warehouse design principles.

· Experience with data visualization tools (e.g., Tableau, Power BI).

· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).

· Exposure to other programming languages like Scala or Java.

 

Education

· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

 

Why Join Us?

· Opportunity to work on cutting-edge AWS technologies.

· Collaborative and innovative work environment.

 

 

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
8 - 13 yrs
₹50L - ₹100L / yr
Data engineering
Mandatory (Experience 2) - Must have 7+ YOE in hands-on...
Java
Mandatory (Experience 3) - Must have recent 4+ YOE with...
Mandatory (Experience 4) - Must have strong experience in large...
+3 more

Ideal Candidate

10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.

Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.

Proficiency in SQL, Python, and Scala for data processing and analytics.

Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.

Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice

Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.

Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery

Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).

Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.

Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.

Proven ability to drive technical strategy and align it with business objectives.

Strong leadership, communication, and stakeholder management skills

Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the Job Description - 


Location - Viman Nagar, Pune

Mode - 5 days working


Required Tech Skills:


 ● Strong in PySpark and Python

 ● Good understanding of data structures

 ● Good at SQL queries and query optimization

 ● Strong fundamentals of OOP

 ● Good understanding of AWS Cloud, Big Data. 

 ● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB  
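
To illustrate the SQL/query-optimization fundamentals in the list above, here is a minimal PySpark sketch showing a partition-pruned read: filtering on the partition column lets Spark skip untouched partitions entirely. The bucket path and column names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pruned_read").getOrCreate()

# Filtering on the partition column lets Spark prune untouched partitions
# instead of scanning the whole dataset (hypothetical path and columns).
events = spark.read.parquet("s3a://my-bucket/curated/events/").filter(
    F.col("event_date") == "2024-06-01"
)

# explain() shows whether PartitionFilters / PushedFilters were applied.
events.select("user_id", "event_type").explain()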


Read more
Remote only
7 - 12 yrs
₹20L - ₹35L / yr
Snowflake
Looker / LookML
Data engineering
skill iconAmazon Web Services (AWS)
Data modeling

Role & Responsibilities

Data Organization and Governance: Define and maintain governance standards that span multiple systems (AWS, Fivetran, Snowflake, PostgreSQL, Salesforce/nCino, Looker), ensuring that data remains accurate, accessible, and organized across the organization.

Solve Data Problems Proactively: Address recurring data issues that sidetrack operational and strategic initiatives by implementing processes and tools to anticipate, identify, and resolve root causes effectively.

System Integration: Lead the integration of diverse systems into a cohesive data environment, optimizing workflows and minimizing manual intervention.

Hands-On Problem Solving: Take a hands-on approach to resolving reporting issues and troubleshooting data challenges when necessary, ensuring minimal disruption to business operations.

Collaboration Across Teams: Work closely with business and technical stakeholders to understand and solve our biggest challenges

Mentorship and Leadership: Guide and mentor team members, fostering a culture of accountability and excellence in data management practices.

Strategic Data Support: Ensure that marketing, analytics, and other strategic initiatives are not derailed by data integrity issues, enabling the organization to focus on growth and innovation.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
8 - 13 yrs
₹70L - ₹90L / yr
Data engineering
Apache Spark
Apache Kafka
Java
Python
+6 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

Read more
Adesso

Adesso

Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
3 - 6 yrs
₹4L - ₹24L / yr
Data engineering
skill iconAmazon Web Services (AWS)
Windows Azure
Snowflake
Data Transformation Tool (DBT)
+3 more

We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms such as AWS or Azure, and especially in Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the associated data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.

Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool) 

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.

Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound and solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science, or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

Read more
Adesso India

Adesso India

Agency job
via HashRoot by Deepak S
Remote only
3 - 11 yrs
₹6L - ₹27L / yr
Data engineering
Data architecture
skill iconAmazon Web Services (AWS)
Windows Azure
Data Transformation Tool (DBT)
+3 more

Overview

adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.

Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.


Job Description

We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms such as AWS or Azure, and especially in Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the associated data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.


Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool) 

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.


Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound and solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science, or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
11 - 18 yrs
₹70L - ₹80L / yr
Java
Go Programming (Golang)
NodeJS (Node.js)
Python
Apache Kafka
+7 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
11 - 18 yrs
₹50L - ₹70L / yr
Java
Data engineering
NodeJS (Node.js)
Python
Go Programming (Golang)
+5 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

Read more