
Data Engineer

Posted by Afzal Mohammed
5 - 8 yrs
₹18L - ₹24L / yr
Hyderabad
Skills
Python
PySpark
Palantir Foundry

Job Description


Job Title: Data Engineer

Location: Hyderabad, India

Job Type: Full Time

Experience: 5 – 8 Years

Working Model: On-Site (No remote or work-from-home options available)

Work Schedule: Mountain Time Zone (3:00 PM to 11:00 PM IST)

Role Overview

The Data Engineer will be responsible for designing and implementing scalable backend systems, leveraging Python and PySpark to build high-performance solutions. The role requires a proactive and detail-oriented individual who can solve complex data engineering challenges while collaborating with cross-functional teams to deliver quality results.

Key Responsibilities

  • Develop and maintain backend systems using Python and PySpark.
  • Optimise and enhance system performance for large-scale data processing.
  • Collaborate with cross-functional teams to define requirements and deliver solutions.
  • Debug, troubleshoot, and resolve system issues and bottlenecks.
  • Follow coding best practices to ensure code quality and maintainability.
  • Utilise tools like Palantir Foundry for data management workflows (good to have).
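One coding best practice the responsibilities above point at is keeping transformation logic in small, pure functions that can be unit-tested before being ported to a DataFrame API. A minimal pure-Python sketch (the `Event` record and field names are illustrative, not from the listing):

```python
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    amount: float

def total_per_user(events):
    """Aggregate amounts per user -- the same shape of logic a PySpark
    groupBy/sum job would express, kept as a pure function so it is
    easy to unit-test before moving to a DataFrame API."""
    totals = {}
    for e in events:
        totals[e.user_id] = totals.get(e.user_id, 0.0) + e.amount
    return totals
```

Once verified, the same groupBy/sum shape translates directly to PySpark, e.g. `df.groupBy("user_id").sum("amount")`.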

Qualifications

  • Strong proficiency in Python backend development.
  • Hands-on experience with PySpark for data engineering.
  • Excellent problem-solving skills and attention to detail.
  • Good communication skills for effective team collaboration.
  • Experience with Palantir Foundry or similar platforms is a plus.

Preferred Skills

  • Experience with large-scale data processing and pipeline development.
  • Familiarity with agile methodologies and development tools.
  • Ability to optimise and streamline backend processes effectively.


Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

About Indigrators solutions

Founded :
2023
Type :
Products & Services
Size :
0-20
Stage :
Profitable

About

Indigrators Solutions Pvt. Ltd. is a software product development company in Hyderabad, India, offering services such as software development, enterprise solutions implementation, and global customer support across different time zones and industries. We work with companies worldwide to build scalable, results-driven software development teams in Hyderabad.

At Indigrators:

  • We believe in promoting an environment that values transparency, empowerment and inclusivity.
  • We encourage open and honest communication, without fear of judgement, to move forward as a unified team.
  • We recognize and reward each individual's contribution to our mission, empowering them to grow and progress with us.
  • We value diversity and inclusion, making sure that everyone, no matter where they come from, feels like a part of the Indigrators family.

Come join us.

Company social profiles

linkedin

Similar jobs

Deqode
Posted by Purvisha Bhavsar
Pune
5 - 6 yrs
₹4L - ₹10L / yr
Windows Azure
Python
PySpark
ADF
Databricks
+2 more

🚀 Hiring: Data Engineer (Azure) at Deqode

⭐ Experience: 5+ Years

📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bengaluru

⭐ Work Mode: Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


⭐ Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence

We are looking for a Databricks Data Engineer (Azure) to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.


🔹 Key Responsibilities

✅ Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)

✅ Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing

✅ Develop Structured Streaming pipelines with watermarking, late data handling & restart safety

✅ Implement declarative pipelines using Lakeflow

✅ Design idempotent, replayable pipelines with safe backfills

✅ Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)

✅ Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications

✅ Package and deploy using Databricks Repos & Asset Bundles (CI/CD)

✅ Ensure governance using Unity Catalog and embedded data quality checks
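The "idempotent, replayable pipelines" item above boils down to merge-by-key semantics: reapplying the same batch must not change the result. A minimal pure-Python sketch of that property (a stand-in for a Delta MERGE, with illustrative record shapes):

```python
def merge_batch(target, batch, key="id"):
    """Upsert records into `target`, keyed by `key`. Applying the same
    batch twice leaves `target` unchanged -- the idempotency property
    that makes backfills and replays safe."""
    for record in batch:
        target[record[key]] = record
    return target
```

In a real Databricks pipeline the same guarantee comes from `MERGE INTO` on a Delta table plus checkpointed ingestion, so a restarted job never double-counts.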


✅ Mandatory Skills (Must Have)

👉 Databricks & Delta Lake (Advanced Optimization & Performance Tuning)

👉 Structured Streaming & Autoloader Implementation

👉 Databricks SQL (DBSQL) & Data Modeling for Analytics

Deqode
Posted by Purvisha Bhavsar
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹5L - ₹15L / yr
Google Cloud Platform (GCP)
Python
PySpark
.NET
Scala

🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)


We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.


📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)

💼 Experience: 6–10 Years

⏱️ Notice Period: Immediate joiners only


Required Skills:

  • 5+ years of experience in distributed computing (Spark) and software development.
  • 3+ years of experience in Spark-Scala
  • 5+ years of experience in Data Engineering.
  • 5+ years of experience in Python.
  • Fluency in working with databases (preferably Postgres).
  • Have a sound understanding of object-oriented programming and development principles.
  • Experience working in an Agile Scrum or Kanban development environment.
  • Experience working with version control software (preferably Git).
  • Experience with CI/CD pipelines.
  • Experience with automated testing, including integration/delta, load, and performance testing.
Wissen Technology
Posted by Janane Mohanasankaran
Mumbai, Pune
3 - 6 yrs
Best in industry
Python
PySpark
pandas
SQL
ADF
+2 more

* Python (3 to 6 years): Strong expertise in data workflows and automation

* Spark (PySpark): Hands-on experience with large-scale data processing

* Pandas: For detailed data analysis and validation

* Delta Lake: Managing structured and semi-structured datasets at scale

* SQL: Querying and performing operations on Delta tables

* Azure Cloud: Compute and storage services

* Orchestrator: Good experience with either ADF or Airflow
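As a sketch of the pandas validation work described above (the `order_id`/`amount` column names are illustrative, not from the listing):

```python
import pandas as pd

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Basic data-quality checks of the kind run before loading a Delta
    table: drop duplicate keys, then reject negative amounts."""
    df = df.drop_duplicates(subset=["order_id"])
    bad = df["amount"] < 0
    return df[~bad].reset_index(drop=True)
```

The same checks can later be pushed down into Spark or expressed as table constraints; pandas is convenient for developing and unit-testing them first.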

Wissen Technology
Posted by Rutuja Patil
Mumbai
4 - 10 yrs
Best in industry
Java
J2EE
Hibernate (Java)
Spring Boot
Spring MVC
+2 more

Company Name – Wissen Technology

Group of companies in India – Wissen Technology & Wissen Infotech

Job Title - Senior Backend Developer – Java (with Python Exposure)

Work Location - Mumbai


Experience - 4 to 10 years


Kindly revert over mail if you are interested.


Java Developer – Job Description


We are seeking a Senior Backend Developer with strong expertise in Java (Spring Boot) and working knowledge of Python. In this role, Java will be your primary development language, with Python used for scripting, automation, or selected service modules. You’ll be part of a collaborative backend team building scalable and high-performance systems.


Key Responsibilities


  • Design and develop robust backend services and APIs primarily using Java (Spring Boot)
  • Contribute to Python-based components where needed for automation, scripting, or lightweight services
  • Build, integrate, and optimize RESTful APIs and microservices
  • Work with relational and NoSQL databases
  • Write unit and integration tests (JUnit, PyTest)
  • Collaborate closely with DevOps, QA, and product teams
  • Participate in architecture reviews and design discussions
  • Help maintain code quality, organization, and automation


Required Skills & Qualifications

  • 4 to 10 years of hands-on Java development experience
  • Strong experience with Spring Boot, JPA/Hibernate, and REST APIs
  • At least 1–2 years of hands-on experience with Python (e.g., for scripting, automation, or small services)
  • Familiarity with Python frameworks like Flask or FastAPI is a plus
  • Experience with SQL/NoSQL databases (e.g., PostgreSQL, MongoDB)
  • Good understanding of OOP, design patterns, and software engineering best practices
  • Familiarity with Docker, Git, and CI/CD pipelines


Ganit Business Solutions
Agency job
via hirezyai by HR Hirezyai
Bengaluru (Bangalore), Chennai, Mumbai
5.5 - 12 yrs
₹15L - ₹25L / yr
Amazon Web Services (AWS)
PySpark
SQL

Roles & Responsibilities

  • Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
  • Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
  • Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems.
  • Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
  • Data Modeling: Develop and maintain data models with strong governance practices.
  • Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
  • Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for secure, tamper-resistant data management.
  • Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.

Qualifications

  • Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
  • Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.

Skills

  • Must-have: Proficiency in Spark, SQL, Python, and other relevant data processing technologies. Strong knowledge of Databricks and its components, including Delta Live Table (DLT) pipeline implementations. Expertise in on-premises-to-cloud Spark code optimization and Medallion Architecture.

Good to Have

  • Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).

Soft Skills

  • Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.

Certifications

  • AWS/GCP/Azure Data Engineer Certification.


Deqode
Posted by Shraddha Katare
Bengaluru (Bangalore), Pune, Chennai, Mumbai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL
redshift

Profile: AWS Data Engineer

Mode: Hybrid

Experience: 5–7 years

Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram


Roles and Responsibilities

  • Design and maintain ETL pipelines using AWS Glue and Python/PySpark
  • Optimize SQL queries for Redshift and Athena
  • Develop Lambda functions for serverless data processing
  • Configure AWS DMS for database migration and replication
  • Implement infrastructure as code with CloudFormation
  • Build optimized data models for performance
  • Manage RDS databases and AWS service integrations
  • Troubleshoot and improve data processing efficiency
  • Gather requirements from business stakeholders
  • Implement data quality checks and validation
  • Document data pipelines and architecture
  • Monitor workflows and implement alerting
  • Keep current with AWS services and best practices
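The Lambda item above, serverless data processing, typically centers on a handler function. A minimal sketch (the event shape and field names are assumptions, not from the listing):

```python
import json

def handler(event, context=None):
    """A minimal AWS Lambda-style handler: take records from the event,
    filter out invalid ones, and return a JSON summary."""
    records = event.get("records", [])
    valid = [r for r in records if r.get("amount", 0) > 0]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(valid),
                            "skipped": len(records) - len(valid)}),
    }
```

In production the event would come from a trigger such as S3 or Kinesis, and the handler would write results onward rather than just summarizing them.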


Required Technical Expertise:

  • Python/PySpark for data processing
  • AWS Glue for ETL operations
  • Redshift and Athena for data querying
  • AWS Lambda and serverless architecture
  • AWS DMS and RDS management
  • CloudFormation for infrastructure
  • SQL optimization and performance tuning
Wissen Technology
Posted by Nishita Bangera
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Python
SQL
PySpark
Django

Key Responsibilities

  • Develop and maintain Python-based applications.
  • Design and optimize SQL queries and databases.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Write clean, maintainable, and efficient code.
  • Troubleshoot and debug applications.
  • Participate in code reviews and contribute to team knowledge sharing.
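On "design and optimize SQL queries": a common optimization is adding an index so a point lookup becomes an index search instead of a full table scan. A self-contained sketch using Python's built-in sqlite3 (table and column names are illustrative):

```python
import sqlite3

# Compare the query plan for a point lookup before and after indexing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"c{i % 100}") for i in range(1000)])

plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'c7'"
).fetchall()  # full scan of the table

conn.execute("CREATE INDEX idx_customer ON orders(customer)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'c7'"
).fetchall()  # index search via idx_customer
```

The same before/after discipline (inspect the plan, add or adjust an index, re-inspect) carries over to PostgreSQL's `EXPLAIN` and most other engines.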

Qualifications and Required Skills

  • Strong proficiency in Python programming.
  • Experience with SQL and database management.
  • Experience with web frameworks such as Django or Flask.
  • Knowledge of front-end technologies like HTML, CSS, and JavaScript.
  • Familiarity with version control systems like Git.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork skills.

Good to Have Skills

  • Experience with cloud platforms like AWS or Azure.
  • Knowledge of containerization technologies like Docker.
  • Familiarity with continuous integration and continuous deployment (CI/CD) pipelines


NA
Agency job
via Method Hub by Sampreetha Pai
Anywhere in India
4 - 5 yrs
₹18L - ₹22L / yr
SQL Azure
Apache Spark
DevOps
PySpark
Python
+1 more

Azure DE

Primary Responsibilities -

  • Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
  • Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
  • Create data models for analytics purposes
  • Utilizing Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations
  • Use Azure Data Factory and Databricks to assemble large, complex data sets
  • Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data
  • Ensure data security and compliance
  • Collaborate with data engineers, and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
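The validation-and-cleansing responsibility above is often implemented as a pass that coerces types and collects rejects with a reason instead of failing the whole batch. A minimal pure-Python sketch (field names are illustrative):

```python
from datetime import date

def cleanse(rows):
    """Cleansing pass before loading: trim strings, coerce ISO dates,
    and collect rejected rows with a reason for later inspection."""
    clean, rejects = [], []
    for row in rows:
        try:
            clean.append({
                "name": row["name"].strip(),
                "dob": date.fromisoformat(row["dob"]),
            })
        except (KeyError, ValueError) as exc:
            rejects.append({"row": row, "reason": str(exc)})
    return clean, rejects
```

Keeping rejects alongside a reason makes the quality checks auditable, which is what the compliance item in this listing ultimately requires.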

Required skills:

  • Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
  • Azure DevOps
  • Apache Spark, Python
  • SQL proficiency
  • Azure Databricks knowledge
  • Big data technologies


The DEs should be well versed in coding, Spark core, and data ingestion using Azure, and should have good communication skills. They should also have core Azure DE skills and coding skills (PySpark, Python, and SQL).

Out of the 7 open positions, 5 of the Azure Data Engineers should have a minimum of 5 years of relevant Data Engineering experience.


Persistent System Ltd
Persistent System Ltd
Agency job
via Milestone Hr Consultancy by Haina khan
Pune, Bengaluru (Bangalore), Hyderabad
4 - 9 yrs
₹8L - ₹27L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
Greetings,

We have an urgent requirement for a Data Engineer/Sr. Data Engineer at a reputed MNC.

Exp: 4-9 yrs

Location: Pune/Bangalore/Hyderabad

Skills: Candidates should have either Python + AWS, PySpark + AWS, or Spark + Scala.
MNC
Remote, Bengaluru (Bangalore)
4 - 12 yrs
₹15L - ₹30L / yr
PySpark
Pyspark Developer
Scala
DevOps

Experience: Developer, 4 to 12 years

Must have low-level design and development skills and should be able to design a solution for given use cases.

  • Agile delivery: must be able to show design and code on a daily basis
  • Must be an experienced PySpark developer with Scala coding experience; the primary skill is PySpark
  • Must have experience in designing job orchestration, sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling
  • Good experience with unit, integration, and UAT support
  • Able to design and code reusable components and functions
  • Should be able to review design and code, and provide review comments with justification
  • Zeal to learn and adopt new tools/technologies
  • Good to have: experience with DevOps and CI/CD
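Several of the items above (job orchestration, audit trail, error/exception handling) can be sketched in plain Python; this is an illustrative pattern, not any specific orchestrator's API:

```python
import time

def run_job(steps, params):
    """Run pipeline steps in sequence, recording one audit-trail entry
    per step and stopping at the first failure. `steps` is a list of
    (name, callable) pairs; `params` are dynamic parameters passed to
    every step."""
    audit = []
    for name, fn in steps:
        entry = {"step": name, "started": time.time()}
        try:
            entry["result"] = fn(params)
            entry["status"] = "ok"
        except Exception as exc:
            entry["status"] = "failed"
            entry["error"] = str(exc)
            audit.append(entry)
            break  # stop the sequence on error
        audit.append(entry)
    return audit
```

In a real PySpark deployment the audit entries would be written to a metadata table, but the shape (named steps, per-step status, early stop with a recorded error) is the same.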