

Job Description
Job Title: Data Engineer
Location: Hyderabad, India
Job Type: Full Time
Experience: 5 – 8 Years
Working Model: On-Site (No remote or work-from-home options available)
Work Schedule: Mountain Time Zone (3:00 PM to 11:00 PM IST)
Role Overview
The Data Engineer will be responsible for designing and implementing scalable backend systems, leveraging Python and PySpark to build high-performance solutions. The role requires a proactive and detail-oriented individual who can solve complex data engineering challenges while collaborating with cross-functional teams to deliver quality results.
Key Responsibilities
- Develop and maintain backend systems using Python and PySpark.
- Optimise and enhance system performance for large-scale data processing.
- Collaborate with cross-functional teams to define requirements and deliver solutions.
- Debug, troubleshoot, and resolve system issues and bottlenecks.
- Follow coding best practices to ensure code quality and maintainability.
- Utilise tools like Palantir Foundry for data management workflows (good to have).
Qualifications
- Strong proficiency in Python backend development.
- Hands-on experience with PySpark for data engineering.
- Excellent problem-solving skills and attention to detail.
- Good communication skills for effective team collaboration.
- Experience with Palantir Foundry or similar platforms is a plus.
Preferred Skills
- Experience with large-scale data processing and pipeline development.
- Familiarity with agile methodologies and development tools.
- Ability to optimise and streamline backend processes effectively.
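As an illustration of the Python/PySpark work described above, here is a minimal sketch of a batch aggregation job. The dataset paths, column names, and schema are hypothetical and not part of the role description.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: aggregate daily event counts from a large parquet dataset.
spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # illustrative input path
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
)
# Partitioning by date keeps downstream reads selective at large volumes.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet("s3://example-bucket/daily_counts/")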

About Indigrators Solutions
Company Name – Wissen Technology
Group of companies in India – Wissen Technology & Wissen Infotech
Role – Senior Backend Developer – Java (with Python Exposure)
Work Location – Mumbai
Experience - 4 to 10 years
Kindly reply by email if you are interested.
Java Developer – Job Description
We are seeking a Senior Backend Developer with strong expertise in Java (Spring Boot) and working knowledge of Python. In this role, Java will be your primary development language, with Python used for scripting, automation, or selected service modules. You’ll be part of a collaborative backend team building scalable and high-performance systems.
Key Responsibilities
- Design and develop robust backend services and APIs primarily using Java (Spring Boot)
- Contribute to Python-based components where needed for automation, scripting, or lightweight services
- Build, integrate, and optimize RESTful APIs and microservices
- Work with relational and NoSQL databases
- Write unit and integration tests (JUnit, PyTest)
- Collaborate closely with DevOps, QA, and product teams
- Participate in architecture reviews and design discussions
- Help maintain code quality, organization, and automation
Required Skills & Qualifications
- 4 to 10 years of hands-on Java development experience
- Strong experience with Spring Boot, JPA/Hibernate, and REST APIs
- At least 1–2 years of hands-on experience with Python (e.g., for scripting, automation, or small services)
- Familiarity with Python frameworks like Flask or FastAPI is a plus
- Experience with SQL/NoSQL databases (e.g., PostgreSQL, MongoDB)
- Good understanding of OOP, design patterns, and software engineering best practices
- Familiarity with Docker, Git, and CI/CD pipelines
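To illustrate the kind of lightweight Python service module mentioned above, here is a minimal FastAPI sketch; the endpoint and service name are invented for the example, and FastAPI is only one of the frameworks named as a plus.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Health(BaseModel):
    service: str
    status: str

@app.get("/health", response_model=Health)
def health_check() -> Health:
    # A small status endpoint a Java-centric team might delegate to a Python module.
    return Health(service="report-helper", status="ok")

# Run locally with: uvicorn main:app --reload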

About the Role
We are looking for a highly motivated Project Manager with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.
Key Responsibilities
- Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
- Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
- Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
- Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
- Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
- Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
- Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
- Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.
Required Skills & Experience
Cloud & DevOps
- Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
- Hands-on experience with cloud deployments and infrastructure as code.
- Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).
Big Data & Data Engineering
- Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred).
- Proven experience in managing and optimizing big data pipelines and ensuring high performance.
Programming & Frameworks
- Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
- Familiarity with Git and version control best practices.
- Basic knowledge of Linux administration and shell scripting.
Nice to Have
- Knowledge or prior experience in the Media & Advertising domain.
- Experience in client-facing roles and handling stakeholder communications.
- Proven ability to manage technical teams (5–6 members).
Why Join Us?
- Work on cutting-edge cloud and data engineering projects
- Collaborate with a talented, fast-paced team
- Flexible work setup and culture of ownership
- Continuous learning and upskilling environment
- Inclusive health benefits


Job title - Python developer
Exp – 4 to 6 years
Location – Pune/Mumbai/Bengaluru
Please find below the job description.
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
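As a rough sketch of the pipeline, validation, and cleansing duties listed above (assuming a hypothetical orders CSV and invented column names):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-cleansing").getOrCreate()

# Illustrative input; schema and paths are assumptions for the example.
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

clean = (
    orders
    .dropna(subset=["order_id"])                       # drop rows missing the key
    .withColumn("country", F.upper(F.col("country")))  # normalise casing
    .withColumn("amount", F.col("amount").cast("double"))
)

valid = clean.filter(F.col("amount") >= 0)
rejected = clean.filter(F.col("amount").isNull() | (F.col("amount") < 0))

valid.write.mode("overwrite").parquet("/data/curated/orders/")
rejected.write.mode("overwrite").parquet("/data/quarantine/orders/")  # quarantined for review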



🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiner
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing
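As a small illustration of the automated-testing expectation above, here is a pytest sketch for a PySpark transformation; the function under test and the test data are hypothetical (the role's Spark-Scala and .NET components are not shown).

import pytest
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    # Local session so the test runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def dedupe_latest(df):
    # Function under test (illustrative): keep the most recent row per id.
    w = Window.partitionBy("id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")

def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (1, "2024-02-01"), (2, "2024-01-15")],
        ["id", "updated_at"],
    )
    result = {(r["id"], r["updated_at"]) for r in dedupe_latest(df).collect()}
    assert result == {(1, "2024-02-01"), (2, "2024-01-15")}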

Profile: AWS Data Engineer
Mode- Hybrid
Experience – 5–7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
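As an illustration of the serverless data-processing responsibility above, here is a minimal AWS Lambda handler sketch; the S3 put-event trigger and the JSON-lines object format are assumptions made for the example.

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 put event; reads each new object and logs its row count.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = [json.loads(line) for line in body.splitlines() if line]
        print(f"Processed {len(rows)} rows from s3://{bucket}/{key}")
    return {"statusCode": 200}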

Azure Data Engineer (DE)
Primary Responsibilities -
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
- Create data models for analytics purposes
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies
- Use Azure Data Factory and Databricks to assemble large, complex data sets
- Implement data validation and cleansing procedures to ensure the quality, integrity, and reliability of the data
- Ensure data security and compliance
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
Required skills:
- Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies
The DEs should be well versed in coding, Spark core, and data ingestion using Azure. They also need good communication skills, along with core Azure data engineering skills and coding skills (PySpark, Python, and SQL).
Out of the 7 open positions, 5 of the Azure Data Engineers should have a minimum of 5 years of relevant data engineering experience.
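A minimal sketch of the kind of Databricks/PySpark ingestion described above; the storage account, container, table name, and columns are hypothetical, and `spark` is assumed to be provided by the Databricks runtime.

from pyspark.sql import functions as F

# Illustrative ADLS path; adjust the container/account to the real environment.
raw = spark.read.format("json").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/sales/2024/"
)

curated = (
    raw
    .filter(F.col("amount").isNotNull())          # basic validation
    .withColumn("ingest_date", F.current_date())  # audit column
)

# Delta is the default table format on Databricks.
curated.write.format("delta").mode("append").saveAsTable("curated.sales")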


· The Objective:
You will play a crucial role in designing, implementing, and maintaining our data infrastructure, running tests, and updating the systems.
· Job function and requirements
o Expert in Python, Pandas, and NumPy, with knowledge of Python web frameworks such as Django and Flask.
o Able to integrate multiple data sources and databases into one system.
o Basic understanding of frontend technologies like HTML, CSS, JavaScript.
o Able to build data pipelines.
o Strong unit test and debugging skills.
o Understanding of fundamental design principles behind a scalable application
o Good understanding of RDBMS databases such as MySQL or PostgreSQL.
o Able to analyze and transform raw data.
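As a small illustration of the Pandas-based integration and transformation skills above (the data and column names are invented for the example):

import pandas as pd

# Two hypothetical sources to be combined into one cleaned frame.
contacts = pd.DataFrame({
    "email": ["A@acme.com", "b@globex.com"],
    "company_domain": ["acme.com", "globex.com"],
})
accounts = pd.DataFrame({
    "domain": ["acme.com", "globex.com"],
    "account_name": ["Acme Corp", "Globex"],
})

merged = (
    contacts
    .merge(accounts, left_on="company_domain", right_on="domain", how="left")
    .drop(columns=["domain"])
)
merged["email"] = merged["email"].str.lower()  # normalise before loading downstream
print(merged)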
· About us
Mitibase helps companies find the most relevant warm prospects every month, and then helps their team act on those with automation. We do so by automatically tracking key accounts and contacts for job changes and relationship triggers, and surfacing them as warm leads in your sales pipeline.


We have an urgent requirement for a Data Engineer/Senior Data Engineer at a reputed MNC.
Exp: 4-9yrs
Location: Pune/Bangalore/Hyderabad
Skills: We need candidates with either Python + AWS, PySpark + AWS, or Spark + Scala.

Developer – Experience: 4 to 12 years
Must have low-level design and development skills and should be able to design a solution for given use cases.
- Agile delivery – must be able to show design and code on a daily basis
- Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark
- Must have experience in designing job orchestration, sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling
- Good experience with unit, integration and UAT support
- Able to design and code reusable components and functions
- Should be able to review design and code, and provide review comments with justification
- Zeal to learn and adopt new tools/technologies
- Good to have experience with DevOps and CI/CD
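A brief sketch of a reusable PySpark function with dynamic parameters, an audit trail, and error handling, as asked for above; the paths, column names, and logger setup are illustrative assumptions.

import logging
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def add_audit_columns(df: DataFrame, run_id: str) -> DataFrame:
    # Reusable helper: stamp every row with run metadata for the audit trail.
    return df.withColumn("run_id", F.lit(run_id)).withColumn("load_ts", F.current_timestamp())

def run_step(spark: SparkSession, source_path: str, target_path: str, run_id: str) -> None:
    # Parameters arrive dynamically (e.g. from the job orchestrator) rather than being hard-coded.
    try:
        df = spark.read.parquet(source_path)
        add_audit_columns(df, run_id).write.mode("append").parquet(target_path)
        log.info("run %s: wrote %s -> %s", run_id, source_path, target_path)
    except Exception:
        log.exception("run %s failed for source %s", run_id, source_path)
        raise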

