
Job Description
Job Title: Data Engineer
Location: Hyderabad, India
Job Type: Full Time
Experience: 5 – 8 Years
Working Model: On-Site (No remote or work-from-home options available)
Work Schedule: Mountain Time Zone (3:00 PM to 11:00 PM IST)
Role Overview
The Data Engineer will be responsible for designing and implementing scalable backend systems, leveraging Python and PySpark to build high-performance solutions. The role requires a proactive and detail-oriented individual who can solve complex data engineering challenges while collaborating with cross-functional teams to deliver quality results.
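For illustration, a minimal sketch of the kind of PySpark batch transformation this role involves (paths and column names are hypothetical):

    # Hypothetical daily rollup: aggregate raw events per user per day.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_events_rollup").getOrCreate()

    events = spark.read.parquet("/data/raw/events/")  # path is an assumption

    daily = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("user_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    daily.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/daily_events/")
    spark.stop()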
Key Responsibilities
- Develop and maintain backend systems using Python and PySpark.
- Optimise and enhance system performance for large-scale data processing.
- Collaborate with cross-functional teams to define requirements and deliver solutions.
- Debug, troubleshoot, and resolve system issues and bottlenecks.
- Follow coding best practices to ensure code quality and maintainability.
- Utilise tools like Palantir Foundry for data management workflows (good to have).
Qualifications
- Strong proficiency in Python backend development.
- Hands-on experience with PySpark for data engineering.
- Excellent problem-solving skills and attention to detail.
- Good communication skills for effective team collaboration.
- Experience with Palantir Foundry or similar platforms is a plus.
Preferred Skills
- Experience with large-scale data processing and pipeline development.
- Familiarity with agile methodologies and development tools.
- Ability to optimise and streamline backend processes effectively.

About Indigrators Solutions

Similar jobs
Job Title: Python Developer
Experience: 4 to 6 years
Location: Pune/Mumbai/Bengaluru
Job description below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming skills (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark (see the sketch after this list)
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
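As a rough sketch of the pipeline work above, here is a PySpark extract from a relational source over JDBC (connection details and table names are hypothetical; the MySQL JDBC driver jar is assumed to be on the Spark classpath):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mysql_extract").getOrCreate()

    # Read one table over JDBC; use a secrets manager for credentials in practice.
    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://db-host:3306/sales")
        .option("dbtable", "orders")
        .option("user", "etl_user")
        .option("password", "change-me")
        .load()
    )

    # Land as Parquet for analysts downstream.
    orders.write.mode("overwrite").parquet("/data/staging/orders/")
    spark.stop()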
🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiners only
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing (see the sketch below)
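To illustrate the testing expectation, a minimal pytest sketch for Spark code (the transformation and data are hypothetical):

    import pytest
    from pyspark.sql import SparkSession

    @pytest.fixture(scope="session")
    def spark():
        # Small local session for integration-style tests.
        return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

    def dedupe(df):
        # Transformation under test: drop exact duplicate rows.
        return df.dropDuplicates()

    def test_dedupe_removes_duplicates(spark):
        df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["id", "val"])
        assert dedupe(df).count() == 2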
Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5–7 years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
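A skeleton of an AWS Glue PySpark job touching several of the skills above (catalog database, table, and bucket names are hypothetical):

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog (names are assumptions).
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="sales", table_name="raw_orders"
    )

    # Filter as a Spark DataFrame and write curated Parquet to S3.
    df = dyf.toDF().filter("order_status = 'COMPLETE'")
    df.write.mode("overwrite").parquet("s3://my-bucket/curated/orders/")

    job.commit()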
Company Name: Wissen Technology
Group of companies in India: Wissen Technology & Wissen Infotech
Role: Senior Backend Developer – Java (with Python Exposure)
Work Location: Mumbai
Experience: 4 to 10 years
Kindly reply by email if you are interested.
Java Developer – Job Description
We are seeking a Senior Backend Developer with strong expertise in Java (Spring Boot) and working knowledge of Python. In this role, Java will be your primary development language, with Python used for scripting, automation, or selected service modules. You’ll be part of a collaborative backend team building scalable and high-performance systems.
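As a small example of the Python scripting side of the role, a stdlib-only health check against a Spring Boot Actuator endpoint (the URL assumes Actuator is enabled on its default path):

    import json
    import sys
    import urllib.request

    def check_health(url: str) -> bool:
        # Actuator reports {"status": "UP"} when the service is healthy.
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = json.load(resp)
            return resp.status == 200 and body.get("status") == "UP"

    if __name__ == "__main__":
        ok = check_health("http://localhost:8080/actuator/health")
        sys.exit(0 if ok else 1)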
Key Responsibilities
- Design and develop robust backend services and APIs primarily using Java (Spring Boot)
- Contribute to Python-based components where needed for automation, scripting, or lightweight services
- Build, integrate, and optimize RESTful APIs and microservices
- Work with relational and NoSQL databases
- Write unit and integration tests (JUnit, PyTest)
- Collaborate closely with DevOps, QA, and product teams
- Participate in architecture reviews and design discussions
- Help maintain code quality, organization, and automation
Required Skills & Qualifications
- 4 to 10 years of hands-on Java development experience
- Strong experience with Spring Boot, JPA/Hibernate, and REST APIs
- At least 1–2 years of hands-on experience with Python (e.g., for scripting, automation, or small services)
- Familiarity with Python frameworks like Flask or FastAPI is a plus
- Experience with SQL/NoSQL databases (e.g., PostgreSQL, MongoDB)
- Good understanding of OOP, design patterns, and software engineering best practices
- Familiarity with Docker, Git, and CI/CD pipelines
Key Responsibilities
- Develop and maintain Python-based applications.
- Design and optimize SQL queries and databases.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and efficient code.
- Troubleshoot and debug applications.
- Participate in code reviews and contribute to team knowledge sharing.
Qualifications and Required Skills
- Strong proficiency in Python programming.
- Experience with SQL and database management.
- Experience with web frameworks such as Django or Flask.
- Knowledge of front-end technologies like HTML, CSS, and JavaScript.
- Familiarity with version control systems like Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Good to Have Skills
- Experience with cloud platforms like AWS or Azure.
- Knowledge of containerization technologies like Docker.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines.
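A minimal sketch combining the Flask and SQL skills above: one endpoint backed by a parameterized query (schema and database file are hypothetical):

    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.get("/users/<int:user_id>/orders")
    def user_orders(user_id: int):
        # Parameterized query keeps user input out of the SQL string.
        conn = sqlite3.connect("app.db")
        try:
            rows = conn.execute(
                "SELECT id, total FROM orders WHERE user_id = ? ORDER BY id",
                (user_id,),
            ).fetchall()
        finally:
            conn.close()
        return jsonify([{"id": r[0], "total": r[1]} for r in rows])

    if __name__ == "__main__":
        app.run(debug=True)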
About Corridor Platforms
Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.
Role Overview
As a Backend Engineer at Corridor Platforms, you will:
- Architect, develop, and maintain backend components for our Risk Decisioning Platform.
- Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
- Integrate with ORM layers such as SQLAlchemy and multiple RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance (see the sketch after this list).
- Collaborate closely with the product, data science, and QA teams to create extensible APIs, workflow automation, and AI governance features.
- Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
- Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
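A minimal SQLAlchemy sketch of the ORM-backed, multi-RDBMS persistence described above (the model and engine URL are hypothetical):

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Decision(Base):
        __tablename__ = "decisions"
        id = Column(Integer, primary_key=True)
        applicant_id = Column(String, nullable=False)
        outcome = Column(String, nullable=False)

    # Swapping the URL (postgresql://, oracle://, mssql+pyodbc://) retargets
    # the same model at a different RDBMS.
    engine = create_engine("sqlite:///decisions.db")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Decision(applicant_id="A-1001", outcome="approved"))
        session.commit()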
Technical Skills
- Languages: Python 3.9+, SQL, JavaScript/TypeScript
- Frameworks: Angular, Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark
- Databases: PostgreSQL, Oracle, SQL Server, Redis
- Tools: pytest, Docker, Git, Nx
- Cloud: Experience with AWS, Azure, or GCP preferred
- Monitoring: Familiarity with OpenTelemetry and logging frameworks
Why Join Us?
- Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
- End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
- Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
- Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.
What We’re Looking For
- Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
- Experience developing and maintaining scalable backend services, including APIs, data orchestration, ML workflows, and workflow automation.
- Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
- Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
- Experience mentoring and guiding junior developers.
Ready to build backends that shape the future of decision intelligence and responsible AI?
Apply now and become part of the innovation at Corridor Platforms!
Azure Data Engineer
Primary Responsibilities
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
- Create data models for analytics purposes
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies
- Use Azure Data Factory and Databricks to assemble large, complex data sets
- Implement data validation and cleansing procedures to ensure the quality, integrity, and reliability of the data
- Ensure data security and compliance
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
Required skills:
- Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies
Data engineers should be well versed in coding, Spark core, and data ingestion using Azure, and should have good communication skills along with core Azure data engineering and coding skills (PySpark, Python, and SQL).
Of the 7 open positions, 5 require Azure Data Engineers with a minimum of 5 years of relevant data engineering experience.
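As a hedged sketch of the ingestion work above, PySpark in a Databricks notebook reading from ADLS and writing a Delta table (storage account, container, and table names are hypothetical; `spark` is predefined in Databricks):

    from pyspark.sql import functions as F

    raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/events/"
    df = spark.read.format("json").load(raw_path)

    # Basic validation/cleansing before landing in the bronze layer.
    cleaned = (
        df.dropDuplicates(["event_id"])
          .withColumn("ingest_date", F.current_date())
    )
    cleaned.write.mode("append").format("delta").saveAsTable("bronze.events")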
The Objective:
You will play a crucial role in designing, implementing, and maintaining our data infrastructure, running tests, and updating the systems.
Job function and requirements
- Expert in Python, Pandas, and NumPy, with knowledge of a Python web framework such as Django or Flask.
- Able to integrate multiple data sources and databases into one system.
- Basic understanding of frontend technologies like HTML, CSS, and JavaScript.
- Able to build data pipelines (see the sketch after this list).
- Strong unit testing and debugging skills.
- Understanding of the fundamental design principles behind a scalable application.
- Good understanding of relational databases such as MySQL or PostgreSQL.
- Able to analyze and transform raw data.
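A small Pandas/NumPy sketch of integrating two data sources into one cleaned frame (file names and columns are assumptions):

    import numpy as np
    import pandas as pd

    crm = pd.read_csv("crm_contacts.csv")          # e.g. id, email, company
    billing = pd.read_csv("billing_accounts.csv")  # e.g. email, mrr

    # Merge the two sources on email and clean up missing revenue figures.
    merged = crm.merge(billing, on="email", how="left")
    merged["mrr"] = merged["mrr"].fillna(0.0)
    merged["log_mrr"] = np.log1p(merged["mrr"])    # NumPy for the transform

    merged.to_parquet("contacts_enriched.parquet", index=False)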
About us
Mitibase helps companies find the most relevant warm prospects every month and then helps their teams act on those with automation. We do so by automatically tracking key accounts and contacts for job changes and relationship triggers, and surfacing them as warm leads in your sales pipeline.
We have an urgent requirement for a Data Engineer/Senior Data Engineer at a reputed MNC.
Experience: 4–9 years
Location: Pune/Bangalore/Hyderabad
Skills: Candidates need either Python + AWS, PySpark + AWS, or Spark + Scala.
Developer – 4 to 12 years of experience
Must have low-level design and development skills and should be able to design a solution for given use cases.
- Agile delivery: must be able to show design and code on a daily basis
- Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark
- Must have experience in designing job orchestration, sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling (see the sketch after this list)
- Good experience with unit, integration, and UAT support
- Able to design and code reusable components and functions
- Should be able to review design and code, and provide review comments with justification
- Zeal to learn and adopt new tools/technologies
- Good to have: experience with DevOps and CI/CD
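A sketch of the orchestration concerns above: dynamic parameter passing, an audit trail via logging, and error/exception handling (paths are hypothetical):

    import argparse
    import logging
    import sys
    from pyspark.sql import SparkSession

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("job")

    def main() -> int:
        parser = argparse.ArgumentParser()
        parser.add_argument("--run-date", required=True)  # dynamic parameter
        parser.add_argument("--source", required=True)
        args = parser.parse_args()

        spark = SparkSession.builder.appName(f"load_{args.run_date}").getOrCreate()
        try:
            df = spark.read.parquet(args.source)
            log.info("audit: run_date=%s rows_read=%d", args.run_date, df.count())
            df.write.mode("overwrite").parquet(f"/data/out/{args.run_date}/")
            return 0
        except Exception:
            log.exception("audit: job failed for run_date=%s", args.run_date)
            return 1
        finally:
            spark.stop()

    if __name__ == "__main__":
        sys.exit(main())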