
Job Description
Job Title: Data Engineer
Location: Hyderabad, India
Job Type: Full Time
Experience: 5 – 8 Years
Working Model: On-Site (No remote or work-from-home options available)
Work Schedule: Mountain Time Zone (3:00 PM to 11:00 PM IST)
Role Overview
The Data Engineer will be responsible for designing and implementing scalable backend systems, leveraging Python and PySpark to build high-performance solutions. The role requires a proactive and detail-oriented individual who can solve complex data engineering challenges while collaborating with cross-functional teams to deliver quality results.
Key Responsibilities
- Develop and maintain backend systems using Python and PySpark.
- Optimise and enhance system performance for large-scale data processing (an illustrative sketch follows this list).
- Collaborate with cross-functional teams to define requirements and deliver solutions.
- Debug, troubleshoot, and resolve system issues and bottlenecks.
- Follow coding best practices to ensure code quality and maintainability.
- Utilise tools like Palantir Foundry for data management workflows (good to have).
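For context on the kind of PySpark performance work this role involves, here is a minimal illustrative sketch (not code from this employer); the dataset paths, column names, and partition count are hypothetical placeholders:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-optimisation").getOrCreate()

# Hypothetical input paths; replace with real datasets.
events = spark.read.parquet("/data/events")   # large fact table
users = spark.read.parquet("/data/users")     # small dimension table

# Repartition on the join key to reduce shuffle skew (partition count is illustrative).
events = events.repartition(200, "user_id")

# Broadcast the small dimension table to avoid a shuffle join.
joined = events.join(F.broadcast(users), on="user_id", how="left")

# Cache only if the result is reused by several downstream actions.
joined.cache()

daily_counts = (
    joined.groupBy("event_date")
          .agg(F.countDistinct("user_id").alias("active_users"))
)
daily_counts.write.mode("overwrite").parquet("/data/daily_active_users")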
Qualifications
- Strong proficiency in Python backend development.
- Hands-on experience with PySpark for data engineering.
- Excellent problem-solving skills and attention to detail.
- Good communication skills for effective team collaboration.
- Experience with Palantir Foundry or similar platforms is a plus.
Preferred Skills
- Experience with large-scale data processing and pipeline development.
- Familiarity with agile methodologies and development tools.
- Ability to optimise and streamline backend processes effectively.

About Indigrators Solutions
About Corridor Platforms
Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.
Role Overview
As a Backend Engineer at Corridor Platforms, you will:
- Architect, develop, and maintain backend components for our Risk Decisioning Platform.
- Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
- Integrate with ORM layers – such as SQLAlchemy – and multiple RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance (see the illustrative sketch after this list).
- Collaborate closely with the Product team, Data Scientists, and QA teams to create extensible APIs, workflow automation, and AI governance features.
- Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
- Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
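As a rough illustration of the ORM and multi-RDBMS integration described above (a hedged sketch, not Corridor's actual code), the connection URLs, model, and table name below are hypothetical:

from sqlalchemy import Column, Integer, Numeric, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Decision(Base):
    """Hypothetical record of a single credit/risk decision."""
    __tablename__ = "decisions"
    id = Column(Integer, primary_key=True)
    applicant_id = Column(String(64), nullable=False)
    score = Column(Numeric(10, 4), nullable=False)
    outcome = Column(String(32), nullable=False)

# The same ORM model can target different RDBMS backends by swapping the URL.
DATABASE_URLS = {
    "postgres": "postgresql+psycopg2://user:pass@host/riskdb",
    "mysql": "mysql+pymysql://user:pass@host/riskdb",
    "oracle": "oracle+cx_oracle://user:pass@host/?service_name=riskdb",
    "mssql": "mssql+pyodbc://user:pass@host/riskdb?driver=ODBC+Driver+17+for+SQL+Server",
}

engine = create_engine(DATABASE_URLS["postgres"])
Base.metadata.create_all(engine)   # create the table if it does not exist
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(Decision(applicant_id="A-123", score=0.8671, outcome="approved"))
    session.commit()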
Technical Skills
- Languages: Python 3.9+, SQL, JavaScript/TypeScript
- Frameworks: Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark, Angular
- Databases: PostgreSQL, Oracle, SQL Server, Redis
- Tools: pytest, Docker, Git, Nx
- Cloud: Experience with AWS, Azure, or GCP preferred
- Monitoring: Familiarity with OpenTelemetry and logging frameworks
Why Join Us?
- Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
- End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
- Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
- Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.
What We’re Looking For
- Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
- Experience developing and maintaining scalable backend services, including APIs, data orchestration, ML workflows, and workflow automation.
- Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
- Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
- Experience mentoring and guiding junior developers.
Ready to build backends that shape the future of decision intelligence and responsible AI?
Apply now and become part of the innovation at Corridor Platforms!
🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate Joiner
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing.
Roles & Responsibilities
- Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
- Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
- Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems (an illustrative sketch follows this list).
- Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
- Data Modeling: Develop and maintain data models with strong governance practices.
- Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
- Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for secure, tamper-resistant data management.
- Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.
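To make the Medallion Architecture item above concrete, here is a minimal PySpark sketch of Bronze-to-Silver-to-Gold hops, assuming Delta Lake is available (for example on Databricks); the paths, schemas, and tables are illustrative only, and a production version would more likely be expressed as Delta Live Tables:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-example").getOrCreate()

# Bronze: land raw JSON as-is, adding ingestion metadata (path is hypothetical).
bronze = (
    spark.read.json("/landing/orders/")
         .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: cleanse and conform the Bronze data.
silver = (
    spark.table("bronze.orders")
         .dropDuplicates(["order_id"])
         .filter(F.col("order_amount") > 0)
         .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate for reporting.
gold = silver.groupBy("order_date").agg(F.sum("order_amount").alias("daily_revenue"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")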
Educational Qualifications
- Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
- Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.
Skills
- Must-have: Proficiency in Spark, SQL, Python, and other relevant data processing technologies. Strong knowledge of Databricks and its components, including Delta Live Table (DLT) pipeline implementations. Expertise in on-premises-to-cloud Spark code optimization and Medallion Architecture.
Good to Have
- Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).
Soft Skills
- Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.
Certifications
- AWS/GCP/Azure Data Engineer Certification.
Supercharge Your Career as a Technical Lead - Python at Technoidentity!
Are you ready to solve people challenges that fuel business growth? Technoidentity is a Data + AI product engineering company that has been building cutting-edge solutions in the FinTech domain for over 13 years, and we're expanding globally. It's the perfect time to join our team of tech innovators and leave your mark!
Join us as a Senior Python Developer and Technical Lead, where you'll guide high-performing engineering teams, design complex systems, and deliver clean, scalable backend solutions using Python and modern data technologies. Your leadership will directly shape the architecture and execution of enterprise projects, with added strength in understanding database logic, including PL/SQL and PostgreSQL/AlloyDB.
What’s in it for You?
• Modern Python Stack – Python 3.x, FastAPI, Pandas, NumPy, SQLAlchemy, PostgreSQL/AlloyDB, PL/pgSQL.
• Tech Leadership – Drive technical decision-making, mentor developers, and ensure code quality and scalability.
• Scalable Projects – Architect and optimize data-intensive backend services for high-throughput and distributed systems.
• Engineering Best Practices – Enforce clean architecture, code reviews, testing strategies, and SDLC alignment.
• Cross-Functional Collaboration – Lead conversations across engineering, QA, product, and DevOps to ensure delivery excellence.
What Will You Be Doing?
Technical Leadership
• Lead a team of developers through design, code reviews, and technical mentorship.
• Set architectural direction and ensure scalability, modularity, and code quality.
• Work with stakeholders to translate business goals into robust technical solutions.
Backend Development & Data Engineering
• Design and build clean, high-performance backend services using FastAPI and Python best practices (a brief illustrative sketch follows this list).
• Handle row- and column-level data transformation using Pandas and NumPy.
• Apply data wrangling, cleansing, and preprocessing techniques across microservices and pipelines.
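A minimal sketch of the FastAPI-plus-Pandas style of service described above; the endpoint, payload shape, and column names are hypothetical examples, not an actual Technoidentity interface:

from typing import List

import numpy as np
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Transaction(BaseModel):
    account_id: str
    amount: float

@app.post("/transactions/summary")
def summarise(transactions: List[Transaction]) -> dict:
    """Cleanse the rows and return per-account aggregates."""
    df = pd.DataFrame([t.dict() for t in transactions])

    # Row-level cleansing: drop non-positive amounts.
    df = df[df["amount"] > 0]

    # Column-level transformation: add a log-scaled amount.
    df["log_amount"] = np.log1p(df["amount"])

    # Cast to float so the response is JSON-serialisable.
    summary = df.groupby("account_id")["amount"].agg(["count", "sum", "mean"]).astype(float)
    return summary.to_dict(orient="index")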
Database & Performance Optimization
• Write performant queries, procedures, and triggers using PostgreSQL and PL/pgSQL.
• Understand legacy logic in PL/SQL and participate in rewriting or modernizing it for PostgreSQL-based systems.
• Tune both backend and database performance, including memory, indexing, and query optimization.
Parallelism & Communication
• Implement multithreading, multiprocessing, and parallel data flows in Python (see the sketch after this list).
• Integrate Kafka, RabbitMQ, or Pub/Sub systems for real-time and async message processing.
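A brief sketch of the Python parallelism mentioned above, using the standard-library multiprocessing module; the chunking scheme and the work function are purely illustrative:

from multiprocessing import Pool

def process_chunk(chunk):
    """Hypothetical CPU-bound work on one chunk of records."""
    return sum(x * x for x in chunk)

def chunked(items, size):
    """Split a list into fixed-size chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]

if __name__ == "__main__":
    records = list(range(1_000_000))
    # Fan the chunks out across worker processes and combine the partial results.
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, chunked(records, 50_000))
    print(sum(partial_results))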
Engineering Excellence
• Drive adherence to Agile, Git-based workflows, CI/CD, and DevOps pipelines.
• Promote testing (unit/integration), monitoring, and observability for all backend systems.
• Stay current with Python ecosystem evolution and introduce tools that improve productivity and performance.
What Makes You the Perfect Fit?
• 6–10 years of proven experience in Python development, with strong expertise in designing and delivering scalable backend solutions.
Job Title – Python Developer
Experience – 4 to 6 years
Location – Pune/Mumbai/Bengaluru
Please find the JD below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5–7 years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark (an illustrative sketch follows this list)
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
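As a rough example of the Glue-based ETL work listed above, here is a minimal PySpark Glue job sketch; the catalog database, table name, and S3 path are hypothetical placeholders:

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a hypothetical Glue Data Catalog table.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Convert to a Spark DataFrame for standard PySpark transformations.
df = dyf.toDF().filter(F.col("order_amount") > 0)

# Write partitioned Parquet to a hypothetical S3 location (e.g. queried via Athena).
(
    df.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/")
)

job.commit()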
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
Key Responsibilities
- Develop and maintain Python-based applications.
- Design and optimize SQL queries and databases.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and efficient code.
- Troubleshoot and debug applications.
- Participate in code reviews and contribute to team knowledge sharing.
Qualifications and Required Skills
- Strong proficiency in Python programming.
- Experience with SQL and database management.
- Experience with web frameworks such as Django or Flask.
- Knowledge of front-end technologies like HTML, CSS, and JavaScript.
- Familiarity with version control systems like Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Good to Have Skills
- Experience with cloud platforms like AWS or Azure.
- Knowledge of containerization technologies like Docker.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines
The Objective:
You will play a crucial role in designing, implementing, and maintaining our data infrastructure, running tests, and updating the systems.
Job Function and Requirements
- Expert in Python, Pandas, and NumPy, with knowledge of Python web frameworks such as Django and Flask.
- Able to integrate multiple data sources and databases into one system.
- Basic understanding of frontend technologies like HTML, CSS, and JavaScript.
- Able to build data pipelines.
- Strong unit testing and debugging skills.
- Understanding of the fundamental design principles behind a scalable application.
- Good understanding of RDBMS databases such as MySQL or PostgreSQL.
- Able to analyze and transform raw data.
About Us
Mitibase helps companies find the most relevant warm prospects every month and then helps their team act on them with automation. We do this by automatically tracking key accounts and contacts for job changes and relationship triggers, and surfacing them as warm leads in your sales pipeline.
We have an urgent requirement for a Data Engineer/Sr. Data Engineer for a reputed MNC.
Exp: 4-9yrs
Location: Pune/Bangalore/Hyderabad
Skills: We need candidates with either Python + AWS, PySpark + AWS, or Spark + Scala.
Developer – 4 to 12 years of experience
Must have low-level design and development skills and should be able to design a solution for given use cases.
- Agile delivery: must be able to show design and code on a daily basis
- Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark
- Must have experience in designing job orchestration, sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling
- Good experience with unit testing, integration testing, and UAT support
- Able to design and code reusable components and functions
- Should be able to review designs and code and provide review comments with justification
- Zeal to learn and adopt new tools and technologies
- Good to have: experience with DevOps and CI/CD
