
50+ ETL Jobs in India

Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

QAgile Services

Posted by Radhika Chotai
Remote only
2 - 4 yrs
₹3L - ₹5L / yr
PowerBI
Data modeling
ETL
Spark
SQL

Microsoft Fabric, Power BI, Data modelling, ETL, Spark SQL

Remote work: 5-7 hours

Hourly rate: ₹450

Remote only
3 - 8 yrs
₹20L - ₹30L / yr
ETL
Google Cloud Platform (GCP)
Python
Pipeline management
BigQuery

About Us:


CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary:


We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities:


  • ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.
  • Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
  • Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
  • Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 
  • API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
  • Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
  • Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
  • Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
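
For illustration only, here is a minimal sketch of the kind of ingestion-and-cleaning step described above, using pandas and the google-cloud-bigquery client. The file path, table ID, and cleaning rules are hypothetical placeholders; a production pipeline on Cloud Run or Dataflow would add schema mapping, validation, and error handling.

```python
# Illustrative sketch only: ingest a CSV, apply basic cleaning, and load it into BigQuery.
# File path and table ID are hypothetical placeholders.
import pandas as pd
from google.cloud import bigquery


def ingest_csv_to_bigquery(csv_path: str, table_id: str) -> None:
    df = pd.read_csv(csv_path)

    # Basic cleaning: normalise column names and drop fully empty rows.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(how="all")

    if df.empty:
        raise ValueError(f"No usable rows found in {csv_path}")

    # Load the cleaned frame into BigQuery and wait for the job to finish.
    client = bigquery.Client()
    client.load_table_from_dataframe(df, table_id).result()


if __name__ == "__main__":
    ingest_csv_to_bigquery("source_data.csv", "my-project.staging.source_data")
```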

Qualifications and Skills:


  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
  • Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
  • Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
  • Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:


  • Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
  • Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
  • Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
  • Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
  • Experience with data validation techniques and tools.
  • Familiarity with CI/CD practices and the ability to work in an Agile framework.
  • Strong problem-solving skills and keen attention to detail.
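
As a small illustration of the SPARQL skills listed above, the sketch below issues a simple query from Python using the SPARQLWrapper library. The public DBpedia endpoint and the query are convenient examples only and are not part of this role's stack.

```python
# Illustrative sketch: run a simple SPARQL query from Python via SPARQLWrapper.
# The endpoint and query are generic examples only.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX dbo:  <http://dbpedia.org/ontology/>

    SELECT ?city ?label WHERE {
        ?city a dbo:City ;
              rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
    LIMIT 5
""")

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["city"]["value"], "->", binding["label"]["value"])
```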


Preferred Qualifications:


  • Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
  • Familiarity with similar large-scale public dataset integration initiatives.
  • Experience with multilingual data integration.
Oddr Inc
Posted by Deepika Madgunki
Remote only
2 - 4 yrs
₹1L - ₹15L / yr
Integration
API
Microsoft Windows Azure
BOOMI
ETL

- Design and implement integration solutions using iPaaS tools.

- Collaborate with customers, product, engineering and business stakeholders to translate business requirements into robust and scalable integration processes.

- Develop and maintain SQL queries and scripts to facilitate data manipulation and integration.

- Utilize RESTful API design and consumption to ensure seamless data flow between various systems and applications.

- Lead the configuration, deployment, and ongoing management of integration projects.

- Troubleshoot and resolve technical issues related to integration solutions.

- Document integration processes and create user guides for internal and external users.

- Stay current with the latest developments in iPaaS technologies and best practices
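
Purely as an illustration of the REST-to-database pattern described above, the sketch below pulls JSON from a hypothetical REST endpoint with requests and upserts rows into a local SQLite table. A real iPaaS flow in Boomi would model the same steps as connectors and maps; the endpoint, fields, and response shape are assumptions.

```python
# Illustrative sketch: pull records from a REST API and upsert them into a SQL table.
# The endpoint URL and field names are hypothetical; the response is assumed to be
# a JSON array of objects with "id", "name", and "email" keys.
import sqlite3
import requests

API_URL = "https://api.example.com/v1/contacts"  # hypothetical endpoint


def sync_contacts(db_path: str = "integration.db") -> None:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    contacts = response.json()

    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS contacts (id TEXT PRIMARY KEY, name TEXT, email TEXT)"
    )
    # Upsert each record so reruns stay idempotent.
    rows = [(c["id"], c["name"], c["email"]) for c in contacts]
    conn.executemany(
        "INSERT INTO contacts (id, name, email) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, email = excluded.email",
        rows,
    )
    conn.commit()
    conn.close()


if __name__ == "__main__":
    sync_contacts()
```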


Qualifications:

- Bachelor’s degree in Computer Science, Information Technology, or a related field.

- Minimum of 3 years’ experience in an integration engineering role with hands-on experience in an iPaaS tool, preferably Boomi.

- Proficiency in SQL and experience with database management and data integration patterns.

- Strong understanding of integration patterns and solutions, API design, and cloud-based technologies.

- Good understanding of RESTful APIs and integration.

- Excellent problem-solving and analytical skills.

- Strong communication and interpersonal skills, with the ability to work effectively in a team environment.

- Experience with various integration protocols (REST, SOAP, FTP, etc.) and data formats (JSON, XML, etc.).


Preferred Skills:

- Boomi (or other iPaaS) certifications

- Experience with Intapp's Integration Builder is highly desirable but not mandatory.

- Certifications in Boomi or similar integration platforms.

- Experience with cloud services like MS Azure.

- Knowledge of additional programming languages (e.g., .NET, Java) is advantageous.


What we offer:

- Competitive salary and benefits package.

- Dynamic and innovative work environment.

- Opportunities for professional growth and advancement.

Lower Parel
2 - 4 yrs
₹6L - ₹7.2L / yr
React.js
NextJs (Next.js)
NodeJS (Node.js)
GraphQL
RESTful APIs

Senior Full Stack Developer – Analytics Dashboard

Job Summary

We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.

The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.

Key Responsibilities

  • Design and develop a full-stack web application using modern technologies.
  • Build scalable backend APIs to handle data ingestion, processing, and storage.
  • Develop interactive dashboards and data visualisations for business reporting.
  • Implement secure user authentication and role-based access.
  • Integrate with third-party APIs using OAuth and REST protocols.
  • Design efficient database schemas for analytical workloads.
  • Implement background jobs and scheduled tasks for data syncing.
  • Ensure performance, scalability, and reliability of the system.
  • Write clean, maintainable, and well-documented code.
  • Collaborate with product and design teams to translate requirements into features.

Required Technical Skills

Frontend

  • Strong experience with React.js
  • Experience with Next.js
  • Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
  • Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)

Backend

  • Strong experience with Node.js (Express or NestJS)
  • REST and/or GraphQL API development
  • Background job systems (cron, queues, schedulers)
  • Experience with OAuth-based integrations

Database

  • Strong experience with PostgreSQL
  • Data modelling and performance optimisation
  • Writing complex analytical SQL queries
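
Purely as a hedged illustration of the "complex analytical SQL" item above: the snippet below computes a rolling 7-day revenue average with a window function. The role's stack is PostgreSQL; Python's built-in sqlite3 module is used here only to keep the sketch self-contained, and the table, columns, and figures are invented.

```python
# Illustrative sketch: a windowed analytical query, run via sqlite3 purely for self-containment.
# Requires SQLite 3.25+ for window functions (bundled with modern Python builds).
# The same query pattern applies unchanged in PostgreSQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE daily_revenue (day TEXT, revenue REAL);
    INSERT INTO daily_revenue VALUES
        ('2024-01-01', 120.0), ('2024-01-02', 80.0), ('2024-01-03', 150.0),
        ('2024-01-04', 95.0),  ('2024-01-05', 130.0);
""")

query = """
SELECT day,
       revenue,
       AVG(revenue) OVER (
           ORDER BY day
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS rolling_7d_avg
FROM daily_revenue
ORDER BY day;
"""

for day, revenue, rolling in conn.execute(query):
    print(day, revenue, round(rolling, 2))
```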

DevOps / Infrastructure

  • Cloud platforms (AWS)
  • Docker and basic containerisation
  • CI/CD pipelines
  • Git-based workflows

Experience & Qualifications

  • 5+ years of professional full stack development experience.
  • Proven experience building production-grade web applications.
  • Prior experience with analytics, dashboards, or data platforms is highly preferred.
  • Strong problem-solving and system design skills.
  • Comfortable working in a fast-paced, product-oriented environment.

Nice to Have (Bonus Skills)

  • Experience with data pipelines or ETL systems.
  • Knowledge of Redis or caching systems.
  • Experience with SaaS products or B2B platforms.
  • Basic understanding of data science or machine learning concepts.
  • Familiarity with time-series data and reporting systems.
  • Familiarity with meta ads/Google ads API

Soft Skills

  • Strong communication skills.
  • Ability to work independently and take ownership.
  • Attention to detail and focus on code quality.
  • Comfortable working with ambiguous requirements.

Ideal Candidate Profile (Summary)

A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.

Performio

Agency job
via maple green services by Elvin Johnson
Remote only
4 - 6 yrs
₹15L - ₹20L / yr
ETL
SQL

The Opportunity:


As a Technical Support Consultant, you will play a significant role in providing world-class support to Performio's customers. With our tried and tested onboarding process, you will soon become familiar with the Performio product and company.

You will draw on previous support experience to monitor for new support requests in Zendesk and provide initial triage with 1st and 2nd level support, ensuring the customer is kept up to date and the request is completed in a timely manner.

You will collaborate with other teams to ensure more complex requests are managed efficiently, and will provide feedback to help improve product and solution knowledge as well as processes.

Answers to questions asked by customers that are not in the knowledge base will be reviewed and added to the knowledge base if appropriate. We're looking for someone who thinks ahead, recognising opportunities to help customers help themselves.

You will help out with configuration changes and testing, furthering your knowledge and experience of Performio. You may also be expected to help out with Managed Service, Implementation, and Work Order related tasks from time to time.


About Performio:


Performio is the last ICM software you'll ever need. It allows you to manage incentive compensation complexity and change over the long run by combining a structured plan builder and flexible data management, with a partner who will make you a customer for life.

Our people are highly motivated and engaged professionals with a clear set of values and behaviors. We prove these values matter to us by living them each day. This makes Performio both a great place to work and a great company to do business with.

But a great team alone is not sufficient to win. We have solved the fundamental issue widespread in our industry: overly rigid applications that cannot adapt to your needs, or overly flexible ones that become impossible to maintain over time. Only Performio allows you to manage incentive compensation complexity and change over the long run by combining a structured plan builder and flexible data management. The component-based plan builder makes it easier to understand, change, and self-manage than traditional formula or rules-based solutions. Our ability to import data from any source, in any format, and perform in-app data transformations eliminates the pain of external processing and provides end-to-end data visibility. The combination of these two functions allows us to deliver more powerful reporting and insights. And while every vendor says they are a partner, we truly are one. We not only get your implementation right the first time, we enable you and give you the autonomy and control to make changes year after year. And unlike most, we support every part of your unique configuration. Performio is a partner that will make you a customer for life.

We have a global customer base across Australia, Asia, Europe, and the US in 25+ industries, including many well-known companies like Toll Brothers, Abbott Labs, News Corp, Johnson & Johnson, Nikon, and Uber Freight.


What will you be doing:


● Monitoring and triaging new Support requests submitted by customers using our Zendesk Support Portal
● Providing 1st and 2nd line support for Support requests
● Investigating, reproducing, and resolving customer issues within the required Service Level Agreements
● Maintaining our evolving knowledge base
● Clear and concise documentation of root causes and resolutions
● Assisting with the implementation and testing of Change Requests and implementation projects
● As your knowledge of the product grows, making recommendations for solutions based on clients' requests
● Assisting in educating our clients' compensation administrators on applying best practices


What we’re looking for:


● Passion for customer service, with a communication style that can be adapted to suit the audience
● A problem solver with a range of troubleshooting methodologies
● Experience in the Sales Compensation industry
● Familiarity with basic database concepts and spreadsheets, and experience working with large datasets (Excel, relational database tables, SQL, ETL, or other types of tools/languages)
● 4+ years of experience in a similar role (experience with ICM software preferred)
● Experience with implementation & support of ICM solutions like SAP Commissions, Varicent, or Xactly will be a big plus
● Positive Attitude - optimistic, cares deeply about the company and customers
● High Emotional IQ - shows empathy, listens when appropriate, creates a healthy conversation dynamic
● Resourceful - has an "I'll figure it out" attitude if something they need doesn't exist


Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Upto ₹30L / yr (Varies)
Google Cloud Platform (GCP)
SQL
ETL
Datawarehousing
Data-flow analysis

We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.


Key Responsibilities

  • Collaborate with business users and stakeholders to understand business processes and data requirements
  • Design and implement dimensional data models, including fact and dimension tables
  • Identify, design, and implement data transformation and cleansing logic
  • Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
  • Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
  • Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
  • Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
  • Provide high-level design, research, and effort estimates for data integration initiatives
  • Provide production support for ETL processes to ensure data availability and SLA adherence
  • Analyze and resolve data pipeline and performance issues
  • Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
  • Translate business requirements into well-defined technical data specifications
  • Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
  • Define and document BI usage through use cases, prototypes, testing, and deployment
  • Support and enhance data governance and data quality processes
  • Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
  • Train and support business users, IT analysts, and developers
  • Lead and collaborate with teams spread across multiple locations
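
For illustration of the dimensional loading described above, here is a minimal sketch of a dimension-table upsert using a BigQuery MERGE statement issued from the Python client. The project, dataset, table, and column names are hypothetical placeholders.

```python
# Illustrative sketch: upsert a dimension table in BigQuery with a MERGE statement.
# Project, dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

MERGE_SQL = """
MERGE `my-project.dw.dim_customer` AS target
USING `my-project.staging.customer_updates` AS source
ON target.customer_id = source.customer_id
WHEN MATCHED THEN
  UPDATE SET target.name = source.name,
             target.segment = source.segment,
             target.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, name, segment, updated_at)
  VALUES (source.customer_id, source.name, source.segment, CURRENT_TIMESTAMP())
"""


def load_dim_customer() -> None:
    client = bigquery.Client()
    client.query(MERGE_SQL).result()  # Runs the MERGE and waits for completion.


if __name__ == "__main__":
    load_dim_customer()
```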

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science or a related field, or equivalent work experience
  • 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
  • Strong expertise in data warehousing concepts, tools, and best practices
  • Excellent SQL skills
  • Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
  • Hands-on experience with Google Cloud Platform (GCP) services, including:
  1. BigQuery
  2. Cloud SQL
  3. Cloud Composer (Airflow)
  4. Dataflow
  5. Dataproc
  6. Cloud Functions
  7. Google Cloud Storage (GCS)
  • Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
  • Strong experience integrating data using APIs, XML, JSON, and similar formats
  • In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
  • Solid understanding of SDLC, Agile, and Scrum methodologies
  • Strong problem-solving, multitasking, and organizational skills
  • Experience handling large-scale datasets and database design
  • Strong verbal and written communication skills
  • Experience leading teams across multiple locations

Good to Have

  • Experience with SSRS and SSIS
  • Exposure to AWS and/or Azure cloud platforms
  • Experience working with enterprise BI and analytics tools

Why Join Us

  • Opportunity to work on large-scale, enterprise data platforms
  • Exposure to modern cloud-native data engineering technologies
  • Collaborative environment with strong stakeholder interaction
  • Career growth and leadership opportunities
Intineri infosol Pvt Ltd

Posted by Adil Saifi
Remote only
5 - 8 yrs
₹5L - ₹12L / yr
ETL
EDI
HIPAA
PHI
Healthcare

Key Responsibilities:

Design and develop ETL processes for claims, enrollment, provider, and member data

Handle EDI transactions (837, 835, 834) and health plan system integrations

Build data feeds for regulatory reporting (HEDIS, Stars, Risk Adjustment)

Troubleshoot data quality issues and implement data validation frameworks
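
As a rough illustration of handling the X12-style EDI content mentioned above, the sketch below splits a raw transaction string into segments and elements using the conventional '~' and '*' delimiters. Real 837/835/834 processing would use a dedicated EDI parser and honor the delimiters declared in the ISA envelope; the sample content is invented.

```python
# Illustrative sketch: split a raw X12-style EDI string into segments and elements.
# Sample content is invented; real parsers honor delimiters declared in the ISA segment.
RAW_EDI = (
    "ISA*00*          *00*          *ZZ*SENDER*ZZ*RECEIVER~"
    "ST*837*0001~NM1*85*2*ACME CLINIC~SE*3*0001~"
)


def parse_segments(raw: str, segment_sep: str = "~", element_sep: str = "*") -> list[list[str]]:
    segments = [s for s in raw.split(segment_sep) if s.strip()]
    return [segment.split(element_sep) for segment in segments]


if __name__ == "__main__":
    for elements in parse_segments(RAW_EDI):
        print(elements[0], elements[1:])
```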

Required Experience & Skills:

5+ years of ETL development experience

Minimum 3 years in Healthcare / Health Plan / Payer environment

Strong expertise in SQL and ETL tools (Informatica, SSIS, Talend)

Deep understanding of health plan operations (claims, eligibility, provider networks)

Experience with healthcare data standards (X12 EDI, HL7)

Strong knowledge of HIPAA compliance and PHI handling

Auxo AI
Posted by kusuma Gullamajji
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
4 - 10 yrs
₹10L - ₹45L / yr
Amazon Web Services (AWS)
Amazon Redshift
ETL

AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 4 - 10 years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.

Experience: 4 - 10 years

Notice: Immediate to 15 days

Responsibilities :

  • Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
  • Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
  • Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
  • Implement data governance and security best practices to ensure compliance and data integrity.
  • Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
  • Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
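
As a hedged example of the serverless pattern implied by the S3/Glue/Lambda stack above, the sketch below shows a Lambda handler that reacts to an S3 object-created event and starts a (hypothetical) Glue job. Bucket, job, and argument names are placeholders.

```python
# Illustrative sketch: an AWS Lambda handler that reacts to S3 object-created events
# and kicks off a Glue job for downstream processing. Names are hypothetical.
import boto3

glue = boto3.client("glue")


def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object landed: s3://{bucket}/{key}")

        # Start a Glue job, passing the new object as an argument.
        glue.start_job_run(
            JobName="curate-raw-events",  # hypothetical job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
    return {"status": "ok"}
```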

Qualifications :

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 4-10 years of prior experience in data engineering, with a focus on designing and building data pipelines.
  • Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
  • Strong programming skills in languages such as Python, Java, or Scala.
  • Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
  • Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
Auxo AI
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
4 - 10 yrs
₹10L - ₹45L / yr
PowerBI
SQL server
ETL
DAX

AuxoAI is seeking a skilled and experienced Power BI Specialist with 4-10 years of hands-on experience in developing and managing Business Intelligence (BI) solutions. As a Power BI Specialist, you will work closely with stakeholders to design, develop, and deploy interactive dashboards and reports, providing valuable insights and supporting data-driven decision-making processes. You should have a strong technical background in Power BI development, data modeling, and a deep understanding of business analytics.


Responsibilities:


Design, develop, and implement Power BI reports, dashboards, and data visualizations based on business requirements.

Create and maintain data models to ensure the accuracy and efficiency of reporting structures.

Extract, transform, and load (ETL) data from various sources (databases, flat files, web services, etc.) into Power BI.

Collaborate with business stakeholders to understand reporting requirements and deliver solutions that meet their needs.

Optimize Power BI reports and dashboards for better performance and usability.

Implement security measures such as row-level security (RLS) for reports and dashboards.

Develop and deploy advanced DAX (Data Analysis Expressions) formulas for complex data modeling and analysis.

Familiarity with SQL for data extraction and manipulation.

Strong analytical and problem-solving skills with a keen attention to detail.

Experience with Power BI Service and Power BI Workspaces for publishing and sharing reports.

Qualifications:


Bachelor’s degree in Computer Science, Information Technology, Data Analytics, or a related field.

5+ years of experience in Power BI report development and dashboard creation.

Strong proficiency in Power BI Desktop and Power BI Service.

Experience in data modeling, including relationships, hierarchies, and DAX formulas.

Expertise in Power Query for data transformation and ETL processes.

Knowledge of business processes and ability to translate business needs into technical solutions.

Excellent communication and collaboration skills to work effectively with business teams and IT professionals.

Ability to manage multiple projects and meet deadlines in a fast-paced environment.

MIC Global

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Best in industry
Python
SQL
ETL
DBA
Windows Azure

About Us

MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.

We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.


About the Team 

We're seeking a mid-level Data Engineer with strong DBA experience to join our insurtech data analytics team. This role focuses on supporting various teams including infrastructure, reporting, and analytics. You'll be responsible for SQL performance optimization, building data pipelines, implementing data quality checks, and helping teams with database-related challenges. You'll work closely with the infrastructure team on production support, assist the reporting team with complex queries, and support the analytics team in building visualizations and dashboards.


Key Roles and Responsibilities 

Database Administration & Optimization

  • Support infrastructure team with production database issues and troubleshooting
  • Debug and resolve SQL performance issues, identify bottlenecks, and optimize queries
  • Optimize stored procedures, functions, and views for better performance
  • Perform query tuning, index optimization, and execution plan analysis
  • Design and develop complex stored procedures, functions, and views
  • Support the reporting team with complex SQL queries and database design

Data Engineering & Pipelines

  • Design and build ETL/ELT pipelines using Azure Data Factory and Python
  • Implement data quality checks and validation rules before data enters pipelines
  • Develop data integration solutions to connect various data sources and systems
  • Create automated data validation, quality monitoring, and alerting mechanisms
  • Develop Python scripts for data processing, transformation, and automation
  • Build and maintain data models to support reporting and analytics requirements
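
To illustrate the kind of pre-load data quality checks mentioned above, here is a minimal, generic sketch in pandas. The rules, thresholds, and column names are invented for the example and would differ for real insurance data.

```python
# Illustrative sketch: simple pre-load data quality checks on a pandas DataFrame.
# Column names and rules are invented for the example.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list[str]:
    issues = []

    # Completeness: required columns must exist and contain no nulls.
    for col in ("policy_id", "premium", "effective_date"):
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif df[col].isna().any():
            issues.append(f"null values found in: {col}")

    # Uniqueness: policy_id should not repeat.
    if "policy_id" in df.columns and df["policy_id"].duplicated().any():
        issues.append("duplicate policy_id values found")

    # Range check: premiums should be positive.
    if "premium" in df.columns and (df["premium"] <= 0).any():
        issues.append("non-positive premium values found")

    return issues


if __name__ == "__main__":
    sample = pd.DataFrame({
        "policy_id": ["P1", "P2", "P2"],
        "premium": [1200.0, -5.0, 300.0],
        "effective_date": ["2024-01-01", None, "2024-02-01"],
    })
    for issue in run_quality_checks(sample):
        print("QC issue:", issue)
```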

Support & Collaboration

  • Help data analytics team build visualizations and dashboards by providing data models and queries
  • Support reporting team with data extraction, transformation, and complex reporting queries
  • Collaborate with development teams to support application database requirements
  • Provide technical guidance and best practices for database design and query optimization

Azure & Cloud

  • Work with Azure services including Azure SQL Database, Azure Data Factory, Azure Storage, Azure Functions, and Azure ML
  • Implement cloud-based data solutions following Azure best practices
  • Support cloud database migrations and optimizations
  • Work with Agentic AI concepts and tools to build intelligent data solutions

Ideal Candidate Profile

Essential

  • 5-8 years of experience in data engineering and database administration
  • Strong expertise in MS SQL Server (2016+) administration and development
  • Proficient in writing complex SQL queries, stored procedures, functions, and views
  • Hands-on experience with Microsoft Azure services (Azure SQL Database, Azure Data Factory, Azure Storage)
  • Strong Python scripting skills for data processing and automation
  • Experience with ETL/ELT design and implementation
  • Knowledge of database performance tuning, query optimization, and indexing strategies
  • Experience with SQL performance debugging tools (XEvents, Profiler, or similar)
  • Understanding of data modeling and dimensional design concepts
  • Knowledge of Agile methodology and experience working in Agile teams
  • Strong problem-solving and analytical skills
  • Understanding of Agentic AI concepts and tools
  • Excellent communication skills and ability to work with cross-functional teams

Desirable

  • Knowledge of insurance or financial services domain
  • Experience with Azure ML and machine learning pipelines
  • Experience with Azure DevOps and CI/CD pipelines
  • Familiarity with data visualization tools (Power BI, Tableau)
  • Experience with NoSQL databases (Cosmos DB, MongoDB)
  • Knowledge of Spark, Databricks, or other big data technologies
  • Azure certifications (Azure Data Engineer Associate, Azure Database Administrator Associate)
  • Experience with version control systems (Git, Azure Repos)

Tech Stack

  • MS SQL Server 2016+, Azure SQL Database, Azure Data Factory, Azure ML, Azure Storage, Azure Functions, Python, T-SQL, Stored Procedures, ETL/ELT, SQL Performance Tools (XEvents, Profiler), Agentic AI Tools, Azure DevOps, Power BI, Agile, Git

Benefits

  • 33 days of paid holiday
  • Competitive compensation well above market average
  • Work in a high-growth, high-impact environment with passionate, talented peers
  • Clear path for personal growth and leadership development
Euphoric Thought Technologies
Noida
2 - 4 yrs
₹8L - ₹15L / yr
SQL
ETL
Data modeling
Business Intelligence (BI)

Position Overview:

As a BI (Business Intelligence) Developer, you will be responsible for designing, developing, and maintaining the business intelligence solutions that support data analysis and reporting. You will collaborate with business stakeholders, analysts, and data engineers to understand requirements and translate them into efficient and effective BI solutions. Your role will involve working with various data sources, designing data models, assisting with ETL (Extract, Transform, Load) processes, and developing interactive dashboards and reports.

Key Responsibilities:

1. Requirement Gathering: Collaborate with business stakeholders to understand their data analysis and reporting needs. Translate these requirements into technical specifications and develop appropriate BI solutions.

2. Data Modelling: Design and develop data models that effectively represent the underlying business processes and facilitate data analysis and reporting. Ensure data integrity, accuracy, and consistency within the data models.

3. Dashboard and Report Development: Design, develop, and deploy interactive dashboards and reports using Sigma Computing.

4. Data Integration: Integrate data from various systems and sources to provide a comprehensive view of business performance. Ensure data consistency and accuracy across different data sets.

5. Performance Optimization: Identify performance bottlenecks in BI solutions and optimize query performance, data processing, and report rendering. Continuously monitor and fine-tune the performance of BI applications.

6. Data Governance: Ensure compliance with data governance policies and standards. Implement appropriate security measures to protect sensitive data.

7. Documentation and Training: Document the technical specifications, data models, ETL processes, and BI solution configurations.

8. Ensure that the proposed solutions meet business needs and requirements.

9. Create and own Business/Functional Requirement Documents.

10. Monitor and track project milestones and deliverables.

11. Submit project deliverables, ensuring adherence to quality standards.

Qualifications and Skills:

1. Master's or Bachelor's degree in IT or a relevant field, with a minimum of 2-4 years of experience in Business Analysis or a related field.

2. Proven experience as a BI Developer or in a similar role.

3. Fundamental analytical and conceptual thinking skills, with demonstrated skills in managing projects implementing platform solutions.

4. Excellent planning, organizational, and time management skills.

5. Strong understanding of data warehousing concepts, dimensional modelling, and ETL processes.

6. Proficiency in SQL and Snowflake for data extraction, manipulation, and analysis.

7. Experience with one or more BI tools such as Sigma Computing.

8. Knowledge of data visualization best practices and the ability to create compelling data visualizations.

9. Solid problem-solving and analytical skills with a detail-oriented mindset.

10. Strong communication and interpersonal skills to collaborate effectively with different stakeholders.

11. Ability to work independently and manage multiple priorities in a fast-paced environment.

12. Knowledge of data governance principles and security best practices.

13. Experience managing implementation projects of platform solutions for U.S. clients is preferable.

14. Exposure to the U.S. debt collection industry is a plus.

leading digital testing boutique firm

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
SQL
Software Testing (QA)
Data modeling
ETL
Data extraction

Review Criteria

  • Strong Data / ETL Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, loading validation, reconciliation, working across BI / Analytics / Data Warehouse / e-Governance platforms
  • Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
  • Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
  • Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
  • Hands-on experience with ETL tools and SQL-based data validation is mandatory (Working knowledge or hands-on exposure to Redshift and/or Qlik will be considered sufficient)
  • Must hold a Bachelor's degree (B.E./B.Tech) or a Master's degree (M.Tech/MCA/M.Sc/MS)
  • Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
  • Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)


Job Specific Criteria

  • CV Attachment is mandatory
  • Do you have experience working on Government projects/companies? If yes, briefly describe the project.
  • Do you have experience working on enterprise projects/companies? If yes, briefly describe the project.
  • Please mention the names of 2 key projects you have worked on related to Data Warehouse / ETL / BI testing?
  • Do you hold any ISTQB or Data / BI certifications (Power BI, Snowflake, AWS, etc.)?
  • Do you have exposure to BI tools such as Qlik?
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5 years' experience in Data Testing across BI/Analytics platforms, with at least 2 large-scale enterprise Data Warehouse / Analytics / eGovernance programs
  • Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes
  • Proficient in analyzing business requirements and data mapping specifications (BRDs, Data Models, Source-to-Target Mappings, User Stories, Reports, Dashboards) to define comprehensive test scenarios and test cases
  • Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies ensuring accuracy, consistency, and performance of data pipelines
  • Ability to test and validate data migrated from an old platform to an upgraded platform, ensuring the completeness and correctness of the migration
  • Experience conducting tests of migrated data and defining test scenarios and test cases for the same
  • Experience with BI tools like Qlik, and with ETL platforms, Data Lake platforms, and Redshift, to support end-to-end validation
  • Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations
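
As a small, hedged sketch of the source-to-target reconciliation described above, the snippet below compares row counts and a simple column checksum between a legacy table and its migrated copy. sqlite3 is used only to keep the example self-contained; table and column names are placeholders.

```python
# Illustrative sketch: reconcile a migrated table against its source by comparing
# row counts and a simple column checksum. Table and column names are placeholders.
import sqlite3


def reconcile(conn: sqlite3.Connection, source: str, target: str, amount_col: str) -> dict:
    src_count, src_sum = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {source}"
    ).fetchone()
    tgt_count, tgt_sum = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {target}"
    ).fetchone()

    return {
        "row_count_match": src_count == tgt_count,
        "checksum_match": abs(src_sum - tgt_sum) < 1e-6,
        "source": (src_count, src_sum),
        "target": (tgt_count, tgt_sum),
    }


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE legacy_sales (id INTEGER, amount REAL);
        CREATE TABLE migrated_sales (id INTEGER, amount REAL);
        INSERT INTO legacy_sales VALUES (1, 100.0), (2, 250.5);
        INSERT INTO migrated_sales VALUES (1, 100.0), (2, 250.5);
    """)
    print(reconcile(conn, "legacy_sales", "migrated_sales", "amount"))
```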





Sonatype

Posted by Reshika Mendiratta
Hyderabad
8yrs+
Upto ₹35L / yr (Varies)
Java
ETL
Spring Boot
SQL
Spark

Who We Are

At Sonatype, we help organizations build better, more secure software by enabling them to understand and control their software supply chains. Our products are trusted by thousands of engineering teams globally, providing critical insights into dependency health, license risk, and software security. We’re passionate about empowering developers—and we back it with data.


The Opportunity

We’re looking for a Data Engineer with full stack expertise to join our growing Data Platform team. This role blends data engineering, microservices, and full-stack development to deliver end-to-end services that power analytics, machine learning, and advanced search across Sonatype.

You will design and build data-driven microservices and workflows using Java, Python, and Spring Batch, implement frontends for data workflows, and deploy everything through CI/CD pipelines into AWS ECS/Fargate. You’ll also ensure services are monitorable, debuggable, and reliable at scale, while clearly documenting designs with Mermaid-based sequence and dataflow diagrams.

This is a hands-on engineering role for someone who thrives at the intersection of data systems, fullstack development, ML, and cloud-native platforms.


What You’ll Do

  • Design, build, and maintain data pipelines, ETL/ELT workflows, and scalable microservices.
  • Develop complex web scraping (Playwright) and real-time pipelines (Kafka / queues / Flink).
  • Develop end-to-end microservices with backend (Java 5+, Python 5+, Spring Batch 2+) and frontend (React or any).
  • Deploy, publish, and operate services in AWS ECS/Fargate using CI/CD pipelines (Jenkins, GitOps).
  • Architect and optimize data storage models in SQL (MySQL, PostgreSQL) and NoSQL stores.
  • Implement web scraping and external data ingestion pipelines.
  • Enable Databricks and PySpark-based workflows for large-scale analytics.
  • Build advanced data search capabilities (fuzzy matching, vector similarity search, semantic retrieval).
  • Apply ML techniques (scikit-learn, classification algorithms, predictive modeling) to data-driven solutions.
  • Implement observability, debugging, monitoring, and alerting for deployed services.
  • Create Mermaid sequence diagrams, flowcharts, and dataflow diagrams to document system architecture and workflows.
  • Drive best practices in fullstack data service development, including architecture, testing, and documentation.
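
As a lightweight illustration of the fuzzy-matching capability listed above, the sketch below ranks candidate names by string similarity using only the Python standard library. Real vector or semantic search would use dedicated indexes; the package names here are invented examples.

```python
# Illustrative sketch: rank candidate names by fuzzy similarity to a query string
# using difflib from the standard library. The package names are invented examples.
from difflib import SequenceMatcher

CANDIDATES = ["spring-boot-starter-web", "spring-batch-core", "springfox-swagger2", "log4j-core"]


def fuzzy_rank(query: str, candidates: list[str], top_n: int = 3) -> list[tuple[str, float]]:
    scored = [
        (name, SequenceMatcher(None, query.lower(), name.lower()).ratio())
        for name in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]


if __name__ == "__main__":
    for name, score in fuzzy_rank("spring boot web", CANDIDATES):
        print(f"{name}: {score:.2f}")
```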


What We’re Looking For

  • 5+ years of experience as a Data Engineer or in a backend software engineering role.
  • Strong programming skills in Python, Scala, or Java.
  • Hands-on experience with HBase or similar NoSQL columnar stores.
  • Hands-on experience with distributed data systems like Spark, Kafka, or Flink.
  • Proficient in writing complex SQL and optimizing queries for performance.
  • Experience building and maintaining robust ETL/ELT pipelines in production.
  • Familiarity with workflow orchestration tools (Airflow, Dagster, or similar).
  • Understanding of data modeling techniques (star schema, dimensional modeling, etc.).
  • Familiarity with CI/CD pipelines (Jenkins or similar).
  • Ability to visualize and communicate architectures using Mermaid diagrams.


Bonus Points

  • Experience working with Databricks, dbt, Terraform, or Kubernetes
  • Familiarity with streaming data pipelines or real-time processing
  • Exposure to data governance frameworks and tools
  • Experience supporting data products or ML pipelines in production
  • Strong understanding of data privacy, security, and compliance best practices


Why You’ll Love Working Here

  • Data with purpose: Work on problems that directly impact how the world builds secure software
  • Modern tooling: Leverage the best of open-source and cloud-native technologies
  • Collaborative culture: Join a passionate team that values learning, autonomy, and impact
Opteamix
Posted by Praveen KumarBK
Bengaluru (Bangalore)
6 - 9 yrs
₹18L - ₹20L / yr
SQL Server Integration Services (SSIS)
ETL

Position: Lead Software Engineer 


Job Overview:

The Lead ETL Engineer with SSIS (SQL Server Integration Services) knowledge is primarily responsible for designing, developing, and maintaining ETL (Extract, Transform, Load) workflows and data integration processes to support enterprise data solutions.


Key Responsibilities and Duties  

Lead the software development team: 

  • Lead a team of software engineers in designing, developing, and maintaining ETL Framework. 
  • Set and communicate clear expectations with clients regarding project timelines, quality standards, and deliverables.
  • Experience with Microsoft SQL Server databases.
  • Hands-on experience in writing, optimizing and maintaining complex SQL queries, stored procedures and functions. 
  • Design and develop robust, scalable, and efficient database structures based on the organization's requirements. 

 

SSIS: 

  • Design, develop, test, deploy, and maintain SSIS packages for data migration, ETL processes, reporting and to integrate data between various systems. 
  • Develop complex SQL queries to extract data from various sources such as Oracle or SQL databases. 
  • Troubleshoot issues related to package execution failures and optimize performance of existing packages. 
  • Collaborate with stakeholders to understand business requirements and design solutions using SQL Server Integration Services (SSIS). 
  • Work closely with the team to ensure timely delivery of projects. 
  • Optimize existing integrations for improved performance and scalability. 
  • Design and implement complex SSIS packages for extracting, transforming, and loading data from various sources such as relational databases (SQL Server, Oracle, DB2), flat files, and XML files. 
  • Develop ETL best practices, including error handling, logging, checkpoints, and data quality assurance. 
  • Write efficient SQL queries, stored procedures, functions, and triggers to support ETL processes. 
  • Optimize SSIS packages and SQL queries for maximum performance using techniques such as parallelization, lookup caching, and bulk loading. 
  • Create and maintain design documentation such as data models, mapping documents, data flow diagrams, and architecture diagrams. 
  • Collaborate with business analysts, BI teams, and other stakeholders to understand requirements and deliver data solutions for reporting and analytics. 
  • Schedule and monitor SSIS workflows using SQL Server Agent or other scheduling tools, ensuring timely and error-free data loads. 
  • Troubleshoot, debug, and test SSIS packages and backend database processes to ensure data accuracy and system reliability. 
  • Participate in resource planning, task estimation, and code review processes. 
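
For illustration only: the sketch below shows the incremental-load MERGE pattern that an SSIS data flow or Execute SQL task commonly implements, issued here through pyodbc so the T-SQL is visible in one self-contained place. The connection string, tables, and columns are hypothetical and not taken from this posting.

```python
# Illustrative sketch: the T-SQL MERGE pattern commonly used for incremental loads,
# executed via pyodbc. Connection string, tables, and columns are hypothetical.
import pyodbc

MERGE_SQL = """
MERGE dbo.DimProduct AS target
USING staging.ProductUpdates AS source
    ON target.ProductKey = source.ProductKey
WHEN MATCHED THEN
    UPDATE SET target.ProductName = source.ProductName,
               target.ListPrice   = source.ListPrice
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductKey, ProductName, ListPrice)
    VALUES (source.ProductKey, source.ProductName, source.ListPrice);
"""


def run_incremental_load(conn_str: str) -> None:
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(MERGE_SQL)
        conn.commit()


if __name__ == "__main__":
    # Hypothetical connection string; replace with your own server and credentials.
    run_incremental_load(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=DW;Trusted_Connection=yes;"
    )
```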

Collaborate with cross-functional teams: 

  • Work with other teams such as product management, quality assurance, and user experience to ensure that software applications meet business requirements and user needs. 
  • Communicate effectively with team members and stakeholders. 

Stay up to date with emerging trends and technologies: 

  • Keep up to date with emerging trends and technologies in the software development industry. 
  • Make recommendations for improvements to existing software applications and processes. 


Mandatory skills 

SSIS, understanding of ETL frameworks, and the ability to design and implement SSIS packages

 

 

Qualifications 

  • Bachelor’s degree in computer science or related field. 
  • 6-9 years of experience in software development. 
  • Proven leadership experience in leading ETL teams. 
  • Knowledge of software development methodologies such as Agile or Scrum. 
  • Ability to mentor and train junior team members. 

 

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 9 yrs
₹15L - ₹25L / yr
Data engineering
Apache Kafka
Python
Amazon Web Services (AWS)
AWS Lambda

Job Details

- Job Title: Lead I - Data Engineering 

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 6-9 years

- Employment Type: Full Time

- Job Location: Pune

- CTC Range: Best in Industry


Job Description

Job Title: Senior Data Engineer (Kafka & AWS)

Responsibilities:

  • Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
  • Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
  • Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
  • Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
  • Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
  • Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
  • Uphold data security, governance, and compliance standards across all data operations.
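
A minimal, hedged sketch of the Kafka-to-S3 flow described above, using kafka-python and boto3. The topic, bucket, and batch size are placeholder choices; a production MSK/Glue pipeline would add schema handling, partitioning, retries, and offset management.

```python
# Illustrative sketch: consume JSON events from a Kafka topic and land them in S3 in batches.
# Topic, bucket, and batch size are placeholders.
import json
import uuid

import boto3
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 100:
        key = f"raw/orders/batch-{uuid.uuid4()}.json"
        s3.put_object(Bucket="my-data-lake", Key=key, Body=json.dumps(batch))
        batch = []
```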

 

Requirements:

  • Minimum of 5 years of experience in Data Engineering or related roles.
  • Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
  • Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
  • Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
  • Excellent problem-solving, communication, and collaboration skills.
  • Flexibility to write production-quality code in both Python and Java as required.

 

Skills: AWS, Kafka, Python


Must-Haves

Minimum of 5 years of experience in Data Engineering or related roles.

Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).

Proficient in coding with Python, SQL, and Java — with Java strongly preferred.

Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.

Excellent problem-solving, communication, and collaboration skills.

Flexibility to write production-quality code in both Python and Java as required.

Skills: AWS, Kafka, Python

Notice period: 0 to 15 days only

Ekloud INC
Posted by ashwini rathod
Remote only
6 - 20 yrs
₹20L - ₹30L / yr
CPQ
Billing
Sales
DevOps
APL

Hiring : Salesforce CPQ Developer 


Experience: 6+ years

Shift timings: 7:30 PM to 3:30 AM

Location: India (Remote)


Key skills: CPQ (SteelBrick), Billing, Sales Cloud, LWC, Apex integrations, DevOps, APL, ETL tools


Design scalable and efficient Salesforce Sales Cloud solutions that meet best practices and business requirements.


Lead the technical design and implementation of Sales Cloud features, including CPQ, Partner Portal, Lead, Opportunity and Quote Management.


Provide technical leadership and mentorship to development teams. Review and approve technical designs, code, and configurations.

Work with business stakeholders to gather requirements, provide guidance, and ensure that solutions meet their needs. Translate business requirements into technical specifications.


Oversee and guide the development of custom Salesforce applications, including custom objects, workflows, triggers, and LWC/ Apex code.

Ensure data quality, integrity, and security within the Salesforce platform. Implement data migration strategies and manage data integrations.


Establish and enforce Salesforce development standards, best practices, and governance processes. Monitor and optimize the performance of Salesforce solutions, including addressing performance issues and ensuring efficient use of resources.


Stay up-to-date with Salesforce updates and new features. Propose and implement innovative solutions to enhance Salesforce capabilities and improve business processes.

Document design, code consistently throughout the design/development process

Diagnose, resolve, and document system issues to support project team.


Research questions with respect to both maintenance and development activities. 

Perform post-migration system review and ongoing support.

Prepare and deliver demonstrations/presentations to client audiences, professional seniors/peers


Adhere to best practices constantly around code/data source control, ticket tracking, etc. during the course of an assignment

 

Skills/Experience:


Bachelor’s degree in Computer Science, Information Systems, or related field.

6+ years of experience in architecting and designing full stack solutions on the Salesforce Platform.

Must have 3+ years of Experience in architecting, designing and developing Salesforce CPQ (SteelBrick CPQ) and Billing solutions.

Minimum 3+ years of Lightning Framework development experience (Aura & LWC).

CPQ Specialist and Salesforce Platform Developer II certification is required.


Extensive development experience with Apex Classes, Triggers, Visualforce, Lightning, Batch Apex, Salesforce DX, Apex Enterprise Patterns, Apex Mocks, Force.com API, Visual Flows, Platform Events, SOQL, Salesforce APIs, and other programmatic solutions on the Salesforce platform.


Experience in debugging APEX CPU Error, SOQL queries Exceptions, Refactoring code and working with complex implementations involving features like asynchronous processing


Clear insight of Salesforce platform best practices, coding and design guidelines and governor limits.

Experience with Development Tools and Technologies: Visual Studio Code, GIT, and DevOps Setup to automate deployment/releases.

Knowledge of integration architecture as well as third-party integration tools and ETL (Such as Informatica, Workato, Boomi, Mulesoft etc.) with Salesforce


Experience in Agile development, iterative development, and proof of concepts (POCs).

Excellent written and verbal communication skills with ability to lead technical projects and manage multiple priorities in a fast-paced environment.

Techno Wise
Posted by Ishita Panwar
Pune
6 - 10 yrs
₹30L - ₹35L / yr
Microsoft Windows Azure
SQL
Informatica MDM
Amazon Web Services (AWS)
Informatica PowerCenter

Profile: Senior Data Engineer (Informatica MDM)


Primary Purpose:

The Senior Data Engineer will be responsible for building and maintaining segments in a Customer Data Platform (CDP), and for understanding the data requirements, data integrity, data quality, and data sources involved in building specific use cases. The candidate should also have an understanding of ETL processes, of integrations with cloud service providers such as Microsoft Azure, Azure Data Lake Services, and Azure Data Factory, and of cloud data warehouse platforms in addition to enterprise data warehouse environments. The ideal candidate will also have proven experience in data analysis and management, with excellent analytical and problem-solving abilities.


Major Functions/Responsibilities

• Design, develop and implement robust and extensible solutions to build segmentations using Customer Data Platform.

• Work closely with subject matter experts to identify and document based on the business requirements, functional specs and translate them into appropriate technical solutions.

• Responsible for estimating, planning, and managing the user stories, tasks and reports on Agile Projects.

• Develop advanced SQL Procedures, Functions and SQL jobs.

• Performance tuning and optimization of ETL Jobs, SQL Queries and Scripts.

• Configure and maintain scheduled ETL jobs, data segments and refresh.

• Support exploratory data analysis, statistical analysis, and predictive analytics.

• Support production issues and maintain existing data systems by researching and troubleshooting any issues/problems in a timely manner.

• Proactive, great attention to detail, results-oriented problem solver.


Preferred Experience

• 6+ years of experience in writing SQL queries and stored procedures to extract, manipulate and load data.

• 6+ years of experience designing, building, testing, and maintaining data integrations for data marts and data warehouses.

• 3+ years of experience in integrations Azure / AWS Data Lakes, Azure Data Factory & IDMC (Informatica Cloud Services).

• In-depth understanding of database management systems, online analytical processing (OLAP), and ETL (Extract, Transform, Load) frameworks.

• Excellent verbal and written communication skills

• Collaboration with both onshore and offshore development teams.

• A good understanding of marketing tools such as Salesforce Marketing Cloud, Adobe Marketing, or Microsoft Customer Insights Journey, and of Customer Data Platforms, will be important to this role.

Communication

• Facilitate project team meetings effectively.

• Effectively communicate relevant project information to superiors

• Deliver engaging, informative, well-organized presentations that are effectively tailored to the intended audience.

• Serve as a technical liaison with development partner.

• Serve as a communication bridge between applications team, developers and infrastructure team members to facilitate understanding of current systems

• Resolve and/or escalate issues in a timely fashion.

• Understand how to communicate difficult/sensitive information tactfully.

• Works under the direction of the Technical Data Lead / Data Architect.

Education

Bachelor's degree or higher in Engineering, Technology, or a related field is required.

Oneture Technologies

Posted by Eman Khan
Mumbai
2 - 4 yrs
₹6L - ₹12L / yr
PySpark
ETL
ELT
Python
Flink

About Oneture Technologies 

Oneture Technologies is a cloud-first digital engineering company helping enterprises and high-growth startups build modern, scalable, and data-driven solutions. Our teams work on cutting-edge big data, cloud, analytics, and platform engineering engagements where ownership, innovation, and continuous learning are core values. 


Role Overview 

We are looking for an experienced Data Engineer with 2-4 years of hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate must have strong expertise in PySpark and exposure to real-time or streaming frameworks such as Apache Flink. You will work closely with architects, data scientists, and product teams to design and deliver robust, high-performance data solutions. 


Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines using PySpark
  • Implement real-time or near real-time data processing using Apache Flink
  • Optimize data workflows for performance, scalability, and reliability
  • Work with large-scale data platforms and distributed environments
  • Collaborate with cross-functional teams to integrate data solutions into products and analytics platforms
  • Ensure data quality, integrity, and governance across pipelines
  • Conduct performance tuning, debugging, and root-cause analysis of data processes
  • Write clean, modular, and well-documented code following best engineering practices


Primary Skills

  • Strong hands-on experience in PySpark (RDD, DataFrame API, Spark SQL)
  • Experience with Apache Flink, Spark or Kafka (streaming or batch)
  • Solid understanding of distributed computing concepts
  • Proficiency in Python for data engineering workflows
  • Strong SQL skills for data manipulation and transformation
  • Experience with data pipeline orchestration tools (Airflow, Step Functions, etc.)


Secondary Skills

  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Knowledge of data lakes, lakehouse architectures, and modern data stack tools
  • Familiarity with Delta Lake, Iceberg, or Hudi
  • Experience with CI/CD pipelines for data workflows
  • Understanding of messaging and streaming systems (Kafka, Kinesis)
  • Knowledge of DevOps and containerization tools (Docker)


Soft Skills

  • Strong analytical and problem-solving capabilities
  • Ability to work independently and as part of a collaborative team
  • Good communication and documentation skills
  • Ownership mindset with a willingness to learn and adapt


Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field


Why Join Oneture Technologies?

  • Opportunity to work on high-impact, cloud-native data engineering projects
  • Collaborative team environment with a strong learning culture
  • Exposure to modern data platforms, scalable architectures, and real-time data systems
  • Growth-oriented role with hands-on ownership across end-to-end data engineering initiatives
Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
databricks
Python
SQL
PySpark
Amazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage retrieval and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns, frameworks, and automation practices.


 

Additional Comments:

# of Resources: 22 | Role(s): Technical Role | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

Design, build, and maintain scalable data pipelines using Databricks and PySpark.

Develop and optimize complex SQL queries for data extraction, transformation, and analysis.

Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).

Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.

Ensure data quality, performance, and reliability across data workflows.

Participate in code reviews, data architecture discussions, and performance optimization initiatives.

Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
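
To make the SQL and Databricks expectations above concrete, here is a small illustrative query combining a CTE with a window function. It assumes the SparkSession named spark that Databricks notebooks provide, and the table and column names are hypothetical:

  # Latest balance per account via a CTE and ROW_NUMBER() window function.
  latest_balance_per_account = spark.sql("""
      WITH ranked AS (
          SELECT
              account_id,
              balance,
              as_of_date,
              ROW_NUMBER() OVER (
                  PARTITION BY account_id
                  ORDER BY as_of_date DESC
              ) AS rn
          FROM finance.account_balances
      )
      SELECT account_id, balance, as_of_date
      FROM ranked
      WHERE rn = 1
  """)
  latest_balance_per_account.show(10, truncate=False)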


Key Skills:

Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).

Excellent problem-solving, communication, and collaboration skills.

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)

Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - An 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights (a short illustrative sketch follows this list).
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.
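
The short illustrative EDA sketch referenced above, in pandas; the dataset is hypothetical and stands in for whatever extract the analysis starts from:

  import pandas as pd

  # Hypothetical ticket extract; in practice this would come from SQL or a CSV export.
  df = pd.DataFrame({
      "ticket_id": [1, 2, 3, 4, 5],
      "channel": ["chat", "email", "chat", "phone", "chat"],
      "created_at": pd.to_datetime(
          ["2025-01-02", "2025-01-03", "2025-01-10", "2025-01-11", "2025-01-20"]
      ),
      "resolution_hours": [2.5, 30.0, 4.0, None, 1.0],
  })

  print(df.shape)                      # how much data is there?
  print(df.isna().mean().round(3))     # share of missing values per column
  print(df["channel"].value_counts())  # categorical distribution
  print(df.set_index("created_at").resample("W")["ticket_id"].count())  # weekly trend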

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 

******

Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT or Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - An 8-hour window between 7:30 PM IST and 4:30 AM IST

Read more
Financial Services Company

Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Delhi
3 - 6 yrs
₹10L - ₹25L / yr
Project Management
SQL
JIRA
SQL Query Analyzer
confluence
+23 more

Required Skills: Excellent Communication Skills, Project Management, SQL queries, Expertise with Tools such as Jira, Confluence etc.


Criteria:

  • Candidate must have Project management experience.
  • Candidate must have strong experience in accounting principles, financial workflows, and R2R (Record to Report) processes.
  • Candidate should have an academic background in Commerce or MBA Finance.
  • Candidates must be from a Fintech / Financial Services background only.
  • Good experience with SQL and must have MIS experience.
  • Must have experience in Treasury Module.
  • 3+ years of implementation experience is required.
  • Candidate should have Hands-on experience with tools such as Jira, Confluence, Excel, and project management platforms.
  • Need candidate from Bangalore and Delhi/NCR ONLY.
  • Need Immediate joiner or candidate with up to 30 Days’ Notice period.

 

Description

Position Overview

We are looking for an experienced Implementation Lead with deep expertise in financial workflows, R2R processes, and treasury operations to drive client onboarding and end-to-end implementations. The ideal candidate will bring a strong Commerce / MBA Finance background, proven project management experience, and technical skills in SQL and ETL to ensure seamless deployments for fintech and financial services clients.
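
For illustration of the SQL side of such R2R work, a minimal reconciliation sketch using Python's built-in sqlite3 with hypothetical general-ledger and sub-ledger tables; a real implementation would run equivalent SQL against the client's warehouse:

  import sqlite3

  # In-memory stand-in for a reporting database; tables and values are hypothetical.
  con = sqlite3.connect(":memory:")
  con.executescript("""
      CREATE TABLE gl_balances (account TEXT, period TEXT, amount REAL);
      CREATE TABLE subledger   (account TEXT, period TEXT, amount REAL);
      INSERT INTO gl_balances VALUES ('1000', '2025-01', 5000.0), ('2000', '2025-01', 750.0);
      INSERT INTO subledger   VALUES ('1000', '2025-01', 5000.0), ('2000', '2025-01', 700.0);
  """)

  # Accounts where the general ledger and sub-ledger disagree for a period.
  breaks = con.execute("""
      SELECT g.account, g.period, g.amount AS gl_amount, s.amount AS sub_amount,
             g.amount - s.amount AS difference
      FROM gl_balances g
      JOIN subledger s ON s.account = g.account AND s.period = g.period
      WHERE g.amount <> s.amount
  """).fetchall()
  print(breaks)  # [('2000', '2025-01', 750.0, 700.0, 50.0)]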


Key Responsibilities

  • Lead end-to-end implementation projects for enterprise fintech clients
  • Translate client requirements into detailed implementation plans and configure solutions accordingly.
  • Write and optimize complex SQL queries for data analysis, validation, and integration
  • Oversee ETL processes – extract, transform, and load financial data across systems
  • Collaborate with cross-functional teams including Product, Engineering, and Support
  • Ensure timely, high-quality delivery across multiple stakeholders and client touchpoints
  • Document processes, client requirements, and integration flows in detail.
  • Configure and deploy company solutions for R2R, treasury, and reporting workflows.


Required Qualifications

  • Bachelor’s degree with a Commerce background / MBA in Finance (mandatory).
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong expertise in accounting principles, R2R (Record-to-Report), treasury, and financial workflows.
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries)
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success


Preferred Qualifications

  • Prior experience implementing financial automation tools (e.g., SAP, Oracle, Anaplan, Blackline)
  • Familiarity with API integrations and basic data mapping
  • Experience in agile/scrum-based implementation environments
  • Exposure to reconciliation, book closure, AR/AP, and reporting systems
  • PMP, CSM, or similar certifications



Skills & Competencies

Functional Skills

  • Financial process knowledge (e.g., reconciliation, accounting, reporting)
  • Business analysis and solutioning
  • Client onboarding and training
  • UAT coordination
  • Documentation and SOP creation

 

Project Skills

  • Project planning and risk management
  • Task prioritization and resource coordination
  • KPI tracking and stakeholder reporting

 

Soft Skills

  • Cross-functional collaboration
  • Communication with technical and non-technical teams
  • Attention to detail and customer empathy
  • Conflict resolution and crisis management


What We Offer

  • An opportunity to shape fintech implementations across fast-growing companies
  • Work in a dynamic environment with cross-functional experts
  • Competitive compensation and rapid career growth
  • A collaborative and meritocratic culture
Read more
Navi Mumbai
4 - 8 yrs
₹8L - ₹10L / yr
Oracle SQL Developer
MySQL
ETL
Database Design
SQL
+1 more

Company Name : Enlink Managed Services

Company Website : https://enlinkit.com/

Location : Turbhe , Navi Mumbai

Shift Time : 12 pm to 9:30 pm

Working Days : 5 Days Working(Sat-Sun Fixed Off)

SQL Developer 

Roles & Responsibilities :

Designing Database, writing stored procedures, complex and dynamic queries in SQL

Creating Indexes, Views, complex Triggers, effective Functions, and appropriate stored procedures to facilitate efficient data manipulation and data consistency

Implementing database architecture, ETL and development activities

Troubleshooting data load, ETL and application support related issues

Demonstrates ability to communicate effectively in both technical and business environments

Troubleshooting failed batch jobs, correcting outstanding issues and resubmitting scheduled jobs to ensure completion

Troubleshoot, optimize, and tune SQL processes and complex SQL queries
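
As a rough illustration of that tuning workflow, a sketch using Python's built-in sqlite3 purely as a stand-in; on the MySQL servers this role works with, the equivalent steps would use EXPLAIN and CREATE INDEX against the real tables:

  import sqlite3

  # Compare the query plan before and after adding an index on the filter column.
  con = sqlite3.connect(":memory:")
  con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
  con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                  [(i % 100, i * 1.5) for i in range(10_000)])

  query = "SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?"

  print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())  # full table scan

  con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

  print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())  # now uses the index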

Required Qualifications/Experience

4+ years of experience in the design and optimization of MySQL databases

General database development using MySQL

Advanced level of writing stored procedures, reading query plans, tuning indexes and troubleshooting performance bottlenecks

Troubleshoot, optimize, and tune SQL processes and complex SQL queries

Experienced and versed in creating sophisticated MySQL Server databases to quickly handle complex queries

Problem solving, analytical and fluent communication

Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
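
For illustration of the kind of security automation this role covers, a small sketch using boto3 to flag S3 buckets that report no default server-side encryption. It assumes boto3 is installed and AWS credentials and permissions are configured, and it is only one narrow check, not a complete posture assessment:

  import boto3
  from botocore.exceptions import ClientError

  # Flag S3 buckets with no default server-side encryption configuration.
  s3 = boto3.client("s3")

  for bucket in s3.list_buckets()["Buckets"]:
      name = bucket["Name"]
      try:
          s3.get_bucket_encryption(Bucket=name)
      except ClientError as err:
          if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
              print(f"[WARN] no default encryption configured: {name}")
          else:
              print(f"[ERROR] could not check {name}: {err}")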


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks, and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
Inteliment Technologies

at Inteliment Technologies

2 candid answers
Ariba Khan
Posted by Ariba Khan
Pune
3 - 5 yrs
Up to ₹16L / yr (Varies)
SQL
Python
ETL
Amazon Web Services (AWS)
Azure
+1 more

About the company:

Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates its ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with its headquarters in India.


About the Role:

As a Data Engineer, you will contribute to cutting-edge global projects and innovative product initiatives, delivering impactful solutions for our Fortune clients. In this role, you will take ownership of the entire data pipeline and infrastructure development lifecycle—from ideation and design to implementation and ongoing optimization. Your efforts will ensure the delivery of high-performance, scalable, and reliable data solutions. Join us to become a driving force in shaping the future of data infrastructure and innovation, paving the way for transformative advancements in the data ecosystem.


Qualifications:

  • Bachelor’s or master’s degree in computer science, Information Technology, or a related field.
  • Certifications with related field will be an added advantage.


Key Competencies:

  • Must have experience with SQL, Python and Hadoop
  • Good to have experience with Cloud Computing Platforms (AWS, Azure, GCP, etc.), DevOps Practices, Agile Development Methodologies
  • ETL or other similar technologies will be an advantage.
  • Core Skills: Proficiency in SQL, Python, or Scala for data processing and manipulation
  • Data Platforms: Experience with cloud platforms such as AWS, Azure, or Google Cloud.
  • Tools: Familiarity with tools like Apache Spark, Kafka, and modern data warehouses (e.g., Snowflake, BigQuery, Redshift).
  • Soft Skills: Strong problem-solving abilities, collaboration, and communication skills to work effectively with technical and non-technical teams.
  • Additional: Knowledge of SAP would be an advantage 


Key Responsibilities:

  • Data Pipeline Development: Build, maintain, and optimize ETL/ELT pipelines for seamless data flow.
  • Data Integration: Consolidate data from various sources into unified systems.
  • Database Management: Design and optimize scalable data storage solutions.
  • Data Quality Assurance: Ensure data accuracy, consistency, and completeness.
  • Collaboration: Work with analysts, scientists, and stakeholders to meet data needs.
  • Performance Optimization: Enhance pipeline efficiency and database performance.
  • Data Security: Implement and maintain robust data security and governance policies
  • Innovation: Adopt new tools and design scalable solutions for future growth.
  • Monitoring: Continuously monitor and maintain data systems for reliability.
  • Data Engineers ensure reliable, high-quality data infrastructure for analytics and decision-making.
Read more
PRODUCT DEVELOPMENT COMPANY

PRODUCT DEVELOPMENT COMPANY

Agency job
Remote only
7 - 16 yrs
₹15L - ₹20L / yr
Data Analytics
Data Warehouse (DWH)
Business Intelligence (BI)
Data governance
BI/DW
+6 more

EMPLOYMENT TYPE: Full-Time, Permanent


LOCATION: Remote


SHIFT TIMINGS: 11.00 AM - 8:00 PM IST


Role : Lead Data Analyst


Qualifications:


● Bachelor’s or Master’s degree in Computer Science, Data Analytics, Information Systems, or a related field.


● 7–10 years of experience in data operations, data management, or analytics.


● Strong understanding of data governance, ETL processes, and quality control methodologies.


● Hands-on experience with SQL, Excel/Google Sheets, and data visualization tools


● Experience with automation tools like Python scripting is a plus.


● Must be capable of working independently and delivering stable, efficient and reliable software.


● Excellent written and verbal communication skills in English.


● Experience supporting and working with cross-functional teams in a dynamic environment



Preferred Skills:


● Experience in SaaS, B2B data, or lead intelligence industry.


● Exposure to data privacy regulations (GDPR, CCPA) and compliance practices.


● Ability to work effectively in cross-functional, global, and remote environments.

Read more
Tecblic Private LImited
Ahmedabad
5 - 6 yrs
₹5L - ₹15L / yr
Windows Azure
Python
SQL
Data Warehouse (DWH)
Data modeling
+5 more

Job Description: Data Engineer

Location: Ahmedabad

Experience: 5 to 6 years

Employment Type: Full-Time



We are looking for a highly motivated and experienced Data Engineer to join our  team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.



Responsibilities


● Design and optimize data pipelines for various data sources


● Design and implement efficient data storage and retrieval mechanisms


● Develop data modelling solutions and data validation mechanisms


● Troubleshoot data-related issues and recommend process improvements


● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions


● Coach and mentor junior data engineers in the team




Skills Required: 


● Minimum 4 years of experience in data engineering or related field


● Proficient in designing and optimizing data pipelines and data modeling


● Strong programming expertise in Python


● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive


● Extensive experience with cloud data services such as AWS, Azure, and GCP


● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing


● Knowledge of distributed computing and storage systems


● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage


● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities




Qualifications


  • Bachelor's degree in Computer Science, Data Science, or a Computer related field


Read more
Financial Services Company

Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
4 - 8 yrs
₹10L - ₹13L / yr
SQL
databricks
PowerBI
Windows Azure
Data engineering
+9 more

Review Criteria

  • Strong Senior Data Engineer profile
  • 4+ years of hands-on Data Engineering experience
  • Must have experience owning end-to-end data architecture and complex pipelines
  • Must have advanced SQL capability (complex queries, large datasets, optimization)
  • Must have strong Databricks hands-on experience
  • Must be able to architect solutions, troubleshoot complex data issues, and work independently
  • Must have Power BI integration experience
  • The CTC structure is 80% fixed and 20% variable.


Preferred

  • Experience working with call center data and an understanding of the nuances of data generated in call centers
  • Experience implementing data governance, quality checks, or lineage frameworks
  • Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture


Job Specific Criteria

  • CV Attachment is mandatory
  • Are you Comfortable integrating with Power BI datasets?
  • We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?


Role & Responsibilities

We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.

 

Key Responsibilities-

  • Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
  • Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
  • Architect and deliver high-performance ETL/ELT processes across cloud platforms.
  • Implement and enforce data governance standards, including data quality, lineage, and access control.
  • Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
  • Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
  • Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
  • Mentor junior engineers and contribute to engineering best practices, standards, and documentation.


Ideal Candidate

  • Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
  • Advanced SQL skills with experience handling large, complex datasets.
  • Strong expertise with Databricks for data engineering workloads.
  • Hands-on experience with major cloud platforms — AWS and Azure.
  • Deep understanding of data architecture, data modelling, and optimisation techniques.
  • Familiarity with BI and reporting environments such as Power BI.
  • Strong analytical and problem-solving abilities with a focus on data quality and governance
  • Proficiency in Python or another programming language is a plus.
Read more
Non-Banking Financial Company

Non-Banking Financial Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
1 - 2 yrs
₹5L - ₹6.1L / yr
SQL
databricks
PowerBI
Data engineering
ETL
+6 more

ROLES AND RESPONSIBILITIES:

We are looking for a Junior Data Engineer who will work under guidance to support data engineering tasks, perform basic coding, and actively learn modern data platforms and tools. The ideal candidate should have foundational SQL knowledge, basic exposure to Databricks. This role is designed for early-career professionals who are eager to grow into full data engineering responsibilities while contributing to data pipeline operations and analytical support.


Key Responsibilities-

  • Support the development and maintenance of data pipelines and ETL/ELT workflows under mentorship.
  • Write basic SQL queries, transformations, and assist with Databricks notebook tasks.
  • Help troubleshoot data issues and contribute to ensuring pipeline reliability.
  • Work with senior engineers and analysts to understand data requirements and deliver small tasks.
  • Assist in maintaining documentation, data dictionaries, and process notes.
  • Learn and apply data engineering best practices, coding standards, and cloud fundamentals.
  • Support basic tasks related to Power BI data preparation or integrations as needed.


IDEAL CANDIDATE:

  • Foundational SQL skills with the ability to write and understand basic queries.
  • Basic exposure to Databricks, data transformation concepts, or similar data tools.
  • Understanding of ETL/ELT concepts, data structures, and analytical workflows.
  • Eagerness to learn modern data engineering tools, technologies, and best practices.
  • Strong problem-solving attitude and willingness to work under guidance.
  • Good communication and collaboration skills to work with senior engineers and analysts.


PERKS, BENEFITS AND WORK CULTURE:

Our people define our passion and our audacious, incredibly rewarding achievements. Bajaj Finance Limited is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India.

Read more
Financial Services Company

Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
2 - 5 yrs
₹8L - ₹10.7L / yr
SQL Azure
databricks
ETL
SQL
Data modeling
+4 more

ROLES AND RESPONSIBILITIES:

We are seeking a skilled Data Engineer who can work independently on data pipeline development, troubleshooting, and optimisation tasks. The ideal candidate will have strong SQL skills, hands-on experience with Databricks, and familiarity with cloud platforms such as AWS and Azure. You will be responsible for building and maintaining reliable data workflows, supporting analytical teams, and ensuring high-quality, secure, and accessible data across the organisation.


KEY RESPONSIBILITIES:

  • Design, develop, and maintain scalable data pipelines and ETL/ELT workflows.
  • Build, optimise, and troubleshoot SQL queries, transformations, and Databricks data processes.
  • Work with large datasets to deliver efficient, reliable, and high-performing data solutions.
  • Collaborate closely with analysts, data scientists, and business teams to support data requirements.
  • Ensure data quality, availability, and security across systems and workflows.
  • Monitor pipeline performance, diagnose issues, and implement improvements.
  • Contribute to documentation, standards, and best practices for data engineering processes.


IDEAL CANDIDATE:

  • Proven experience as a Data Engineer or in a similar data-focused role (3+ years).
  • Strong SQL skills with experience writing and optimising complex queries.
  • Hands-on experience with Databricks for data engineering tasks.
  • Experience with cloud platforms such as AWS and Azure.
  • Understanding of ETL/ELT concepts, data modelling, and pipeline orchestration.
  • Familiarity with Power BI and data integration with BI tools.
  • Strong analytical and troubleshooting skills, with the ability to work independently.
  • Experience working end-to-end on data engineering workflows and solutions.


PERKS, BENEFITS AND WORK CULTURE:

Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Pune
2 - 5 yrs
₹8L - ₹11L / yr
Data modeling
ETL

Strong Data engineer profile

Mandatory (Experience 1): Must have 2+ years of hands-on Data Engineering experience.

Mandatory (Experience 2): Must have end-to-end experience in building & maintaining ETL/ELT pipelines (not just BI/reporting).

Mandatory (Technical 1): Must have strong SQL capability (complex queries + optimization).

Mandatory (Technical 2): Must have hands-on Databricks experience.

Mandatory (Role Requirement): Must be able to work independently, troubleshoot data issues, and manage large datasets.

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
3 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹12L - ₹20L / yr
Python
Django
MySQL
PostgreSQL
Microservices architecture
+26 more

About Us:

MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
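
As a rough, hypothetical illustration of such a microservice (not MyOperator's actual API), a minimal FastAPI endpoint with the AI call stubbed out; it assumes FastAPI and uvicorn are installed:

  from fastapi import FastAPI
  from pydantic import BaseModel

  app = FastAPI(title="call-summary-service")  # hypothetical microservice

  class SummaryRequest(BaseModel):
      call_id: str
      transcript: str

  @app.post("/summaries")
  def create_summary(req: SummaryRequest) -> dict:
      # Placeholder for an LLM/LangChain call; here we just echo the first line.
      text = req.transcript.strip()
      first_line = text.splitlines()[0] if text else ""
      return {"call_id": req.call_id, "summary": first_line[:140]}

  # Run locally with:  uvicorn app:app --reload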


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT & Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Read more
Tech AI startup in Bangalore

Tech AI startup in Bangalore

Agency job
via Recruit Square by Priyanka choudhary
Remote only
4 - 8 yrs
₹12L - ₹18L / yr
pandas
NumPy
MLOps
SQL
ETL
+1 more

Data Engineer – Validation & Quality


Responsibilities

  • Build rule-based and statistical validation frameworks using Pandas / NumPy (see the sketch after this list).
  • Implement contradiction detection, reconciliation, and anomaly flagging.
  • Design and compute confidence metrics for each evidence record.
  • Automate schema compliance, sampling, and checksum verification across data sources.
  • Collaborate with the Kernel to embed validation results into every output artifact.
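
The sketch referenced in the first responsibility above, using pandas/NumPy with hypothetical records and a deliberately naive confidence score:

  import numpy as np
  import pandas as pd

  # Hypothetical evidence records to validate.
  records = pd.DataFrame({
      "record_id": [1, 2, 3, 4],
      "amount": [120.0, -5.0, 310.0, np.nan],
      "currency": ["USD", "USD", "EUR", "USD"],
  })

  # Simple rule-based checks; real frameworks would add statistical and cross-source rules.
  checks = {
      "amount_present": records["amount"].notna(),
      "amount_non_negative": records["amount"].fillna(0) >= 0,
      "currency_known": records["currency"].isin(["USD", "EUR", "INR"]),
  }

  results = pd.DataFrame(checks)
  results["confidence"] = results.mean(axis=1)  # share of rules each record passes
  print(pd.concat([records["record_id"], results], axis=1))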

Requirements

  • 5+ years in data engineering, data quality, or MLOps validation.
  • Strong SQL optimization and ETL background.
  • Familiarity with data lineage, DQ frameworks, and regulatory standards (SOC 2 / GDPR).
Read more
Agentic AI Platform

Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps /Cloud Engineer Profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
Python
SQL
Google Cloud Platform (GCP)
SQL server
ETL
+9 more

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Ability to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.

 

 Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Read more
Loyalty Juggernaut Inc

at Loyalty Juggernaut Inc

2 recruiters
Shraddha Dhavle
Posted by Shraddha Dhavle
Hyderabad
3 - 5 yrs
₹5L - ₹15L / yr
ETL
ETL architecture
Python
Data engineering

At Loyalty Juggernaut, we’re on a mission to revolutionize customer loyalty through AI-driven SaaS solutions. We are THE JUGGERNAUTS, driving innovation and impact in the loyalty ecosystem with GRAVTY®, our SaaS Product that empowers multinational enterprises to build deeper customer connections. Designed for scalability and personalization, GRAVTY® delivers cutting-edge loyalty solutions that transform customer engagement across diverse industries including Airlines, Airport, Retail, Hospitality, Banking, F&B, Telecom, Insurance and Ecosystem.


Our Impact:

  • 400+ million members connected through our platform.
  • Trusted by 100+ global brands/partners, driving loyalty and brand devotion worldwide.


Proud to be a Three-Time Champion for Best Technology Innovation in Loyalty!!


Explore more about us at www.lji.io.


What you will OWN:

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using SQL and AWS ‘big data’ technologies.
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements, automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with stakeholders, including the Technical Architects, Developers, Product Owners, and Executives, to assist with data-related technical issues and support their data infrastructure needs.
  • Create tools for data management and data analytics that can assist them in building and optimizing our product to become an innovative industry leader.


You would make a GREAT FIT if you have:

  • Have 2 to 5 years of relevant backend development experience, with solid expertise in Python.
  • Possess strong skills in Data Structures and Algorithms, and can write optimized, maintainable code.
  • Are familiar with database systems, and can comfortably work with PostgreSQL, as well as NoSQL solutions like MongoDB or DynamoDB.
  • Hands-on experience using cloud data warehouses like AWS Redshift, GBQ, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift, and AWS Batch would be an added advantage.
  • Have a solid understanding of ETL processes and tools and can build or modify ETL pipelines effectively.
  • Have experience managing or building data pipelines and architectures at scale.
  • Understand the nuances of data ingestion, transformation, storage, and analytics workflows.
  • Communicate clearly and work collaboratively across engineering and product.


Why Choose US?

  • This opportunity offers a dynamic and supportive work environment where you'll have the chance to not just collaborate with talented technocrats but also work with globally recognized brands, gain exposure, and carve your own career path.
  • You will get to innovate and dabble in the future of technology -Enterprise Cloud Computing, Blockchain, Machine Learning, AI, Mobile, Digital Wallets, and much more.


Read more
JK Technosoft Ltd
Akanksh Gupta
Posted by Akanksh Gupta
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹30L - ₹42L / yr
Generative AI
GenAI
Python
Flask
FastAPI
+3 more

We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
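
For orientation only, a framework-agnostic sketch of the tool-calling loop that agent frameworks such as LangChain/LangGraph orchestrate; the model is stubbed with a plain function, and nothing below is those libraries' actual API:

  # The "model" either requests a tool call or returns a final answer.
  def fake_model(question: str, observations: list[str]) -> dict:
      if not observations:
          return {"action": "call_tool", "tool": "lookup_rate", "input": "USD->INR"}
      return {"action": "final", "answer": f"Based on {observations[-1]}, done."}

  TOOLS = {"lookup_rate": lambda query: f"rate({query}) = 83.2 (stubbed)"}

  def run_agent(question: str, max_steps: int = 5) -> str:
      observations: list[str] = []
      for _ in range(max_steps):
          decision = fake_model(question, observations)
          if decision["action"] == "final":
              return decision["answer"]
          tool = TOOLS[decision["tool"]]          # dispatch the requested tool
          observations.append(tool(decision["input"]))
      return "stopped: step limit reached"

  print(run_agent("What is the USD to INR rate?"))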


Key Responsibilities :


- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.


- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.


- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.


- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.


- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.


- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.


- Implement inter-service communication using gRPC and REST.


- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.


Required Skills & Qualifications :


- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.


- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.


- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).


- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.


- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.


- Proven experience with system architecture, distributed systems, and microservices.


- Strong familiarity with Any Cloud infrastructure and deployment practices.


- Data Engineering or Analytics expertise is preferred, e.g., Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, and data warehouse development.


Read more
Remote only
6 - 10 yrs
₹8L - ₹15L / yr
Informatica IICS/IDMC
Informatica PowerCenter
ETL
SQL
Data migration
+1 more

Job Title : Informatica Cloud Developer / Migration Specialist

Experience : 6 to 10 Years

Location : Remote

Notice Period : Immediate


Job Summary :

We are looking for an experienced Informatica Cloud Developer with strong expertise in Informatica IDMC/IICS and experience in migrating from PowerCenter to Cloud.

The candidate will be responsible for designing, developing, and maintaining ETL workflows, data warehouses, and performing data integration across multiple systems.


Mandatory Skills :

Informatica IICS/IDMC, Informatica PowerCenter, ETL Development, SQL, Data Migration (PowerCenter to IICS), and Performance Tuning.


Key Responsibilities :

  • Design, develop, and maintain ETL processes using Informatica IICS/IDMC.
  • Work on migration projects from Informatica PowerCenter to IICS Cloud.
  • Troubleshoot and resolve issues related to mappings, mapping tasks, and taskflows.
  • Analyze business requirements and translate them into technical specifications.
  • Conduct unit testing, performance tuning, and ensure data quality.
  • Collaborate with cross-functional teams for data integration and reporting needs.
  • Prepare and maintain technical documentation.

Required Skills :

  • 4 to 5 years of hands-on experience in Informatica Cloud (IICS/IDMC).
  • Strong experience with Informatica PowerCenter.
  • Proficiency in SQL and data warehouse concepts.
  • Good understanding of ETL performance tuning and debugging.
  • Excellent communication and problem-solving skills.
Read more
Estuate Software

at Estuate Software

1 candid answer
Deekshith K Naidu
Posted by Deekshith K Naidu
Hyderabad
5 - 12 yrs
₹5L - ₹35L / yr
Google Cloud Platform (GCP)
Apache Airflow
ETL
Python
BigQuery
+1 more

Job Title: Data Engineer / Integration Engineer

 

Job Summary:

We are seeking a highly skilled Data Engineer / Integration Engineer to join our team. The ideal candidate will have expertise in Python, workflow orchestration, cloud platforms (GCP/Google BigQuery), big data frameworks (Apache Spark or similar), API integration, and Oracle EBS. The role involves designing, developing, and maintaining scalable data pipelines, integrating various systems, and ensuring data quality and consistency across platforms. Knowledge of Ascend.io is a plus.
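
As a small illustration of the orchestration piece, a minimal Airflow TaskFlow DAG with placeholder tasks; it assumes a recent Airflow 2.x installation, and the task bodies stand in for real API, BigQuery, or Oracle EBS work:

  from datetime import datetime
  from airflow.decorators import dag, task

  # Minimal extract -> transform -> load chain using the TaskFlow API.
  @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["example"])
  def orders_pipeline():
      @task
      def extract() -> list[dict]:
          return [{"order_id": 1, "amount": 120.0}]  # placeholder for a source pull

      @task
      def transform(rows: list[dict]) -> list[dict]:
          return [{**r, "amount_inr": r["amount"] * 83.2} for r in rows]

      @task
      def load(rows: list[dict]) -> None:
          print(f"would load {len(rows)} rows to the warehouse")

      load(transform(extract()))

  orders_pipeline()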

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and workflows.
  • Develop and optimize ETL/ELT processes using Python and workflow automation tools.
  • Implement and manage data integration between various systems, including APIs and Oracle EBS.
  • Work with Google Cloud Platform (GCP) or Google BigQuery (GBQ) for data storage, processing, and analytics.
  • Utilize Apache Spark or similar big data frameworks for efficient data processing.
  • Develop robust API integrations for seamless data exchange between applications.
  • Ensure data accuracy, consistency, and security across all systems.
  • Monitor and troubleshoot data pipelines, identifying and resolving performance issues.
  • Collaborate with data analysts, engineers, and business teams to align data solutions with business goals.
  • Document data workflows, processes, and best practices for future reference.

Required Skills & Qualifications:

  • Strong proficiency in Python for data engineering and workflow automation.
  • Experience with workflow orchestration tools (e.g., Apache Airflow, Prefect, or similar).
  • Hands-on experience with Google Cloud Platform (GCP) or Google BigQuery (GBQ).
  • Expertise in big data processing frameworks, such as Apache Spark.
  • Experience with API integrations (REST, SOAP, GraphQL) and handling structured/unstructured data.
  • Strong problem-solving skills and ability to optimize data pipelines for performance.
  • Experience working in an agile environment with CI/CD processes.
  • Strong communication and collaboration skills.

Preferred Skills & Nice-to-Have:

  • Experience with Ascend.io platform for data pipeline automation.
  • Knowledge of SQL and NoSQL databases.
  • Familiarity with Docker and Kubernetes for containerized workloads.
  • Exposure to machine learning workflows is a plus.

Why Join Us?

  • Opportunity to work on cutting-edge data engineering projects.
  • Collaborative and dynamic work environment.
  • Competitive compensation and benefits.
  • Professional growth opportunities with exposure to the latest technologies.

How to Apply:

Interested candidates can apply by sending their resume to [your email/contact].

 

Read more
Inncircles
Gangadhar M
Posted by Gangadhar M
Hyderabad
3 - 5 yrs
Best in industry
PySpark
Spark
Python
ETL
Amazon EMR
+7 more


We are looking for a highly skilled Sr. Big Data Engineer with 3-5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience in implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.
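
As a rough illustration of the streaming side, a minimal Spark Structured Streaming sketch that consumes a CDC-style Kafka topic and lands it as Parquet; the topic, broker addresses, and paths are hypothetical, and the spark-sql-kafka connector must be available on the classpath:

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("orders-cdc-stream").getOrCreate()

  # Read the raw CDC events from Kafka.
  raw = (
      spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "orders.cdc")
      .option("startingOffsets", "latest")
      .load()
  )

  # Keep the key, payload, and event time as strings for a bronze landing zone.
  events = raw.select(
      F.col("key").cast("string").alias("record_key"),
      F.col("value").cast("string").alias("payload_json"),
      "timestamp",
  )

  query = (
      events.writeStream.format("parquet")
      .option("path", "s3a://example-bucket/bronze/orders_cdc/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders_cdc/")
      .outputMode("append")
      .start()
  )
  query.awaitTermination()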


Responsibilities

  • Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python.
  • Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis (a minimal streaming sketch follows this list).
  • Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium or similar frameworks.
  • Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
  • Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
  • Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
  • Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
  • Implement monitoring, logging, and alerting for critical data pipelines.
  • Follow best practices for data security, compliance, and cost optimization in cloud environments.
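
As a rough illustration of the streaming side of this role, below is a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them on S3 as Parquet. The broker address, topic name, schema, and paths are assumptions chosen for the example, not specifics of this posting, and the Kafka source also needs the spark-sql-kafka connector package at submit time.

```python
# Illustrative only: consume JSON events from Kafka with Spark Structured
# Streaming and write them to S3 as Parquet. Broker, topic, schema, and
# paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder
    .appName("example-kafka-to-s3")      # hypothetical app name
    .getOrCreate()
)

# Assumed event schema for the example.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")    # placeholder broker
    .option("subscribe", "orders_cdc")                    # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse it as JSON.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/orders/")                # placeholder sink
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```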


Required Skills & Experience

  • Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
  • Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
  • Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
  • CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks.
  • AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
  • ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
  • Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
  • Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
  • Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
  • Monitoring/Logging: CloudWatch, Prometheus, ELK/Opensearch.
  • Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.


Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
  • Experience in large-scale data lake / lake house architectures.
  • Knowledge of data warehousing concepts and query optimisation.
  • Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
  • Exposure to ML/AI data pipelines is a plus.


Tools & Technologies (must-have exposure)

  • Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
  • Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
  • Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
  • Programming & Scripting: Python, SQL, Bash
  • Orchestration: Airflow / Step Functions
  • Version Control & CI/CD: Git, Jenkins/CodePipeline
  • Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
Read more
Bluecopa
Mumbai, Bengaluru (Bangalore), Delhi
3 - 6 yrs
₹14L - ₹15L / yr
JIRA
ETL
confluence
R2R
Financial analysis
+3 more

Required Qualifications

  • Bachelor’s degree with a Commerce background / MBA in Finance (mandatory).
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong expertise in accounting principles, R2R (Record-to-Report), treasury, and financial workflows.
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries); an illustrative query sketch follows this list
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success
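
To make the SQL expectation concrete, here is a small, self-contained sketch (using Python's built-in sqlite3 purely for illustration) of the kind of CTE-plus-join query the role assumes comfort with. The invoice and payment tables, column names, and reconciliation rule are invented for the example.

```python
# Self-contained illustration of a CTE + join query of the kind referenced
# above. sqlite3 and the invoice/payment tables are purely illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
    CREATE TABLE payments (invoice_id INTEGER, paid REAL);
    INSERT INTO invoices VALUES (1, 'Acme', 1000.0), (2, 'Globex', 500.0);
    INSERT INTO payments VALUES (1, 400.0), (1, 350.0);
""")

# The CTE aggregates payments per invoice, then joins back to find open balances.
query = """
WITH paid_totals AS (
    SELECT invoice_id, SUM(paid) AS total_paid
    FROM payments
    GROUP BY invoice_id
)
SELECT i.customer,
       i.amount - COALESCE(p.total_paid, 0) AS outstanding
FROM invoices AS i
LEFT JOIN paid_totals AS p ON p.invoice_id = i.id
WHERE i.amount - COALESCE(p.total_paid, 0) > 0;
"""

for customer, outstanding in conn.execute(query):
    print(customer, outstanding)   # e.g. Acme 250.0, Globex 500.0
```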

Preferred Qualifications

  • Prior experience implementing financial automation tools (e.g., SAP, Oracle, Anaplan, Blackline)
  • Familiarity with API integrations and basic data mapping
  • Experience in agile/scrum-based implementation environments
  • Exposure to reconciliation, book closure, AR/AP, and reporting systems
  • PMP, CSM, or similar certifications
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bharanidharan K
Posted by Bharanidharan K
Mumbai
7 - 12 yrs
Best in industry
SQL
SQL server
Databases
Performance tuning
Stored Procedures
+2 more

Required Skills and Qualifications :


  • Bachelor’s degree in Computer Science, Information Technology, or a related field. 
  • Proven experience as a Data Modeler or in a similar role at an asset manager or financial firm.
  • Strong understanding of various business concepts related to buy-side financial firms. Understanding of Private Markets (Private Credit, Private Equity, Real Estate, Alternatives) is required.
  • Strong understanding of database design principles and data modeling techniques (e.g., ER modeling, dimensional modeling). 
  • Knowledge of SQL and experience with relational databases (e.g., Oracle, SQL Server, MySQL). 
  • Familiarity with NoSQL databases is a plus. 
  • Excellent analytical and problem-solving skills. 
  • Strong communication skills and the ability to work collaboratively. 


Preferred Qualifications: 

  • Experience in data warehousing and business intelligence. 
  • Knowledge of data governance practices. 
  • Certification in data modeling or related fields.

Key Responsibilities :

  • Design and develop conceptual, logical, and physical data models based on business requirements (a small illustrative model sketch follows this list).
  • Collaborate with stakeholders in finance, operations, risk, legal, compliance and front offices to gather and analyze data requirements. 
  • Ensure data models adhere to best practices for data integrity, performance, and security. 
  • Create and maintain documentation for data models, including data dictionaries and metadata. 
  • Conduct data profiling and analysis to identify data quality issues. 
  • Conduct detailed meetings and discussions with business to translate broad business functionality requirements into data concepts, data models and data products.
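
As a hedged illustration of dimensional modeling, the sketch below defines one dimension and one fact table as SQLAlchemy models. The entities (a security dimension and a daily position fact for a buy-side book) and their columns are assumptions chosen to fit the domain, not the firm's actual model.

```python
# Hypothetical star-schema fragment: a security dimension and a daily
# position fact. Entity and column names are illustrative only.
from sqlalchemy import Column, Date, ForeignKey, Integer, Numeric, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class DimSecurity(Base):
    """Dimension: one row per security, carrying descriptive attributes."""
    __tablename__ = "dim_security"

    security_key = Column(Integer, primary_key=True)
    ticker = Column(String(20), nullable=False)
    asset_class = Column(String(50))     # e.g. private credit, real estate
    issuer = Column(String(200))


class FactPosition(Base):
    """Fact: one row per security per valuation date, carrying the measures."""
    __tablename__ = "fact_position"

    position_key = Column(Integer, primary_key=True)
    security_key = Column(Integer, ForeignKey("dim_security.security_key"), nullable=False)
    as_of_date = Column(Date, nullable=False)
    quantity = Column(Numeric(18, 4))
    market_value = Column(Numeric(18, 2))


if __name__ == "__main__":
    # Create the tables in an in-memory SQLite database just to show the DDL works.
    engine = create_engine("sqlite://", echo=True)
    Base.metadata.create_all(engine)
```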


Read more
Nyx Wolves
Remote only
5 - 8 yrs
₹11L - ₹13L / yr
Denodo VDP
Denodo Scheduler
Denodo Data Catalog
SQL server
Query optimization
+4 more


💡 Transform Banking Data with Us!


We’re on the lookout for a Senior Denodo Developer (Remote) to shape the future of data virtualization in the banking domain. If you’re passionate about turning complex financial data into actionable insights, this role is for you! 🚀


What You’ll Do:

✔ Build cutting-edge Denodo-based data virtualization solutions

✔ Collaborate with banking SMEs, architects & analysts

✔ Design APIs, data services & scalable models

✔ Ensure compliance with global banking standards

✔ Mentor juniors & drive best practices


💼 What We’re Looking For:

🔹 6+ years of IT experience (3+ years in Denodo)

🔹 Strong in Denodo VDP, Scheduler & Data Catalog

🔹 Skilled in SQL, optimization & performance tuning

🔹 Banking/Financial services domain expertise (CBS, Payments, KYC/AML, Risk & Compliance)

🔹 Cloud knowledge (AWS, Azure, GCP)

📍 Location: Remote


🎯 Experience: 6+ years

🌟 Catchline for candidates:


👉 “If you thrive in the world of data and want to make banking smarter, faster, and more secure — this is YOUR chance!”


📩 Apply Now:

  • Connect with me here on Cutshort and share your resume/message directly.


Let’s build something great together 🚀


#WeAreHiring #DenodoDeveloper #BankingJobs #RemoteWork #DataVirtualization #FinTechCareers #DataIntegration #TechTalent

Read more
Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by susmitha o
Chennai, Hyderabad, Kolkata, Delhi, Pune, Bengaluru (Bangalore)
5 - 8 yrs
₹7L - ₹30L / yr
Informatica MDM
MDM
ETL
Big Data

• Technical expertise in developing Master Data Management (MDM), data extraction, transformation, and load (ETL), and big data applications using existing and emerging technology platforms and cloud architecture.

• Functions as the lead developer: supports system analysis, technical/data design, development, and unit testing, and oversees the end-to-end data solution.

• Technical SME in the Master Data Management application, ETL, big data, and cloud technologies.

• Collaborate with IT teams to ensure technical designs and implementations account for requirements, standards, and best practices.

• Performance tuning of end-to-end MDM, database, ETL, and big data processes, or in the source/target database endpoints as needed.

• Mentor and advise junior members of the team.

• Perform a technical lead and solution lead role for a team of onshore and offshore developers.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Archana M
Posted by Archana M
Mumbai
5 - 7 yrs
Best in industry
ETL
Python
Apache Spark

📢 DATA SOURCING & ANALYSIS EXPERT (L3 Support) – Mumbai 📢

Are you ready to supercharge your Data Engineering career in the financial domain?

We’re seeking a seasoned professional (5–7 years of experience) to join our Mumbai team and take the lead on data sourcing, modelling, and analysis. If you’re passionate about solving complex challenges in Relational & Big Data ecosystems, this role is for you.

What You’ll Be Doing

  • Translate business needs into robust data models, program specs, and solutions
  • Perform advanced SQL optimization, query tuning, and L3-level issue resolution
  • Work across the entire data stack: ETL, Python / Spark, Autosys, and related systems
  • Debug, monitor, and improve data pipelines in production
  • Collaborate with business, analytics, and engineering teams to deliver dependable data services

What You Should Bring

  • 5+ years in financial / fintech / capital markets environment
  • Proven expertise in relational databases and big data technologies
  • Strong command over SQL tuning, query optimization, indexing, partitioning
  • Hands-on experience with ETL pipelines, Spark / PySpark, Python scripting, job scheduling (e.g. Autosys)
  • Ability to troubleshoot issues at the L3 level, root cause analysis, performance tuning
  • Good communication skills — you’ll coordinate with business users, analytics, and tech teams


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Manasa S
Posted by Manasa S
Mumbai
6 - 12 yrs
Best in industry
Informatica
Stored Procedures
SQL
ETL

We are looking for an experienced DB2 developer/DBA who has worked on a critical application with a large database. The role requires the candidate to understand the landscape of the application and its data, including its topology across the online data store and the data warehousing counterparts. The challenges we strive to solve include scalability and performance when dealing with very large data sets and multiple data sources.

The role involves collaborating with global team members and provides a unique opportunity to network with a diverse group of people.

The candidate who fills this Database Developer role in our team will be involved in building and creating solutions from the requirements stage through deployment. A successful candidate is self-motivated, innovative, thinks outside the box, has excellent communication skills, and can work with ease with clients and stakeholders from both the business and technology sides.


Required Skills:

  • Expertise in writing complex data retrieval queries, stored procedures, and performance tuning
  • Experience in migrating a large-scale database from Sybase to a new tech stack
  • Expertise in relational databases (Sybase, Azure SQL Server, DB2) and NoSQL databases
  • Strong knowledge of Linux shell scripting
  • Working knowledge of Python programming
  • Working knowledge of Informatica
  • Good knowledge of Autosys or a similar scheduling tool
  • Detail oriented, with the ability to turn deliverables around quickly with a high degree of accuracy
  • Strong analytical skills; ability to interpret business requirements and produce functional and technical design documents
  • Good time management skills: the ability to prioritize and multi-task, handling multiple efforts at once
  • Strong desire to understand and learn the domain


Desired Skills:

  • Experience in Sybase, Azure SQL Server, DB2
  • Experience in migrating a relational database to a modern tech stack
  • Experience in the financial services/banking industry

Read more
Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Mumbai, Chennai
1 - 3 yrs
₹5L - ₹8L / yr
Python
SQL
Data Structures
ETL
Dashboard
+3 more

About Us:

PluginLive is an all-in-one tech platform that bridges the gap between all its stakeholders: Corporates, Institutes, Students, and Assessment & Training Partners. This ecosystem helps Corporates with brand building and positioning among colleges and the student community so they can scale their human capital, while increasing student placements for Institutes and giving Students a real-time perspective of the corporate world to help them upskill into more desirable candidates.


Role Overview:

Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.


Responsibilities:

  • Assist in building and maintaining ETL/ELT pipelines for data processing
  • Write SQL queries to extract and analyze data from various sources
  • Support data quality checks and basic data validation processes
  • Create simple dashboards and reports using visualization tools
  • Learn and work with Oracle Cloud services under guidance
  • Use Python for basic data manipulation and cleaning tasks (a small pandas sketch follows this list)
  • Document data processes and maintain data dictionaries
  • Collaborate with team members to understand data requirements
  • Participate in troubleshooting data issues with senior support
  • Contribute to data migration tasks as needed
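
To give a flavour of the day-to-day Python work, here is a minimal, hedged pandas sketch for a routine cleaning task: normalising column names, fixing types, dropping duplicates, and removing rows with missing keys. The column names, sample rows, and rules are invented for illustration.

```python
# Illustrative pandas cleaning step; columns, rows, and rules are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "Student Name": ["Asha", "Ravi", "Ravi", None],
    "Placement Score": ["78", "81", "81", "65"],
    "Applied On": ["2024-01-05", "2024-01-06", "2024-01-06", "2024-01-07"],
})

df = (
    raw
    # snake_case the column names
    .rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
    # drop exact duplicate rows
    .drop_duplicates()
    # coerce scores to numbers and dates to datetimes
    .assign(
        placement_score=lambda d: pd.to_numeric(d["placement_score"], errors="coerce"),
        applied_on=lambda d: pd.to_datetime(d["applied_on"]),
    )
    # drop rows missing the key field
    .dropna(subset=["student_name"])
)

print(df.dtypes)
print(df)
```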


Qualifications:

Required:

  • Bachelor's degree in Computer Science, Information Systems, or related field
  • Around 2 years of experience in data engineering or a related field
  • Strong SQL knowledge and database concepts
  • Comfortable with Python programming
  • Understanding of data structures and ETL concepts
  • Problem-solving mindset and attention to detail
  • Good communication skills
  • Willingness to learn cloud technologies


Preferred:

  • Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
  • Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
  • Experience with Pandas for data manipulation
  • Understanding of data warehousing concepts
  • Familiarity with version control (Git)
  • Academic projects or internships involving data processing


Nice-to-Have:

  • Knowledge of dbt, BigQuery, or Snowflake
  • Exposure to big data concepts
  • Experience with Jupyter notebooks
  • Comfort with AI-assisted coding tools (Copilot, GPTs)
  • Personal projects showcasing data work


What We Offer:

  • Mentorship from senior data engineers
  • Hands-on learning with modern data stack
  • Access to paid AI tools and learning resources
  • Clear growth path to mid-level engineer
  • Direct impact on product and data strategy
  • No unnecessary meetings — focused execution
  • Strong engineering culture with continuous learning opportunities
Read more
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Remote only
3 - 7 yrs
₹8L - ₹20L / yr
Google Cloud Platform (GCP)
ETL
Python
Big Data
SQL
+4 more

Must have skills:

1. GCP: GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java

2. ETL on GCP: building pipelines (Python/Java) plus scripting, best practices, and common challenges

3. Knowledge of batch and streaming data ingestion; able to build end-to-end data pipelines on GCP

4. Knowledge of databases (SQL and NoSQL), on-premise and on-cloud; SQL vs. NoSQL trade-offs; types of NoSQL databases (at least 2)

5. Data warehouse concepts: beginner to intermediate level


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Design and implement dimensional and fact tables.

● Identify and implement data transformation/cleansing requirements.

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems to the Enterprise Data Warehouse (a minimal load-step sketch follows this list).

● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.

● Design, develop, and maintain ETL workflows and mappings using the appropriate data load technique.

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform into reporting and analytics.

● Define and document the use of BI through user experience/use cases, prototypes, testing, and deployment of BI solutions.

● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; continuously validate reports and dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.
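
As a minimal, hedged example of the GCP pipeline work described above, the sketch below loads a CSV extract from Cloud Storage into a BigQuery staging table using the official Python client. The project, bucket, dataset, and table names are placeholders, not specifics from this posting.

```python
# Illustrative load step: stage a CSV file from GCS into BigQuery.
# Project, bucket, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

table_id = "my-project.staging.sales_raw"             # placeholder target table
source_uri = "gs://my-bucket/exports/sales_2024.csv"  # placeholder source file

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,          # skip the header row
    autodetect=True,              # infer the schema for this simple staging load
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # block until the load completes (raises on failure)

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```

In practice a step like this would sit behind an orchestrator (Airflow/Composer) and feed downstream transformation and dimensional-modeling jobs.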

Read more
Aceis Services

at Aceis Services

2 candid answers
Anushi Mishra
Posted by Anushi Mishra
Remote only
2 - 10 yrs
₹8.6L - ₹30.2L / yr
CI/CD
Apache Spark
PySpark
MLOps
Machine Learning (ML)
+6 more

We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!

Key Details

  • Work Type: Freelance / Contract
  • Location: Remote
  • Time Zones: IST / EST only
  • Domain: Data & AI, Cloud, Big Data, Machine Learning
  • Collaboration: Work with industry leaders on innovative projects

🔹 Open Roles

1. Databricks – Senior Consultant

  • Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
  • Experience: 6+ years

2. Databricks – ML Engineer

  • Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
  • Experience: 4+ years

3. Databricks – Solution Architect

  • Skills: Azure, GCP, AWS, CI/CD, MLOps
  • Experience: 7+ years

4. Databricks – Solution Consultant

  • Skills: SQL, Spark, BigQuery, Python, Scala
  • Experience: 2+ years

What We Offer

  • Opportunity to work with top-tier professionals and clients
  • Exposure to cutting-edge technologies and real-world data challenges
  • Flexible remote work environment aligned with IST / EST time zones
  • Competitive compensation and growth opportunities

📌 Skills We Value

Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Anurag Sinha
Posted by Anurag Sinha
Bengaluru (Bangalore), Mumbai, Pune
4 - 8 yrs
Best in industry
Python
API
RESTful APIs
Flask
ETL
+1 more
  • 4+ years of experience
  • Proficiency in Python programming
  • Experience with Python service development (REST API / Flask); a minimal endpoint sketch follows this list
  • Basic knowledge of front-end development
  • Basic knowledge of data manipulation and analysis libraries
  • Code versioning and collaboration (Git)
  • Knowledge of libraries for extracting data from websites (web scraping)
  • Knowledge of SQL and NoSQL databases
  • Familiarity with cloud technologies (Azure/AWS)
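
As a rough sketch of the Python service development mentioned above, here is a minimal Flask REST service with one resource. The /items resource, its fields, and the in-memory store are invented for illustration.

```python
# Minimal illustrative Flask service exposing a REST resource.
# The /items resource and its fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store, purely for demonstration.
ITEMS = {1: {"id": 1, "name": "sample"}}


@app.get("/items")
def list_items():
    # Return all items as a JSON array.
    return jsonify(list(ITEMS.values()))


@app.post("/items")
def create_item():
    # Accept a JSON body and create a new item with the next id.
    payload = request.get_json(force=True)
    new_id = max(ITEMS, default=0) + 1
    ITEMS[new_id] = {"id": new_id, "name": payload.get("name", "")}
    return jsonify(ITEMS[new_id]), 201


if __name__ == "__main__":
    app.run(debug=True)   # dev server only; use a WSGI server in production
```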


Read more
Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Hyderabad, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹15L / yr
Project Management
SQL Query Analyzer
JIRA
confluence
Implementation
+5 more

Role: Technical Lead - Finance Solutions

Exp: 3 - 6 Years

CTC: up to 20 LPA



Required Qualifications

  • Bachelor’s degree in Finance, Business Administration, Information Systems, or related field
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong understanding of accounting principles and financial workflows
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries)
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success


Read more