
50+ ETL Jobs in India

Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

Financial Services Co

Agency job
via Vikash Technologies by Rishika Teja
Mumbai, Navi Mumbai
8 - 10 yrs
₹15L - ₹21L / yr
FIS CLS
ACBS
Manual testing
Test Automation (QA)
SQL
+1 more

Data Migration – Quality Analyst (Senior QA)


Experience: 8 - 10 yrs

Education: Any Graduate

Work Location: Navi Mumbai (WFO)


Skills:

  • Strong domain experience in Commercial Lending / Loans
  • Hands-on experience with FIS CLS or ACBS (mandatory)
  • Test case design & execution (manual + automation)
  • Strong SQL, Excel, and Power BI skills
  • Experience with ETL / data migration tools; a reconciliation sketch follows this list
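For illustration, below is a minimal sketch of the kind of source-to-target reconciliation check a data migration QA might automate. It uses Python's sqlite3 as a stand-in for the real source and target databases; the table and column names are hypothetical, not from this posting:

```python
import sqlite3

# In-memory database standing in for two systems; in practice you would hold
# one connection to the legacy source and one to the target (e.g., FIS CLS/ACBS).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_loans (loan_id TEXT, principal REAL);
    CREATE TABLE tgt_loans (loan_id TEXT, principal REAL);
    INSERT INTO src_loans VALUES ('L1', 100.0), ('L2', 250.5);
    INSERT INTO tgt_loans VALUES ('L1', 100.0), ('L2', 250.5);
""")

def reconcile(table_src: str, table_tgt: str) -> None:
    """Compare row counts and a simple sum checksum between source and target."""
    for metric, sql in [
        ("row_count", "SELECT COUNT(*) FROM {t}"),
        ("principal_sum", "SELECT ROUND(SUM(principal), 2) FROM {t}"),
    ]:
        src = conn.execute(sql.format(t=table_src)).fetchone()[0]
        tgt = conn.execute(sql.format(t=table_tgt)).fetchone()[0]
        print(f"{metric}: source={src} target={tgt} -> "
              f"{'OK' if src == tgt else 'MISMATCH'}")

reconcile("src_loans", "tgt_loans")
```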



Adsremedy Media LLP
Posted by Soumya Kshirsagar
Remote, Mumbai
1 - 3 yrs
₹3L - ₹10L / yr
ETL

About the Role:

We are seeking a skilled Data Engineer to join our growing AdTech team. In this role, you will design, build, and maintain high-performance ETL pipelines and large-scale data processing systems. You will work with massive datasets and distributed frameworks to power Adsremedy’s data-driven advertising solutions across Programmatic, In-App, CTV, and DOOH platforms.


What You’ll Do:

  • Design, develop, and maintain scalable ETL pipelines on self-managed infrastructure
  • Process and optimize large-scale datasets (terabytes of data) with high reliability and performance
  • Build robust data processing workflows using Apache Spark (preferred) and/or Apache Flink; a minimal sketch follows this list
  • Integrate, clean, and transform data from multiple internal and external sources
  • Partner closely with data scientists, analysts, and business stakeholders to enable actionable insights
  • Monitor, troubleshoot, and optimize data pipelines for operational excellence
  • Ensure data quality, consistency, and performance across all data workflows
  • Participate in code reviews and uphold best practices in data engineering
  • Collaborate with QA teams to deliver production-ready, reliable systems
  • Mentor junior engineers and promote knowledge sharing within the team
  • Stay current with emerging data engineering tools, frameworks, and industry trends
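As a rough illustration of the Spark bullet above, here is a minimal PySpark ETL sketch; the paths, column names, and filter condition are hypothetical placeholders rather than anything from this posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adtech-etl-sketch").getOrCreate()

# Extract: read raw ad events (placeholder path; s3a access needs hadoop-aws).
events = spark.read.json("s3a://example-bucket/raw/ad_events/")

# Transform: keep impressions and aggregate per campaign per day.
daily = (
    events
    .where(F.col("event_type") == "impression")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("campaign_id", "event_date")
    .agg(F.count("*").alias("impressions"))
)

# Load: write partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_impressions/"
)
```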


What You’ll Need:

  • 2+ years of experience building ETL pipelines using Apache Spark and/or Apache Flink
  • Hands-on experience with big data caching solutions such as ScyllaDB, Aerospike, or similar
  • Strong understanding of data lake architectures and tools like Delta Lake
  • Proven experience handling terabytes of data in distributed environments
  • Proficiency in Scala, Python, or Java
  • Experience working with cloud data platforms (AWS S3, Azure Data Lake, Google BigQuery)
  • Strong knowledge of SQL, data modeling, and data warehousing concepts
  • Familiarity with Git and CI/CD workflows
  • Excellent problem-solving skills and ability to work in a fast-paced, collaborative environment

Nice to Have

  • Experience with Apache Kafka for real-time data streaming
  • Familiarity with Apache Airflow or similar orchestration tools


The Client is in AI, data, and cloud solutions.

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
8 - 13 yrs
₹20L - ₹22L / yr
Python
SQL
Backend testing
API
SQL Azure
+5 more
  • 10+ years of software development experience
  • 3+ years in a technical leadership role
  • Strong expertise in Python and SQL
  • Experience building scalable APIs and backend systems; a minimal sketch follows this list
  • Solid understanding of database design and performance tuning
  • Experience with Azure cloud services (AWS familiarity preferred)
  • Working knowledge of ML/AI integration in enterprise systems
  • Experience in client-facing or consulting environments preferred
  • Experience with Databricks or modern data platforms
  • Exposure to ETL tools such as Talend
  • Experience with BI tools (e.g., Power BI)
  • Exposure to regulated domains such as Pharma, Healthcare
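Since the list above calls for Python APIs and backend systems, here is a minimal typed-endpoint sketch; FastAPI is our choice for illustration (the posting names no framework), and the routes are hypothetical:

```python
from typing import Optional

from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    """Simple liveness endpoint."""
    return {"status": "ok"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None) -> dict:
    """FastAPI validates the typed path and query parameters for us."""
    return {"item_id": item_id, "q": q}

# Serve with, e.g.: uvicorn main:app --reload  (module name is hypothetical)
```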
Tops Infosolutions
Posted by Zurin Momin
Ahmedabad
3 - 8 yrs
₹12L - ₹18L / yr
Data engineering
Python
AWS Lambda
Amazon Web Services (AWS)
ETL
+1 more

Job Title: Data Engineer


About the Role

We are looking for a highly motivated Data Engineer to join our growing team and play a critical role in shaping the data foundation of different software platforms. This role sits at the intersection of data engineering, product, and business stakeholders, and is responsible for building reliable data pipelines, delivering actionable insights, and ensuring data quality across systems.

You will work closely with internal teams and external partners to translate business requirements into scalable data solutions, while maintaining high standards for data integrity, performance, and usability.


Key Responsibilities


Data Engineering & Architecture


  • Design, build, and maintain scalable data pipelines and ETL/ELT processes
  • Develop and optimize data models in PostgreSQL and cloud-native architectures
  • Work within the AWS ecosystem (e.g., S3, Lambda, RDS, Glue, Redshift) to support data workflows; a minimal sketch follows this list
  • Ensure efficient ingestion and processing of large-scale datasets
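To make the AWS bullet concrete, here is a minimal sketch of a Lambda handler loading a CSV that lands in S3 into PostgreSQL. The bucket, table, environment variables, and two-column CSV layout are hypothetical, and a real deployment would add batching, retries, secret management, and package psycopg2 with the function:

```python
import csv
import io
import os

import boto3
import psycopg2

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put event; copies CSV rows into PostgreSQL."""
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(body)))  # assumes two columns per row

    conn = psycopg2.connect(
        host=os.environ["PG_HOST"],
        dbname=os.environ["PG_DB"],
        user=os.environ["PG_USER"],
        password=os.environ["PG_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO raw_events (event_id, payload) VALUES (%s, %s)",
            rows,
        )
    conn.close()
    return {"loaded_rows": len(rows)}
```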


Business & Partner Integration


  • Collaborate directly with business stakeholders and external partners to gather requirements and deliver reporting solutions
  • Translate ambiguous business needs into structured data models and dashboards
  • Integrate with third-party APIs and other external data sources


Data Quality & Governance


  • Implement robust data validation, monitoring, and QA processes
  • Ensure consistency, accuracy, and reliability of data across the platform
  • Troubleshoot and resolve data discrepancies proactively


Reporting & Analytics Enablement

  • Build datasets and pipelines that power dashboards and reporting tools
  • Support internal teams with ad hoc analysis and data requests
  • Partner with product and engineering teams to embed data into the SaaS product experience


Performance & Scalability

  • Optimize queries, pipelines, and storage for performance and cost efficiency
  • Continuously improve system scalability as data volume and complexity grow


Required Qualifications


 3–6+ years of experience in Data Engineering or related role

 Strong proficiency in Python for data processing and scripting

 Advanced experience with PostgreSQL (query optimization, schema design)

 Hands-on experience with AWS data architecture (S3, RDS, Lambda, Glue,

Redshift, etc.)

 Experience integrating with external APIs

 Solid understanding of ETL/ELT pipelines, data modeling, and warehousing

concepts

 Experience working cross-functionally with business stakeholders


Preferred Qualifications

  • Experience in AdTech, eCommerce, or SaaS platforms
  • Familiarity with BI tools (e.g., Looker, Tableau, Power BI)
  • Experience with workflow orchestration tools (e.g., Airflow)
  • Understanding of data governance and compliance best practices
  • Exposure to real-time or streaming data pipelines


What We’re Looking For


  • Strong problem-solver who can operate in a fast-paced, ambiguous environment
  • Ability to balance technical depth with business context
  • Excellent communication skills — able to work directly with non-technical stakeholders
  • Ownership mindset with a focus on execution and quality

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT
+17 more

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or dbt to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure; a BigQuery sketch follows this list
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution
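As a small illustration of the BigQuery bullet above, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical, and credentials are assumed to come from the environment:

```python
from google.cloud import bigquery

# Picks up project and credentials from the environment
# (e.g., GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client()

sql = """
    SELECT campaign_id, COUNT(*) AS events
    FROM `example_project.analytics.events`  -- hypothetical table
    WHERE event_date = @day
    GROUP BY campaign_id
    ORDER BY events DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.campaign_id, row.events)
```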


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
TalentXO
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
6 - 10 yrs
₹32L - ₹40L / yr
ETL
Data engineering
Dataform
BigQuery
dbt
+5 more

Note: “Urgently Hiring – Immediate Joiners Preferred”

Data Engineering

Role & Responsibilities

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP/BigQuery expertise.

Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or dbt to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution

Ideal Candidate

  • Strong Data Engineer Profile
  • Mandatory (Experience 1) – Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Mandatory (Experience 2) – Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Mandatory (Experience 3) – Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Mandatory (Experience 4) – Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Mandatory (Core Skill 1) – Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Mandatory (Core Skill 2) – Must have programming experience in Python and/or SQL for data processing.
  • Mandatory (Core Skill 3) – Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Preferred (Experience 1) – Exposure to data migration projects and/or data mesh architecture concepts.
  • Preferred (Skill 1) – Experience with Spark/PySpark or large-scale data processing frameworks.
  • Preferred (Company) – Experience working in product-based companies or data-driven environments.
  • Preferred (Education) – Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


Mango Sciences
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Python
SQL
ETL
Data pipeline
Data warehousing
+12 more

The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.

What You’ll Own

  • Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
  • Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
  • The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
  • Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
  • Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.

The Stack You’ll Command

  • Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
  • Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
  • Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs); a minimal DAG sketch follows this list.
  • Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
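To ground the Airflow bullet above, here is a minimal DAG sketch, assuming Airflow 2.4+ (for the `schedule` argument); the task logic, IDs, schedule, and SLA values are hypothetical:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from EHR export / claims API")  # placeholder task logic

def transform():
    print("normalize to FHIR-aligned model")  # placeholder task logic

default_args = {
    "retries": 2,                        # retry each failed task twice
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),           # flag tasks that run past their SLA
}

with DAG(
    dag_id="clinical_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform
```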

Who You Are

  • Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
  • Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
  • Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
  • Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."

Bonus Points for:

  • Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
  • Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
  • Search Expertise: Experience with near-real-time indexing via Elasticsearch.

To be considered for the next stage, please fill out the Google Form below with your updated resume.

 

Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7

 

Details: https://forms.gle/FGgkmQvLnS8tJqo5A

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Kochi (Cochin)
5 - 8 yrs
₹12L - ₹27L / yr
Snowflake
Metabase
MongoDB
Data Pipelines
Amazon Web Services (AWS)
+4 more

Job Description & Specification: 

Post Title: Data Engineer

Work Mode: Onsite, Kochi (UK time zone)


Role Overview: 

We are seeking a talented and experienced Data Engineer to join our team. The ideal candidate will have expertise in technologies such as Metabase, dbt, Stitch, Snowflake, Avo, and MongoDB. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics and data-driven decision-making processes.


Responsibilities:

  • Design, develop, and implement scalable data pipelines and ETL processes using tools such as Stitch and dbt to ingest, transform, and load data from various sources into our data warehouse (Snowflake); a minimal Snowflake sketch follows this list.
  • Implement data modeling best practices and standards using dbt to create and manage data models for reporting and analytics.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
  • Develop and maintain dashboards and visualizations in Metabase to enable self-service analytics and data exploration for internal teams.
  • Build and optimize ETL processes to ensure data quality and integrity.
  • Optimize data processing and storage solutions for performance, scalability, and reliability, leveraging cloud-based technologies.
  • Implement monitoring and alerting systems to proactively identify and address data issues.
  • Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and integrity of data.
  • Manage and optimize databases (e.g., MongoDB) for performance and scalability.
  • Develop and maintain documentation, best practices, and standards for data engineering processes and workflows.
  • Stay up to date with emerging technologies and trends in data engineering, machine learning, and analytics, and evaluate their potential impact on data strategy and architecture.
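As a hedged illustration of loading into Snowflake from Python (in practice Stitch and dbt would handle ingestion and modeling), here is a minimal sketch using the snowflake-connector-python client; the warehouse, database, and table names are hypothetical, and credentials are assumed to come from the environment:

```python
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",  # hypothetical warehouse
    database="ANALYTICS",      # hypothetical database
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Parameterized insert into a hypothetical staging table.
    cur.execute(
        "INSERT INTO student_events (event_id, event_type) VALUES (%s, %s)",
        ("evt-001", "course_started"),
    )
    cur.execute("SELECT COUNT(*) FROM student_events")
    print("rows in staging:", cur.fetchone()[0])
finally:
    conn.close()
```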


Requirements:

  • Bachelor's or Master's degree in Computer Science.
  • Minimum of 4 years of experience working as a data engineer, with expertise in Metabase, dbt, Stitch, Snowflake, Avo, and MongoDB.
  • Strong programming skills in languages like Python, and experience with SQL and database technologies (e.g., PostgreSQL, MySQL, MongoDB).
  • Hands-on experience with data integration tools (e.g., Stitch), data modeling tools (e.g., dbt), and BI platforms (e.g., Metabase).
  • Experience with cloud platforms such as AWS.
  • Strong understanding of data modeling concepts, database design, and data warehousing principles
  • Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka) and cloud-based data platforms (e.g., AWS EMR, Azure Databricks, Google BigQuery).
  • Familiarity with data integration tools, ETL processes, and workflow orchestration tools (e.g., Apache Airflow, Apache NiFi).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication skills with the ability to work effectively in a global team environment.
  • Experience in the education or Edtech industry is a plus.
  • Knowledge of Avo for schema management and versioning will be an added advantage.
  • Familiarity with machine learning algorithms, data science workflows, and analytics tools (e.g., TensorFlow, PyTorch, scikit-learn, Tableau).
  • Knowledge of distributed computing concepts and containerization technologies.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Certifications in cloud computing (e.g., AWS Certified Developer, Google Cloud Professional Data Engineer) or data engineering (e.g., Databricks Certified Associate Developer) are desirable.


Benefits:

  • Competitive salary and bonus structure based on performance and achievement of goals.
  • Comprehensive benefits package including medical insurance.


Join us in shaping the future of technology by applying your expertise as a Data Engineer. If you are passionate about driving innovation and delivering impactful solutions, we invite you to be part of our dynamic team. Apply now!

Public Listed - Product Based company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹70L / yr
Data Science
data platforms
Data-flow analysis
Data pipelines
AI Infrastructure
+28 more

🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)

Experience: 4–8 Years

Location: Bengaluru (On-site / Hybrid)

Company: Publicly Listed, Global Product Platform


🧠 About the Mission

We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.

This is not incremental improvement.

This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.

We are re-architecting:

  • Legacy systems → AI-native architectures
  • Static pipelines → autonomous, self-healing systems
  • Data platforms → intelligent, learning systems
  • Software workflows → agentic execution layers

This is the kind of shift you would expect from companies like Google or Microsoft — except here, you will build it from day zero and scale it globally.


🧠 The Opportunity: This role sits at the intersection of three high-impact domains:

1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI

2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines

3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure


We are building systems where:

  • Data platforms optimize themselves using ML/LLMs
  • Pipelines are autonomous, self-healing, and adaptive
  • Queries are generated, optimized, and executed intelligently
  • Infrastructure learns from usage and evolves continuously

This is: AI as the control plane for data infrastructure


🧩 What You’ll Work On

You will design and build AI-native systems deeply embedded inside data infrastructure.

1. AI-Native Data Platforms

  • Build LLM-powered interfaces: natural language → SQL / pipelines / transformations
  • Design semantic data layers: embeddings, vector search, knowledge graphs (a toy retrieval sketch follows this list)
  • Develop AI copilots for data engineers, analysts, and platform users
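To make the semantic-layer bullet concrete, here is a toy sketch of embedding-based retrieval with cosine similarity; the embeddings are random stand-ins, whereas a real system would use a trained embedding model and a vector database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings; in practice these come from an embedding model.
corpus = ["daily revenue by region", "pipeline failure logs", "user churn cohort"]
corpus_vecs = rng.normal(size=(len(corpus), 8))
query_vec = corpus_vecs[1] + 0.1 * rng.normal(size=8)  # "near" document 1

def cosine_sim(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of doc vectors."""
    return (docs @ query) / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))

scores = cosine_sim(query_vec, corpus_vecs)
best = int(np.argmax(scores))
print(f"best match: {corpus[best]!r} (score={scores[best]:.3f})")
```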

2. Autonomous Data Pipelines

  • Build self-healing ETL/ELT systems using AI agents
  • Create pipelines that detect anomalies in real time, automatically debug failures, and dynamically optimize transformations

3. Intelligent Query & Compute Optimization

  • Apply ML/LLMs to query planning and execution, cost-based optimization using learned models, and workload prediction and scheduling
  • Build systems that learn from query patterns and continuously improve performance and cost efficiency

4. Distributed Data + AI Infrastructure

  • Architect systems operating at billions of events per day and petabyte-scale data
  • Work with distributed compute engines (Spark / Flink / Ray-class systems), streaming systems (Kafka-class infra), and vector databases and hybrid retrieval systems

5. Learning Systems & Feedback Loops

  • Build closed-loop AI systems: execution → feedback → model updates
  • Develop continual learning pipelines, online learning systems for infra optimization, and experimentation frameworks (A/B, bandits, eval pipelines)

6. LLM & Agentic Systems (Infra-Aware)

  • Build agents that understand data systems
  • Enable autonomous pipeline debugging, root cause analysis for infra failures, and intelligent orchestration of data workflows


🧠 What We’re Looking For

Core Foundations

  • Strong grounding in machine learning, deep learning, and NLP; statistics, optimization, and probabilistic systems; and distributed systems fundamentals
  • Deep understanding of transformer architectures and modern LLM ecosystems

Hands-On Expertise

  • Experience building LLM / GenAI systems (RAG, fine-tuning, embeddings), data platforms (warehouse, lake, lakehouse architectures), and distributed pipelines and compute systems
  • Strong programming skills: Python (ML/AI stack) and SQL (deep understanding — query planning, optimization mindset)


Systems Thinking (Critical)

You think in systems, not components.

  • Built or worked on large-scale data pipelines, high-throughput distributed systems, and low-latency, high-concurrency architectures
  • Understand query optimization and execution; data partitioning, indexing, and caching; and trade-offs in distributed systems


🔥 What Sets You Apart (Top 1%)

  • Built AI-powered data platforms or infra systems in production
  • Designed or contributed to query engines / optimizers, data observability / lineage systems, or AI-driven infra / AIOps platforms
  • Experience with multi-modal AI (logs, metrics, traces, text), agentic AI systems, and autonomous infrastructure
  • Worked on systems at a scale comparable to Google (BigQuery-like systems), Meta (real-time analytics infra), or Snowflake / Databricks (lakehouse architectures)


🧬 Ideal Background (Not Mandatory)

We often see strong candidates from:

  • Data infrastructure or platform engineering teams
  • AI-first startups or research-driven environments
  • High-scale product companies

Experience building:

  • Internal platforms used by 1000s of engineers
  • Systems serving millions of users / high throughput workloads
  • Multi-region, distributed cloud systems


🧠 The Kind of Problems You’ll Solve

  • Can LLMs replace traditional query optimizers?
  • How do we build self-healing data pipelines at scale?
  • Can data systems learn from every query and improve automatically?
  • How do we embed reasoning and planning into infrastructure layers?
  • What does a fully autonomous data platform look like?


Backgrounds We Commonly See (But Not Limited To)

Our team often includes engineers from top-tier institutions and strong research or product backgrounds, including:

  • Leading engineering schools in India and globally
  • Engineers with experience in top product companies, AI startups, or research-driven environments

That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.


SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
NodeJS (Node.js)
RESTful APIs
NOSQL Databases
Systems design
+39 more

Job Details

Job Title: Senior Backend Engineer

Industry: SAAS

Function: Information Technology

Experience Required: 5-8 years

Working Days: 6 days a week (5 days in office, Saturdays WFH)

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL

 

Criteria

● Minimum 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in Node.js

● Deep understanding of distributed systems

● Strong NoSQL design skills

● Hands-on AWS cloud experience

● Proven leadership and mentoring capability

● Preferred: candidates from SAAS/Software/IT Services startups or scale-up companies

 

Job Description

The Role:

What You’ll Build:

1. System Architecture & Design

● Architect highly scalable backend systems from the ground up

● Define technology choices: frameworks, databases, queues, caching layers

● Evaluate microservices vs monoliths based on product stage

● Design REST, GraphQL, and real-time WebSocket APIs

● Build event-driven systems for asynchronous processing

● Architect multi-tenant systems with strict data isolation

● Maintain architectural documentation and technical specs

2. Core Backend Services

● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions

● Create 3D asset processing pipelines for uploads, conversions, and optimization

● Develop distributed job workers for CPU/GPU-intensive tasks

● Build authentication/authorization systems (RBAC)

● Implement billing, subscription, and usage metering

● Build secure webhook systems and third-party integration APIs

● Create real-time collaboration features via WebSockets/SSE

3. Data Architecture & Databases

● Design scalable schemas for 3D metadata, XR sessions, and analytics

● Model complex product catalogs with variants and hierarchies

● Implement Redis-based caching strategies

● Build search and indexing systems (Elasticsearch/Algolia)

● Architect ETL pipelines and data warehouses

● Implement sharding, partitioning, and replication strategies

● Design backup, restore, and disaster recovery workflows

4. Scalability & Performance

● Build systems designed for 10x–100x traffic growth

● Implement load balancing, autoscaling, and distributed processing

● Optimize API response times and database performance

● Implement global CDN delivery for heavy 3D assets

● Build rate limiting, throttling, and backpressure mechanisms (a token-bucket sketch follows this section)

● Optimize storage and retrieval of large 3D files

● Profile and improve CPU, memory, and network performance
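Although this posting's stack is Node-centric, the rate-limiting bullet above is language-agnostic; here is a minimal token-bucket sketch in Python, with rates and capacities chosen purely for illustration:

```python
import time

class TokenBucket:
    """Token-bucket limiter: allows bursts up to `capacity`, refilling at
    `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 req/s with bursts of 10
allowed = sum(bucket.allow() for _ in range(20))
print(f"{allowed} of 20 burst requests allowed")
```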

5. Infrastructure & DevOps

● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)

● Build CI/CD pipelines for automated deployments and rollbacks

● Use IaC tools (Terraform/CloudFormation) for infra provisioning

● Set up monitoring, logging, and alerting systems

● Use Docker + Kubernetes for container orchestration

● Implement security best practices for data, networks, and secrets

● Define disaster recovery and business continuity plans

6. Integration & APIs

● Build integrations with Shopify, WooCommerce, Magento

● Design webhook systems for real-time events

● Build SDKs, client libraries, and developer tools

● Integrate payment gateways (Stripe, Razorpay)

● Implement SSO and OAuth for enterprise customers

● Define API versioning and lifecycle/deprecation strategies

7. Data Processing & Analytics

● Build analytics pipelines for engagement, conversions, and XR performance

● Process high-volume event streams at scale

● Build data warehouses for BI and reporting

● Develop real-time dashboards and insights systems

● Implement analytics export pipelines and platform integrations

● Enable A/B testing and experimentation frameworks

● Build personalization and recommendation systems

 

Technical Stack:

1. Backend Languages & Frameworks 

●  Primary: Node.js (Express, NestJS), Python (FastAPI, Django)

●  Secondary: Go, Java/Kotlin (Spring)

●  APIs: REST, GraphQL, gRPC


2. Databases & Storage

● SQL: PostgreSQL, MySQL

● NoSQL: MongoDB, DynamoDB

● Caching: Redis, Memcached

● Search: Elasticsearch, Algolia

● Storage/CDN: AWS S3, CloudFront

● Queues: Kafka, RabbitMQ, AWS SQS

 

3. Cloud & Infrastructure: 

● Cloud: AWS (primary), GCP/Azure (nice to have)

● Compute: EC2, Lambda, ECS, EKS

● Infrastructure: Terraform, CloudFormation

● CI/CD: GitHub Actions, Jenkins, CircleCI

● Containers: Docker, Kubernetes

 

4. Monitoring & Operations 

● Monitoring: Datadog, New Relic, CloudWatch

● Logging: ELK Stack, CloudWatch Logs

● Error Tracking: Sentry, Rollbar

● APM tools

 

5. Security & Auth

● Auth: JWT, OAuth 2.0, SAML

● Secrets: AWS Secrets Manager, Vault

● Security: Encryption (at rest/in transit), TLS/SSL, IAM

 


What We’re Looking For:

1. Must-Haves

● 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)

● Deep understanding of distributed systems and microservices

● Strong SQL/NoSQL design skills with performance optimization

● Hands-on AWS cloud experience

● Ability to write high-quality production code daily

● Experience building and scaling RESTful APIs

● Strong understanding of caching, sharding, horizontal scaling

● Solid security and best-practice implementation experience

● Proven leadership and mentoring capability


2. Highly Desirable

● Experience with large file processing (3D, video, images)

● Background in SaaS, multi-tenancy, or e-commerce

● Experience with real-time systems (WebSockets, streams)

● Knowledge of ML/AI infrastructure

● Experience with HA systems, DR planning

● Familiarity with GraphQL, gRPC, event-driven systems

● DevOps/infrastructure engineering background

● Experience with XR/AR/VR backend systems

● Open-source contributions or technical writing

● Prior senior technical leadership experience

 

Technical Challenges You’ll Solve:

● Designing large-scale 3D asset processing pipelines

● Serving XR content globally with ultra-low latency

● Scaling from thousands to millions of daily requests

● Efficiently handling CPU/GPU-heavy workloads

● Architecting multi-tenancy with complete data isolation

● Managing billions of analytics events at scale

● Building future-proof APIs with backward compatibility

 

Why company:

● Architectural Ownership: Build foundational systems from scratch

● Deep Technical Work: Solve distributed systems and scaling challenges

● Hands-On Impact: Design and code mission-critical infrastructure

● Diverse Problems: APIs, infra, data, ML, XR, asset processing

● Massive Scale Opportunity: Build systems for exponential growth

● Modern Stack and best practices

● Product Impact: Your architecture directly powers millions of users

● Leadership Opportunity: Shape engineering culture and direction

● Learning Environment: Stay at the forefront of backend engineering

● Backed by AWS, Microsoft, Google

 

Location & Work Culture:

● Location: Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: Builder mindset, strong ownership, technical excellence

● Team: Small, highly skilled backend and infra team

● Resources: AWS credits, latest tooling, learning budget

 

Software and consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹14L - ₹17L / yr
PowerBI
Business Intelligence (BI)
Business Analysis
Data Analytics
Data Visualization
+15 more

Description

Power BI JD


Mandatory:

• 5+ years of Power BI Report development experience.

• Building Analysis Services reporting models.

• Developing visual reports, KPI scorecards, and dashboards using Power BI desktop.

• Connecting data sources, importing data, and transforming data for Business intelligence.

• Analytical thinking for translating data into informative reports and visuals.

• Capable of implementing row-level security on data along with an understanding of application security layer models in Power BI.

• Strong command of writing DAX queries in Power BI Desktop.

• Expert in using advanced-level calculations on the data set.

• Responsible for design methodology and project documentation.

• Should be able to develop tabular and multidimensional models that are compatible with data warehouse standards.

• Very good communication skills; must be able to discuss requirements effectively with client teams and internal teams.

• Experience working with Microsoft Business Intelligence Stack having Power BI, SSAS, SSRS, and SSIS

• Must have experience with BI tools and systems such as Power BI, Tableau, and SAP.

• Must have 3-4 years of experience in data-specific roles.

• Have knowledge of database fundamentals such as multidimensional database design, relational database design, and more

• Knowledge of all the Power BI products (Power BI Premium, Power BI Report Server, Power BI Service, Power Query, etc.)

• Strong grip on data analytics

• Interact with customers to understand their business problems and provide best-in-class analytics solutions

• Proficient in SQL and Query performance tuning skills

• Understand data governance, quality and security and integrate analytics with these corporate platforms

• Attention to detail and ability to deliver accurate client outputs

• Experience working with large and multiple datasets / data warehouses

• Ability to derive insights from data and analysis and create presentations for client teams

• Experience with performance optimization of the dashboards

• Interact with UX/UI designers to create best-in-class visualizations for the business, harnessing all product capabilities.

• Resilience under pressure and against deadlines.

• Proactive attitude and an open outlook.

• Strong analytical problem-solving skills

• Skill in identifying data issues and anomalies during the analysis

• Strong business acumen and a demonstrated aptitude for analytics that incites action

• Ability to execute on design requirements defined by business

• Ability to understand required Power BI functionality from wireframes/ requirement documents

• Ability to architect and design reporting solutions based on client needs.

• Being able to communicate with internal/external customers, desire to develop communication and client-facing skills.

• Ability to seamlessly work with MS Excel working knowledge of pivot table and related functions


Good to have:

• Experience in working with Azure and connecting Synapse with Tableau

• Demonstrate strength in data modelling, ETL development, and data warehousing

• Knowledge of leading large-scale data warehousing and analytics projects using Azure, Synapse, MS SQL DB

• Good knowledge of building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets

• Good to have knowledge of Supply Chain Domain.

The Client is a global data analytics and AI solutions company.

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
4 - 6 yrs
₹15L - ₹20L / yr
PowerBI
Data modeling
SQL
API
SQL Azure
+4 more
  • Design, develop, and deploy interactive Power BI dashboards and reports for client projects
  • Build and optimize data models using star schema and snowflake schema design patterns
  • Develop complex DAX measures and calculated columns to support business requirements
  • Connect to and integrate data from multiple sources including SQL databases, Excel, APIs, cloud platforms, and data warehouses
  • Implement data transformation and cleansing using Power Query (M language)
  • Collaborate with clients and stakeholders to gather requirements and translate business needs into technical specifications
  • Optimize report performance through query optimization, aggregations, and efficient data modeling
  • Configure and manage Power BI Service including workspaces, datasets, dataflows, and row-level security (RLS)
  • Create and maintain documentation for data models, reports, and development processes
  • Provide training and support to end-users on Power BI reports and dashboards
  • Design and implement custom visualizations using Power BI custom visuals and third-party visual libraries.
  • Implement conditional formatting, dynamic titles, and dynamic content based on user selections
  • Stay current with Power BI updates, new features, and industry best practices


The Blue Owls Solutions

Posted by Apoorvo Chakraborty
Pune
2 - 5 yrs
₹10L - ₹18L / yr
PySpark
SQL
Python
Data engineering
ETL

Blue Owls Solutions is looking for a mid-level Azure Data Engineer with approximately 4 years of hands-on experience to join our growing data team. In this role, you will design, build, and maintain scalable data pipelines and architectures that power business-critical analytics and reporting. You'll work closely with cross-functional teams to transform raw data into reliable, high-quality datasets that drive decision-making across the organization.

Required Skills

  • 4+ years of professional experience as a Data Engineer or in a similar data-focused role
  • Strong proficiency in SQL for data manipulation, querying, and performance optimization
  • Hands-on experience with PySpark for large-scale data processing and transformation
  • Solid working knowledge of the Microsoft Azure ecosystem (Azure Data Factory, Azure Data Lake, Azure Synapse, etc.)
  • Experience with Microsoft Fabric for end-to-end data analytics workflows
  • Ability to design and implement robust data architectures including data warehouses, lakehouses, and ETL/ELT frameworks
  • Strong coding and scripting skills with Python
  • Proven problem-solving ability with a knack for debugging complex data issues and optimizing pipeline performance
  • Understanding of data modeling concepts, dimensional modeling, and data governance best practices


Interview Process

  • Take-Home Assessment
  • 60-Minute Technical Interview
  • Culture Fit Round


Preferred Skills & Certifications

  • Microsoft Certified: Fabric Analytics Engineer Associate (DP-600)
  • Microsoft Certified: Fabric Data Engineer Associate (DP-700)
  • Experience with CI/CD practices for data pipelines
  • Familiarity with version control systems such as Git
  • Exposure to real-time streaming data solutions
  • Experience working in Agile or Scrum environments
  • Strong communication skills with the ability to translate technical concepts for non-technical stakeholders

What We Offer

  • Competitive salary and performance-based bonuses
  • Flexible hybrid options
  • Opportunities for professional development, training, and certification sponsorship
  • A collaborative, innovation-driven team culture
  • Paid time off and company holidays
Hyderabad
5 - 8 yrs
₹15L - ₹30L / yr
ETL
Snowflake
Python
SQL
Fivetran
+4 more

Role Overview


We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.

You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.


Key Responsibilities


  • Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
  • Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
  • Develop and optimize data models, tables, and transformations in Snowflake.
  • Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
  • Ensure data reliability, integrity, and performance across pipelines.
  • Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
  • Implement data quality validation frameworks and automated checks across pipelines; a minimal sketch follows this list.
  • Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
  • Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
  • Collaborate with engineering, analytics, and product teams in an Agile development environment.
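To illustrate the kind of automated check the bullets above describe, here is a minimal pandas sketch; the column names and thresholds are hypothetical, and tools like Great Expectations or Deequ formalize the same idea. Run as-is, the duplicate order_id and the 25% null rate trip two checks deliberately, since bad batches should fail loudly:

```python
import pandas as pd

# Toy batch standing in for a pipeline's output.
df = pd.DataFrame({"order_id": [1, 2, 2, 4], "amount": [10.0, None, 5.5, 7.25]})

checks = {
    "order_id is unique": df["order_id"].is_unique,
    "amount null rate <= 10%": df["amount"].isna().mean() <= 0.10,
    "amount is non-negative": bool((df["amount"].dropna() >= 0).all()),
}

for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")

failures = [name for name, passed in checks.items() if not passed]
if failures:
    # In a real pipeline this would alert and halt the load.
    raise ValueError(f"data quality checks failed: {failures}")
```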


Required Technical Skills


Core Technologies


  • Strong hands-on experience with SQL
  • Python for data transformation and pipeline development
  • Workflow orchestration using Apache Airflow
  • Experience working with Snowflake data warehouse


Data Engineering Expertise


  • Strong understanding of ETL / ELT pipeline design
  • Data profiling and data quality validation techniques
  • Experience building data ingestion pipelines from APIs, files, and databases
  • Data modeling and schema design


Tools & Platforms


  • Data Quality Tools: Deequ, Great Expectations (GX), Splink
  • Data Integration Tools: Fivetran, Workato, Informatica
  • Cloud Platforms: AWS (preferred)
  • Version Control & DevOps: Git, CI/CD pipelines


Qualifications


  • 5–8 years of experience in Data Quality Engineering / Data Engineering
  • Strong expertise in SQL, Python, Airflow, and Snowflake
  • Experience working with large-scale datasets and distributed data systems
  • Solid understanding of data engineering best practices across the development lifecycle
  • Experience working in Agile environments (Scrum, sprint planning, etc.)
  • Strong analytical and problem-solving skills


What We Look For


  • Passion for data accuracy, reliability, and governance
  • Ability to identify and resolve complex data issues
  • Strong collaboration skills across data, engineering, and analytics teams
  • Ownership mindset and attention to data integrity and performance


Why Join Us


  • Opportunity to work on modern data platforms and large-scale datasets
  • Collaborate with high-performing data and engineering teams
  • Exposure to cloud data architecture and modern data tools
  • Competitive compensation and strong career growth opportunities
HireTo
Posted by Rishita Sharma
Hyderabad
5 - 13 yrs
₹15L - ₹30L / yr
Snowflake
Python
SQL
Windows Azure
Databricks
+4 more

Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup

Location: Hyderabad (Onsite)

Notice Period: Immediate to 15 days

Experience: 5 - 13 years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

  • Client onboarding from a data perspective
  • Understanding complex insurance data flows
  • Designing secure, scalable ingestion pipelines
  • Establishing strong data modeling and governance standards

This role sits at the intersection of technology, data architecture, security, and business onboarding.


Key Responsibilities

  • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
  • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
  • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
  • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
  • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
  • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
  • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
  • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
  • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
  • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
  • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards

Required Technical Skills

  • Core Stack: Python, advanced SQL (complex joins, window functions, performance tuning), and PySpark; a window-function sketch follows this list
  • Platforms: Azure, AWS, Databricks, Snowflake
  • ETL / Orchestration: Airflow or similar frameworks
  • Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
  • Visualization Exposure: Power BI
  • Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
  • Integrations: APIs, real-time data streaming, ML model integration exposure
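As a tiny illustration of the window-function skill called out above, here is a self-contained sketch using Python's sqlite3 module (window functions need SQLite 3.25+); the claims table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (contract_id TEXT, claim_date TEXT, amount REAL);
    INSERT INTO claims VALUES
        ('C1', '2024-01-05', 100.0),
        ('C1', '2024-02-10', 250.0),
        ('C2', '2024-01-20',  80.0);
""")

# Running total of claim amounts per contract, ordered by date.
rows = conn.execute("""
    SELECT contract_id,
           claim_date,
           amount,
           SUM(amount) OVER (
               PARTITION BY contract_id
               ORDER BY claim_date
           ) AS running_total
    FROM claims
    ORDER BY contract_id, claim_date
""").fetchall()

for row in rows:
    print(row)
```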

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Strong ability to align technical solutions with business objectives
  • Excellent communication and stakeholder management skills

What We Offer

  • Direct collaboration with the core US data leadership team
  • High ownership and trust to manage the function end-to-end
  • Exposure to a global environment with advanced tools and best practices
Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snowflake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


PhotonMatters
Posted by Human Resource
Remote only
4 - 13 yrs
₹8L - ₹20L / yr
Python
ETL
Spark
Amazon Web Services (AWS)
ELT
+2 more

 

 

 

Job Title: Data Engineer

Experience: 4–14 Years

Work Mode: Remote

Employment Type: Full-Time

 

Position Overview:

We are looking for highly experienced Senior Data Engineers to design, architect, and lead scalable, cloud-based data platforms on AWS. The role involves building enterprise-grade data pipelines, modernizing legacy systems, and developing high-performance scoring engines and analytics solutions, while collaborating closely with architecture, analytics, risk, and business teams to deliver secure, reliable, and scalable data solutions.

 

Key Responsibilities:

• Design and build scalable data pipelines for financial and customer data

• Build and optimize scoring engines (credit, risk, fraud, customer scoring); a minimal rule-based sketch follows this list

• Design, develop, and optimize complex ETL/ELT pipelines (batch & real-time)

• Ensure data quality, governance, reliability, and compliance standards

• Optimize large-scale data processing using SQL, Spark/PySpark, and cloud technologies

• Lead cloud data architecture, cost optimization, and performance tuning initiatives

• Collaborate with Data Science, Analytics, and Product teams to deliver business-ready datasets

• Mentor junior engineers and establish best practices for data engineering
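To illustrate the scoring-engine bullet above, here is a minimal rule-based scoring sketch; the rules, weights, and input fields are hypothetical, not taken from this posting:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    weight: int
    predicate: Callable[[dict], bool]  # fires when the condition holds

RULES = [
    Rule("thin credit file", 30, lambda c: c["credit_history_months"] < 12),
    Rule("high utilization", 25, lambda c: c["utilization"] > 0.8),
    Rule("recent delinquency", 45, lambda c: c["delinquencies_12m"] > 0),
]

def risk_score(customer: dict) -> tuple:
    """Sum the weights of fired rules; higher means riskier."""
    fired = [r.name for r in RULES if r.predicate(customer)]
    score = sum(r.weight for r in RULES if r.name in fired)
    return score, fired

score, reasons = risk_score(
    {"credit_history_months": 8, "utilization": 0.9, "delinquencies_12m": 0}
)
print(f"score={score}, reasons={reasons}")  # score=55
```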

 

Key Requirements:

• Strong programming skills in Python and advanced SQL

• Experience building scalable scoring or rule-based decision engines

• Hands-on experience with Big Data technologies (Spark/PySpark/Kafka)

• Strong expertise in designing ETL/ELT pipelines and data modeling

• Experience with cloud platforms (AWS/Azure) and modern data architectures

• Solid understanding of data warehousing, data lakes, and performance tuning

• Knowledge of CI/CD, version control (Git), and production support best practices

Towards AGI
Posted by Shivani Sharma
Bengaluru (Bangalore), Chennai
5 - 11 yrs
₹20L - ₹25L / yr
Data Transformation Tool (DBT)
Amazon Web Services (AWS)
Apache Airflow
SQL
Data engineering
+4 more

We are looking for an experienced Data Engineer with strong expertise in AWS, DBT, Databricks, and Apache Airflow to join our growing data engineering team.


Immediate joiners preferred


Role Overview 


The ideal candidate will design, develop, and maintain scalable data pipelines and data platforms to support analytics and business intelligence initiatives.


Key Responsibilities

  1. Design and build scalable data pipelines using AWS, Databricks, DBT, and Airflow.
  2. Develop and optimize ETL/ELT workflows for large-scale data processing.
  3. Implement data transformation models using DBT.
  4. Orchestrate workflows using Apache Airflow.
  5. Work with Databricks for big data processing and analytics.
  6. Ensure data quality, reliability, and performance optimization.
  7. Collaborate with data analysts, engineers, and business teams.


Required Skills

  1. Strong experience with AWS data services
  2. Hands-on experience with Databricks
  3. Experience in DBT (Data Build Tool)
  4. Workflow orchestration using Apache Airflow
  5. Strong SQL and Python skills
  6. Experience in data warehousing and ETL pipelines


Generative AI Persona platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹15L - ₹20L / yr
Machine Learning (ML)
Python
ETL
Data Science
ELT
+6 more

Description

We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years’ experience).

 

Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)

 

Kindly note:

  • Location: Pune (Work From Office)
  • Immediate joiners preferred

 

While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort

 

Must have skills

Machine Learning - 6 years

Python - 6 years

ETL (Extract, Transform, Load) - 6 years

SQL - 6 years

Azure - 6 years

 

Digital solutions and services company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹17L - ₹23L / yr
Machine Learning (ML)
Python
ETL
Data Science
SQL
+5 more

Data Scientist or Senior Machine Learning Engineer


We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years' experience).


Please find the detailed Job Description attached for your reference.

We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)


Kindly note:

  • Location: Pune (Work from Office)
  • Immediate joiners preferred


While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort


Must have Skills

  • Machine Learning - 6 Years
  • Python - 6 Years
  • ETL (Extract, Transform, Load) - 6 Years
  • SQL - 6 Years
  • Azure - 6 Years


Request you to share relevant profiles at the earliest. Looking forward to your support.

Ekloud INC
Posted by ashwini rathod
india
1 - 15 yrs
₹3L - ₹24L / yr
Salesforce
Salesforce development
JavaScript
LWC
Salesforce Apex
+11 more

Salesforce Developer


Location: Onsite (Mumbai and Bangalore)


Resources should have banking domain experience.


1. Salesforce development Engineer (1 - 3 Years) 

2. Salesforce development Engineer (3 - 5 Years) 

3. Salesforce development Engineer (5 - 8 Years) 


Job description. 


----------------------------------------------------------------------------


Technical Skills:


Strong hands-on frontend development using JavaScript and LWC

Expertise in backend development using Apex, Flows, Async Apex

Understanding of Database concepts: SOQL, SOSL and SQL

Hands-on experience in API integration using SOAP, REST APIs, and GraphQL

Experience with ETL tools, data migration, and data governance

Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework

Follow an agile, iterative execution model using CI/CD tools such as Azure DevOps, GitLab, and Bitbucket

Should have worked with at least one programming language (Java, Python, or C++) and have a good understanding of data structures

Preferred qualifications


Graduate degree in engineering

Experience developing with India stack

Experience in fintech or banking domain

----------------------------------------------------------------------------

 Skill details. 


1. Salesforce Fundamentals


Strong understanding of Salesforce core architecture

Objects (Standard vs Custom)

Fields, relationships (Lookup, Master-Detail)

Data model basics and record lifecycle

Awareness of declarative vs programmatic capabilities and when to use each

2. Salesforce Security Model

End-to-end understanding of Salesforce security layers, especially:

Record visibility when a record is created

Org-Wide Defaults (OWD) and their impact

Role Hierarchy and how it enables upward data access

Difference between Profiles, Permission Sets, and Sharing Rules

Ability to explain how Salesforce ensures that records are not visible to unauthorized users by default and how access is extended

3. Apex Triggers

Clear distinction between:

Before Triggers (before insert, before update)

Use cases such as validation and field updates

After Triggers (after insert, after update)

Use cases such as related record updates or integrations

Understanding of trigger context variables and best practices (bulkification, avoiding recursion)

4. Platform Events / Event-Driven Architecture

Knowledge of Platform Events and their use in decoupled, event-driven solutions

Understanding of real-time or near real-time notification use cases (e.g., UI alerts, pop-up style notifications)

Ability to position Platform Events versus alternatives (Streaming API, Change Data Capture)

5. Lightning Data Access (Wire Method)

Understanding of the @wire mechanism in Lightning Web Components (LWC)

Discussion point:

Whether records (e.g., AppX records) can be updated using the wire method

Awareness that @wire is primarily read/reactive and updates typically require imperative Apex calls

Clear articulation of reactive vs imperative data handling

6. Integrations Experience

Ability to articulate hands-on integration experience, including:

REST/SOAP API integrations

Inbound vs outbound integrations

Authentication mechanisms (OAuth, Named Credentials)

Use of Apex callouts, Platform Events, or middleware

Clarity on integration patterns and error handling approaches

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
5 - 8 yrs
₹11L - ₹20L / yr
PySpark
Apache Kafka
Data architecture
Amazon Web Services (AWS)
EMR
+32 more

JOB DETAILS:

* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 5-8 years

* Location: Hyderabad

 

Job Summary

We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.


Key Responsibilities

ETL Pipeline Development & Optimization

  • Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
  • Optimize data pipelines for performance, scalability, fault tolerance, and reliability.

Big Data Processing

  • Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka (see the sketch below).
  • Ensure fault-tolerant, scalable, and high-performance data processing systems.
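
For context, a minimal PySpark Structured Streaming job of the kind described above might look like the sketch below; the broker address, topic name, and S3 paths are placeholders, and the job assumes the spark-sql-kafka connector package is available.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and land
# it in a data lake path. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka payloads arrive as binary; cast before parsing downstream
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://my-bucket/raw/events/")         # placeholder path
    .option("checkpointLocation", "s3a://my-bucket/chk/")  # needed for fault tolerance
    .start()
)
query.awaitTermination()
```

The checkpoint location is what makes the pipeline fault-tolerant: on restart, Spark resumes from the last committed Kafka offsets.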

Cloud Infrastructure Development

  • Build and manage scalable, cloud-native data infrastructure on AWS.
  • Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.

Real-Time & Batch Data Integration

  • Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
  • Ensure consistency, data quality, and a unified view across multiple data sources and formats.

Data Analysis & Insights

  • Partner with business teams and data scientists to understand data requirements.
  • Perform in-depth data analysis to identify trends, patterns, and anomalies.
  • Deliver high-quality datasets and present actionable insights to stakeholders.

CI/CD & Automation

  • Implement and maintain CI/CD pipelines using Jenkins or similar tools.
  • Automate testing, deployment, and monitoring to ensure smooth production releases.

Data Security & Compliance

  • Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
  • Implement data governance practices ensuring data integrity, security, and traceability.

Troubleshooting & Performance Tuning

  • Identify and resolve performance bottlenecks in data pipelines.
  • Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.

Collaboration & Cross-Functional Work

  • Work closely with engineers, data scientists, product managers, and business stakeholders.
  • Participate in agile ceremonies, sprint planning, and architectural discussions.


Skills & Qualifications

Mandatory (Must-Have) Skills

  1. AWS Expertise
  • Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
  • Strong understanding of cloud-native data architectures.
  2. Big Data Technologies
  • Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
  • Experience with Apache Spark and Apache Kafka in production environments.
  3. Data Frameworks
  • Strong knowledge of Spark DataFrames and Datasets.
  4. ETL Pipeline Development
  • Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
  5. Database Modeling & Data Warehousing
  • Expertise in designing scalable data models for OLAP and OLTP systems.
  6. Data Analysis & Insights
  • Ability to perform complex data analysis and extract actionable business insights.
  • Strong analytical and problem-solving skills with a data-driven mindset.
  7. CI/CD & Automation
  • Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
  • Familiarity with automated testing and deployment workflows.

 

Good-to-Have (Preferred) Skills

  • Knowledge of Java for data processing applications.
  • Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
  • Familiarity with data governance frameworks and compliance tooling.
  • Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
  • Exposure to cost optimization strategies for large-scale cloud data platforms.

 

Skills: Big Data, Scala Spark, Apache Spark, ETL pipeline development

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Hyderabad

Note: If a candidate is a short joiner, based in Hyderabad, and fits within the approved budget, we will proceed with an offer

F2F Interview: 14th Feb 2026

3 days in office, Hybrid model.

 


Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Kochi (Cochin), Trivandrum
4 - 6 yrs
₹11L - ₹17L / yr
Amazon Web Services (AWS)
Python
Data engineering
SQL
ETL
+22 more

JOB DETAILS:

* Job Title: Associate III - Data Engineering

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4-6 years

* Location: Trivandrum, Kochi

Job Description

Job Title:

Data Services Engineer – AWS & Snowflake

 

Job Summary:

As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.

You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.

Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.

 

Responsibilities:

• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema)

• Optimize query performance using clustering keys, materialized views, and caching

• Develop and maintain Snowflake data warehouses and data marts.

• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks); see the sketch after this list.

• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)

• Utilize Snowpark and Python/Java for complex transformations

• Implement RBAC, data masking, and row-level security.

• Optimize data storage and retrieval for performance and cost-efficiency.

• Collaborate with stakeholders to gather data requirements and deliver solutions.

• Ensure data quality, governance, and compliance with industry standards.

• Monitor, troubleshoot, and resolve data pipeline and performance issues.

• Document data architecture, processes, and best practices.

• Support data migration and integration from various sources.
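
As an illustration of the Streams-and-Tasks part of the workflow bullet above, the sketch below sets up change capture and a scheduled merge through the Snowflake Python connector. The account details and all object names (orders_stream, merge_orders, etc.) are hypothetical; in practice credentials would come from a secrets manager.

```python
# Sketch: use the Snowflake Python connector to set up a Stream + Task
# pair for incremental ELT. Credentials and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# A stream records change data (inserts/updates/deletes) on the source table
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw_orders")

# A task periodically merges the captured changes into the curated table
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    AS
      INSERT INTO curated_orders
      SELECT * FROM orders_stream WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks are created suspended
conn.close()
```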

 

Qualifications:

• Bachelor’s degree in Computer Science, Information Technology, or a related field.

• 3 to 4 years of hands-on experience in data engineering or data services.

• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).

• Strong expertise in Snowflake architecture, development, and optimization.

• Proficiency in SQL and Python for data manipulation and scripting.

• Solid understanding of ETL/ELT processes and data modeling.

• Experience with data integration tools and orchestration frameworks.

• Excellent analytical, problem-solving, and communication skills.

 

Preferred Skills:

• AWS Glue, AWS Lambda, Amazon Redshift

• Snowflake Data Warehouse

• SQL & Python

 

Skills: AWS Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse

 

Must-Haves

AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)

Skills: AWS, AWS Lambda, Snowflake, Data engineering, Snowpipe, Data integration tools, orchestration framework

Relevant experience: 4 - 6 years

Python is mandatory

 

******

Notice period - 0 to 15 days only (Feb joiners’ profiles only)

Location: Kochi

F2F Interview 7th Feb

 

 

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 10 yrs
₹8L - ₹20L / yr
Automated testing
Amazon Web Services (AWS)
Python
Test Automation (QA)
AWS CloudFormation
+25 more

JOB DETAILS:

* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4 -10 years

* Location: Hyderabad

Job Description

Responsibilities:

  • Develop, maintain, and execute automation test scripts using Python (see the sketch after this list).
  • Build reliable and reusable test automation frameworks for web and cloud-based applications.
  • Work with AWS cloud services for test execution, environment management, and integration needs.
  • Perform functional, regression, and integration testing as part of the QA lifecycle.
  • Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
  • Participate in requirement review, test planning, and strategy discussions.
  • Contribute to CI/CD setup and integration of automation suites.
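
A minimal pytest-based API regression check, of the kind these responsibilities describe, might look like the sketch below; the base URL and endpoints are hypothetical and would normally come from environment config.

```python
# Minimal pytest sketch for an API regression check. The base URL and
# endpoints are hypothetical; real suites would load them from config.
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder environment URL

@pytest.fixture(scope="session")
def session():
    # One shared HTTP session keeps connection reuse across tests
    with requests.Session() as s:
        s.headers.update({"Accept": "application/json"})
        yield s

def test_health_endpoint_returns_ok(session):
    resp = session.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200

@pytest.mark.parametrize("user_id", [1, 2, 3])
def test_user_lookup_has_expected_fields(session, user_id):
    resp = session.get(f"{BASE_URL}/users/{user_id}", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert {"id", "name"} <= body.keys()  # regression guard on the contract
```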

 

Required Experience:

  • Strong hands-on experience in Automation Testing.
  • Proficiency in Python for automation scripting and framework development.
  • Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
  • Good knowledge of QA methodologies, SDLC/STLC, and defect management.
  • Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
  • Experience with Git or other version control systems.

 

Good to Have:

  • API testing experience (REST, Postman, REST Assured).
  • Knowledge of Docker/Kubernetes.
  • Exposure to Agile/Scrum environment.

 

Skills: Automation testing, Python, Java, ETL, AWS

 

QAgile Services
Posted by Radhika Chotai
Remote only
2 - 4 yrs
₹3L - ₹5L / yr
PowerBI
Data modeling
ETL
Spark
SQL
+1 more

Microsoft Fabric, Power BI, Data modelling, ETL, Spark SQL

Remote work: 5-7 hours

Rate: ₹450 per hour

Lower Parel
2 - 4 yrs
₹6L - ₹7.2L / yr
React.js
Next.js
Node.js
GraphQL
RESTful APIs
+22 more

Senior Full Stack Developer – Analytics Dashboard

Job Summary

We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.

The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.

Key Responsibilities

  • Design and develop a full-stack web application using modern technologies.
  • Build scalable backend APIs to handle data ingestion, processing, and storage.
  • Develop interactive dashboards and data visualisations for business reporting.
  • Implement secure user authentication and role-based access.
  • Integrate with third-party APIs using OAuth and REST protocols.
  • Design efficient database schemas for analytical workloads.
  • Implement background jobs and scheduled tasks for data syncing.
  • Ensure performance, scalability, and reliability of the system.
  • Write clean, maintainable, and well-documented code.
  • Collaborate with product and design teams to translate requirements into features.

Required Technical Skills

Frontend

  • Strong experience with React.js
  • Experience with Next.js
  • Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
  • Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)

Backend

  • Strong experience with Node.js (Express or NestJS)
  • REST and/or GraphQL API development
  • Background job systems (cron, queues, schedulers)
  • Experience with OAuth-based integrations

Database

  • Strong experience with PostgreSQL
  • Data modelling and performance optimisation
  • Writing complex analytical SQL queries

DevOps / Infrastructure

  • Cloud platforms (AWS)
  • Docker and basic containerisation
  • CI/CD pipelines
  • Git-based workflows

Experience & Qualifications

  • 5+ years of professional full stack development experience.
  • Proven experience building production-grade web applications.
  • Prior experience with analytics, dashboards, or data platforms is highly preferred.
  • Strong problem-solving and system design skills.
  • Comfortable working in a fast-paced, product-oriented environment.

Nice to Have (Bonus Skills)

  • Experience with data pipelines or ETL systems.
  • Knowledge of Redis or caching systems.
  • Experience with SaaS products or B2B platforms.
  • Basic understanding of data science or machine learning concepts.
  • Familiarity with time-series data and reporting systems.
  • Familiarity with Meta Ads / Google Ads APIs

Soft Skills

  • Strong communication skills.
  • Ability to work independently and take ownership.
  • Attention to detail and focus on code quality.
  • Comfortable working with ambiguous requirements.

Ideal Candidate Profile (Summary)

A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.

Intineri infosol Pvt Ltd
Posted by Adil Saifi
Remote only
5 - 8 yrs
₹5L - ₹12L / yr
ETL
EDI
HIPAA
PHI
Healthcare
+1 more

Key Responsibilities:

Design and develop ETL processes for claims, enrollment, provider, and member data

Handle EDI transactions (837, 835, 834) and health plan system integrations

Build data feeds for regulatory reporting (HEDIS, Stars, Risk Adjustment)

Troubleshoot data quality issues and implement data validation frameworks
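
As a sketch of the validation-framework idea in the last responsibility, the snippet below runs rule-based checks over a toy claims dataset with pandas; the column names and rules are hypothetical, not an X12 or HIPAA specification.

```python
# Illustrative data-validation check of the kind a claims ETL might run
# before loading. Column names and rules are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": ["C1", "C2", "C3"],
    "member_id": ["M10", None, "M12"],
    "billed_amount": [120.0, -5.0, 300.0],
})

def validate_claims(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per failed rule so failures can be routed to triage."""
    failures = []
    missing_member = df[df["member_id"].isna()]
    if not missing_member.empty:
        failures.append(missing_member.assign(rule="member_id is required"))
    bad_amount = df[df["billed_amount"] <= 0]
    if not bad_amount.empty:
        failures.append(bad_amount.assign(rule="billed_amount must be positive"))
    return pd.concat(failures) if failures else df.iloc[0:0]

print(validate_claims(claims)[["claim_id", "rule"]])
```

Returning one row per failed rule makes it easy to route failures into a triage queue instead of failing the whole load.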

Required Experience & Skills:

5+ years of ETL development experience

Minimum 3 years in Healthcare / Health Plan / Payer environment

Strong expertise in SQL and ETL tools (Informatica, SSIS, Talend)

Deep understanding of health plan operations (claims, eligibility, provider networks)

Experience with healthcare data standards (X12 EDI, HL7)

Strong knowledge of HIPAA compliance and PHI handling

Euphoric Thought Technologies
Noida
2 - 4 yrs
₹8L - ₹15L / yr
SQL
ETL
Data modeling
Business Intelligence (BI)

Position Overview:

As a BI (Business Intelligence) Developer, you will be responsible for designing, developing, and maintaining the business intelligence solutions that support data analysis and reporting. You will collaborate with business stakeholders, analysts, and data engineers to understand requirements and translate them into efficient and effective BI solutions. The role involves working with various data sources, designing data models, assisting ETL (Extract, Transform, Load) processes, and developing interactive dashboards and reports.

Key Responsibilities:

1. Requirement Gathering: Collaborate with business stakeholders to understand their data analysis and reporting needs. Translate these requirements into technical specifications and develop appropriate BI solutions.

2. Data Modelling: Design and develop data models that effectively represent the underlying business processes and facilitate data analysis and reporting. Ensure data integrity, accuracy, and consistency within the data models.

3. Dashboard and Report Development: Design, develop, and deploy interactive dashboards and reports using Sigma Computing.

4. Data Integration: Integrate data from various systems and sources to provide a comprehensive view of business performance. Ensure data consistency and accuracy across different data sets.

5. Performance Optimization: Identify performance bottlenecks in BI solutions and optimize query performance, data processing, and report rendering. Continuously monitor and fine-tune the performance of BI applications.

6. Data Governance: Ensure compliance with data governance policies and standards. Implement appropriate security measures to protect sensitive data.

7. Documentation and Training: Document the technical specifications, data models, ETL processes, and BI solution configurations.

8. Ensure that the proposed solutions meet business needs and requirements.

9. Create and own Business/Functional Requirement Documents.

10. Monitor and track project milestones and deliverables.

11. Submit project deliverables, ensuring adherence to quality standards.

Qualifications and Skills:

1. Master's/Bachelor's degree in IT or a relevant field, with a minimum of 2-4 years of experience in Business Analysis or a related field.

2. Proven experience as a BI Developer or similar role.

3. Strong analytical and conceptual thinking skills, with demonstrated experience managing projects on implementation of platform solutions.

4. Excellent planning, organizational, and time management skills.

5. Strong understanding of data warehousing concepts, dimensional modelling, and ETL processes.

6. Proficiency in SQL and Snowflake for data extraction, manipulation, and analysis.

7. Experience with one or more BI tools such as Sigma Computing.

8. Knowledge of data visualization best practices and the ability to create compelling data visualizations.

9. Solid problem-solving and analytical skills with a detail-oriented mindset.

10. Strong communication and interpersonal skills to collaborate effectively with different stakeholders.

11. Ability to work independently and manage multiple priorities in a fast-paced environment.

12. Knowledge of data governance principles and security best practices.

13. Experience managing implementation projects of platform solutions for U.S. clients is preferable.

14. Exposure to the U.S. debt collection industry is a plus.

leading digital testing boutique firm

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
SQL
Software Testing (QA)
Data modeling
ETL
Data extraction
+14 more

Review Criteria

  • Strong Data / ETL Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, loading validation, reconciliation, working across BI / Analytics / Data Warehouse / e-Governance platforms
  • Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
  • Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
  • Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
  • Hands-on experience with ETL tools and SQL-based data validation is mandatory (Working knowledge or hands-on exposure to Redshift and/or Qlik will be considered sufficient)
  • Must hold a Bachelor's degree (B.E./B.Tech) or a master's degree (M.Tech/MCA/M.Sc/MS)
  • Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
  • Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)


Job Specific Criteria

  • CV Attachment is mandatory
  • Do you have experience working on Government projects/companies, mention brief about project?
  • Do you have experience working on enterprise projects/companies, mention brief about project?
  • Please mention the names of 2 key projects you have worked on related to Data Warehouse / ETL / BI testing?
  • Do you hold any ISTQB or Data / BI certifications (Power BI, Snowflake, AWS, etc.)?
  • Do you have exposure to BI tools such as Qlik?
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5 years' experience in Data Testing across BI/Analytics platforms, with at least 2 large-scale enterprise Data Warehouse/Analytics/e-Governance programs
  • Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes
  • Proficient in analyzing business requirements and data mapping specifications (BRDs, Data Models, Source-to-Target Mappings, User Stories, Reports, Dashboards) to define comprehensive test scenarios and test cases
  • Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies ensuring accuracy, consistency, and performance of data pipelines
  • Ability to test and validate data migrated from an old platform to an upgraded platform and ensure the completeness and correctness of the migration
  • Experience conducting tests of migrated data and defining test scenarios and test cases for the same
  • Experience with BI tools like Qlik, ETL platforms, Data Lake platforms, and Redshift to support end-to-end validation
  • Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations





Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 9 yrs
₹15L - ₹25L / yr
Data engineering
Apache Kafka
Python
Amazon Web Services (AWS)
AWS Lambda
+11 more

Job Details

- Job Title: Lead I - Data Engineering 

- Industry: Global digital transformation solutions provider

- Domain - Information technology (IT)

- Experience Required: 6-9 years

- Employment Type: Full Time

- Job Location: Pune

- CTC Range: Best in Industry


Job Description

Job Title: Senior Data Engineer (Kafka & AWS)

Responsibilities:

  • Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
  • Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
  • Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry (see the sketch after this list).
  • Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
  • Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
  • Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
  • Uphold data security, governance, and compliance standards across all data operations.

 

Requirements:

  • Minimum of 5 years of experience in Data Engineering or related roles.
  • Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
  • Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
  • Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
  • Excellent problem-solving, communication, and collaboration skills.
  • Flexibility to write production-quality code in both Python and Java as required.

 

Skills: AWS, Kafka, Python


Notice period - 0 to 15 days only

Ekloud INC
Posted by ashwini rathod
Remote only
6 - 20 yrs
₹20L - ₹30L / yr
CPQ
Billing
Sales
DevOps
APL
+3 more

Hiring : Salesforce CPQ Developer 


Experience : 6+ years

Shift timings : 7:30PM to 3:30AM

Location : India (Remote) 


Key skills: SteelBrick CPQ, Billing, Sales Cloud, LWC, Apex integrations, DevOps, APL, ETL tools


Design scalable and efficient Salesforce Sales Cloud solutions that meet best practices and business requirements.


Lead the technical design and implementation of Sales Cloud features, including CPQ, Partner Portal, Lead, Opportunity and Quote Management.


Provide technical leadership and mentorship to development teams. Review and approve technical designs, code, and configurations.

Work with business stakeholders to gather requirements, provide guidance, and ensure that solutions meet their needs. Translate business requirements into technical specifications.


Oversee and guide the development of custom Salesforce applications, including custom objects, workflows, triggers, and LWC/ Apex code.

Ensure data quality, integrity, and security within the Salesforce platform. Implement data migration strategies and manage data integrations.


Establish and enforce Salesforce development standards, best practices, and governance processes. Monitor and optimize the performance of Salesforce solutions, including addressing performance issues and ensuring efficient use of resources.


Stay up-to-date with Salesforce updates and new features. Propose and implement innovative solutions to enhance Salesforce capabilities and improve business processes.

Document design and code consistently throughout the design/development process

Diagnose, resolve, and document system issues to support project team.


Research questions with respect to both maintenance and development activities. 

Perform post-migration system review and ongoing support.

Prepare and deliver demonstrations/presentations to client audiences, professional seniors/peers


Adhere to best practices constantly around code/data source control, ticket tracking, etc. during the course of an assignment

 

Skills/Experience:


Bachelor’s degree in Computer Science, Information Systems, or related field.

6+ years of experience in architecting and designing full stack solutions on the Salesforce Platform.

Must have 3+ years of Experience in architecting, designing and developing Salesforce CPQ (SteelBrick CPQ) and Billing solutions.

Minimum 3+ years of Lightning Framework development experience (Aura & LWC).

CPQ Specialist and Salesforce Platform Developer II certification is required.


Extensive development experience with Apex Classes, Triggers, Visualforce, Lightning, Batch Apex, Salesforce DX, Apex Enterprise Patterns, Apex Mocks, Force.com API, Visual Flows, Platform Events, SOQL, Salesforce APIs, and other programmatic solutions on the Salesforce platform.


Experience in debugging Apex CPU errors and SOQL query exceptions, refactoring code, and working with complex implementations involving features like asynchronous processing


Clear insight of Salesforce platform best practices, coding and design guidelines and governor limits.

Experience with Development Tools and Technologies: Visual Studio Code, GIT, and DevOps Setup to automate deployment/releases.

Knowledge of integration architecture as well as third-party integration tools and ETL (such as Informatica, Workato, Boomi, MuleSoft, etc.) with Salesforce


Experience in Agile development, iterative development, and proof of concepts (POCs).

Excellent written and verbal communication skills with ability to lead technical projects and manage multiple priorities in a fast-paced environment.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
Databricks
Python
SQL
PySpark
Amazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept with ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns, frameworks, and automation practices.


 

Additional Comments:

# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking highly skilled Data Engineers with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

Design, build, and maintain scalable data pipelines using Databricks and PySpark (a minimal sketch follows after this section).

Develop and optimize complex SQL queries for data extraction, transformation, and analysis.

Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).

Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.

Ensure data quality, performance, and reliability across data workflows.

Participate in code reviews, data architecture discussions, and performance optimization initiatives.

Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
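
A minimal sketch of the kind of Databricks/PySpark transformation described above, assuming placeholder table names and paths:

```python
# Sketch: a Databricks-style PySpark transformation using a window
# function, then writing a Delta table. Paths and names are placeholders.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

orders = spark.read.parquet("s3a://my-bucket/raw/orders/")  # placeholder path

# Keep only each customer's most recent order using a window function
w = Window.partitionBy("customer_id").orderBy(F.col("order_ts").desc())
latest = (
    orders.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Write as a Delta table so downstream jobs get ACID reads
latest.write.format("delta").mode("overwrite").saveAsTable("curated.latest_orders")
```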


Key Skills:

Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).

Excellent problem-solving, communication, and collaboration skills.

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)

Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals (see the sketch after this list).
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.
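
As a sketch of the predictive-modeling fundamentals referenced in the list above, the snippet below fits a baseline classifier on synthetic data and scores it on a holdout set; everything in it is illustrative.

```python
# Minimal predictive-modeling sketch: train/test split, a baseline
# classifier, and a holdout score. Features and target are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")  # report on unseen data, not the training set
```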

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Delhi
3 - 6 yrs
₹10L - ₹25L / yr
Project Management
SQL
JIRA
SQL Query Analyzer
confluence
+23 more

Required Skills: Excellent Communication Skills, Project Management, SQL queries, Expertise with Tools such as Jira, Confluence etc.


Criteria:

  • Candidate must have Project management experience.
  • Candidate must have strong experience in accounting principles, financial workflows, and R2R (Record to Report) processes.
  • Candidate should have an academic background in Commerce or MBA Finance.
  • Candidates must be from a Fintech/Financial services background only.
  • Good experience with SQL and must have MIS experience.
  • Must have experience in Treasury Module.
  • 3+ years of implementation experience is required.
  • Candidate should have Hands-on experience with tools such as Jira, Confluence, Excel, and project management platforms.
  • Need candidate from Bangalore and Delhi/NCR ONLY.
  • Need Immediate joiner or candidate with up to 30 Days’ Notice period.

 

Description

Position Overview

We are looking for an experienced Implementation Lead with deep expertise in financial workflows, R2R processes, and treasury operations to drive client onboarding and end-to-end implementations. The ideal candidate will bring a strong Commerce / MBA Finance background, proven project management experience, and technical skills in SQL and ETL to ensure seamless deployments for fintech and financial services clients.


Key Responsibilities

  • Lead end-to-end implementation projects for enterprise fintech clients
  • Translate client requirements into detailed implementation plans and configure solutions accordingly.
  • Write and optimize complex SQL queries for data analysis, validation, and integration
  • Oversee ETL processes – extract, transform, and load financial data across systems
  • Collaborate with cross-functional teams including Product, Engineering, and Support
  • Ensure timely, high-quality delivery across multiple stakeholders and client touchpoints
  • Document processes, client requirements, and integration flows in detail.
  • Configure and deploy company solutions for R2R, treasury, and reporting workflows.


Required Qualifications

  • Bachelor’s degree Commerce background / MBA Finance (mandatory).
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong expertise in accounting principles, R2R (Record-to-Report), treasury, and financial workflows.
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries)
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success


Qualifications

  • Prior experience implementing financial automation tools (e.g., SAP, Oracle, Anaplan, Blackline)
  • Familiarity with API integrations and basic data mapping
  • Experience in agile/scrum-based implementation environments
  • Exposure to reconciliation, book closure, AR/AP, and reporting systems
  • PMP, CSM, or similar certifications



Skills & Competencies

Functional Skills

  • Financial process knowledge (e.g., reconciliation, accounting, reporting)
  • Business analysis and solutioning
  • Client onboarding and training
  • UAT coordination
  • Documentation and SOP creation

 

Project Skills

  • Project planning and risk management
  • Task prioritization and resource coordination
  • KPI tracking and stakeholder reporting

 

Soft Skills

  • Cross-functional collaboration
  • Communication with technical and non-technical teams
  • Attention to detail and customer empathy
  • Conflict resolution and crisis management


What We Offer

  • An opportunity to shape fintech implementations across fast-growing companies
  • Work in a dynamic environment with cross-functional experts
  • Competitive compensation and rapid career growth
  • A collaborative and meritocratic culture
Navi Mumbai
4 - 8 yrs
₹8L - ₹10L / yr
Oracle SQL Developer
MySQL
ETL
Database Design
SQL
+1 more

Company Name : Enlink Managed Services

Company Website : https://enlinkit.com/

Location : Turbhe , Navi Mumbai

Shift Time : 12 pm to 9:30 pm

Working Days : 5 Days Working(Sat-Sun Fixed Off)

SQL Developer 

Roles & Responsibilities :

Designing databases and writing stored procedures and complex, dynamic queries in SQL (see the sketch after this section)

Creating Indexes, Views, complex Triggers, effective Functions, and appropriate store procedures to facilitate efficient data manipulation and data consistency

Implementing database architecture, ETL and development activities

Troubleshooting data load, ETL and application support related issues

Demonstrates ability to communicate effectively in both technical and business environments

Troubleshooting failed batch jobs, correcting outstanding issues and resubmitting scheduled jobs to ensure completion

Troubleshoot, optimize, and tune SQL processes and complex SQL queries
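
As a sketch of how these responsibilities look from Python, the snippet below calls a stored procedure and pulls an EXPLAIN plan with mysql-connector; the connection details and the refresh_daily_summary procedure are hypothetical.

```python
# Sketch: calling a stored procedure and inspecting a slow query from
# Python with mysql-connector. Connection details and the procedure
# name (refresh_daily_summary) are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="etl", password="***", database="reporting"
)
cur = conn.cursor()

# Run a stored procedure that a batch job would normally invoke
cur.callproc("refresh_daily_summary", ("2024-01-01",))
conn.commit()

# EXPLAIN is the first stop when tuning a slow query
cur.execute("EXPLAIN SELECT customer_id, SUM(amount) "
            "FROM orders GROUP BY customer_id")
for row in cur.fetchall():
    print(row)  # look at the key/rows columns for index usage

cur.close()
conn.close()
```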

Required Qualifications/Experience

4+ years of experience in the design and optimization of MySQL databases

General database development using MySQL

Advanced level of writing stored procedures, reading query plans, tuning indexes and troubleshooting performance bottlenecks

Troubleshoot, optimize, and tune SQL processes and complex SQL queries

Experienced and versed in creating sophisticated MySQL Server databases to quickly handle complex queries

Problem-solving, analytical, and fluent communication skills

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub (see the sketch after this list).
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
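
One concrete example of the hardening work in the list above: the sketch below checks and enforces S3 Block Public Access with boto3. The bucket name is a placeholder and credentials are assumed to come from the environment.

```python
# Sketch: verify (and, if missing, enforce) S3 Block Public Access on a
# bucket. Bucket name is a placeholder; credentials come from the env.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-data-bucket"  # placeholder

desired = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

try:
    current = s3.get_public_access_block(Bucket=bucket)
    config = current["PublicAccessBlockConfiguration"]
except ClientError:
    config = {}  # no configuration set yet

if config != desired:
    # Remediate drift: apply the full block-public-access configuration
    s3.put_public_access_block(
        Bucket=bucket, PublicAccessBlockConfiguration=desired
    )
    print(f"{bucket}: public access block enforced")
else:
    print(f"{bucket}: already compliant")
```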

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
PRODUCT DEVELOPMENT COMPANY

Agency job
Remote only
7 - 16 yrs
₹15L - ₹20L / yr
Data Analytics
Data Warehouse (DWH)
Business Intelligence (BI)
Data governance
BI/DW
+6 more

EMPLOYMENT TYPE: Full-Time, Permanent


LOCATION: Remote


SHIFT TIMINGS: 11.00 AM - 8:00 PM IST


Role : Lead Data Analyst


Qualifications:


● Bachelor’s or Master’s degree in Computer Science, Data Analytics, Information Systems, or a related field.


● 7–10 years of experience in data operations, data management, or analytics.


● Strong understanding of data governance, ETL processes, and quality control methodologies.


● Hands-on experience with SQL, Excel/Google Sheets, and data visualization tools


● Experience with automation tools like Python script is a plus.


● Must be capable of working independently and delivering stable, efficient and reliable software.


● Excellent written and verbal communication skills in English.


● Experience supporting and working with cross-functional teams in a dynamic environment



Preferred Skills:


● Experience in SaaS, B2B data, or lead intelligence industry.


● Exposure to data privacy regulations (GDPR, CCPA) and compliance practices.


● Ability to work effectively in cross-functional, global, and remote environments.

Tecblic Private LImited
Ahmedabad
5 - 6 yrs
₹5L - ₹15L / yr
Windows Azure
Python
SQL
Data Warehouse (DWH)
Data modeling
+5 more

Job Description: Data Engineer

Location: Ahmedabad

Experience: 5 to 6 years

Employment Type: Full-Time



We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.



Responsibilities


● Design and optimize data pipelines for various data sources


● Design and implement efficient data storage and retrieval mechanisms


● Develop data modelling solutions and data validation mechanisms


● Troubleshoot data-related issues and recommend process improvements


● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions


● Coach and mentor junior data engineers in the team




Skills Required: 


● Minimum 4 years of experience in data engineering or related field


● Proficient in designing and optimizing data pipelines and data modeling


● Strong programming expertise in Python


● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive


● Extensive experience with cloud data services such as AWS, Azure, and GCP


● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing


● Knowledge of distributed computing and storage systems


● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage


● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities




Qualifications


  • Bachelor's degree in Computer Science, Data Science, or a computer-related field


Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
4 - 8 yrs
₹10L - ₹13L / yr
SQL
Databricks
Power BI
Windows Azure
Data engineering

Review Criteria

  • Strong Senior Data Engineer profile
  • 4+ years of hands-on Data Engineering experience
  • Must have experience owning end-to-end data architecture and complex pipelines
  • Must have advanced SQL capability (complex queries, large datasets, optimization)
  • Must have strong Databricks hands-on experience
  • Must be able to architect solutions, troubleshoot complex data issues, and work independently
  • Must have Power BI integration experience
  • The CTC structure is 80% fixed and 20% variable


Preferred

  • Has worked with call center data and understands the nuances of data generated in call centers
  • Experience implementing data governance, quality checks, or lineage frameworks
  • Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture


Job Specific Criteria

  • CV Attachment is mandatory
  • Are you comfortable integrating with Power BI datasets?
  • We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?


Role & Responsibilities

We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.

 

Key Responsibilities-

  • Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
  • Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows (see the sketch after this list).
  • Architect and deliver high-performance ETL/ELT processes across cloud platforms.
  • Implement and enforce data governance standards, including data quality, lineage, and access control.
  • Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
  • Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
  • Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
  • Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
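
As a rough illustration of the optimisation work described above, here is a minimal PySpark sketch of the kind one might run on Databricks, assuming Delta tables and call-center-style data; all table and column names are hypothetical:

```python
# Illustrative only: tuning a skewed join on Databricks with Delta tables.
# Table and column names are assumptions, not details from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("optimisation-sketch").getOrCreate()

calls = spark.read.table("silver.call_records")   # large fact table
agents = spark.read.table("silver.agents")        # small dimension table

# Broadcast the small side so Spark avoids shuffling the large table
enriched = calls.join(F.broadcast(agents), on="agent_id", how="left")

# Prune early: filter on the partition column before aggregating
summary = (
    enriched.where(F.col("call_date") >= "2024-01-01")
            .groupBy("call_date", "team")
            .agg(F.avg("handle_time_sec").alias("avg_handle_time"))
)

summary.write.format("delta").mode("overwrite").saveAsTable("gold.daily_handle_time")
```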


Ideal Candidate

  • Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
  • Advanced SQL skills with experience handling large, complex datasets.
  • Strong expertise with Databricks for data engineering workloads.
  • Hands-on experience with major cloud platforms — AWS and Azure.
  • Deep understanding of data architecture, data modelling, and optimisation techniques.
  • Familiarity with BI and reporting environments such as Power BI.
  • Strong analytical and problem-solving abilities with a focus on data quality and governance.
  • Proficiency in Python or another programming language is a plus.
Non-Banking Financial Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
1 - 2 yrs
₹5L - ₹6.1L / yr
SQL
Databricks
Power BI
Data engineering
ETL

ROLES AND RESPONSIBILITIES:

We are looking for a Junior Data Engineer who will work under guidance to support data engineering tasks, perform basic coding, and actively learn modern data platforms and tools. The ideal candidate should have foundational SQL knowledge and basic exposure to Databricks. This role is designed for early-career professionals who are eager to grow into full data engineering responsibilities while contributing to data pipeline operations and analytical support.


Key Responsibilities-

  • Support the development and maintenance of data pipelines and ETL/ELT workflows under mentorship.
  • Write basic SQL queries and transformations, and assist with Databricks notebook tasks (see the sketch after this list).
  • Help troubleshoot data issues and contribute to ensuring pipeline reliability.
  • Work with senior engineers and analysts to understand data requirements and deliver small tasks.
  • Assist in maintaining documentation, data dictionaries, and process notes.
  • Learn and apply data engineering best practices, coding standards, and cloud fundamentals.
  • Support basic tasks related to Power BI data preparation or integrations as needed.
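
To make the expectation concrete, here is a small self-contained example of the kind of basic SQL this role starts with, run through Python's built-in sqlite3 module; the loan data is made up:

```python
# Basic SQL aggregation, self-contained via sqlite3 so it runs anywhere.
# The sample data is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans (loan_id INTEGER, branch TEXT, amount REAL);
    INSERT INTO loans VALUES (1, 'Pune', 50000), (2, 'Pune', 75000), (3, 'Mumbai', 60000);
""")

# Total and average loan amount per branch
for row in conn.execute(
    "SELECT branch, COUNT(*) AS n, SUM(amount) AS total, AVG(amount) AS avg_amt "
    "FROM loans GROUP BY branch ORDER BY total DESC"
):
    print(row)
```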


IDEAL CANDIDATE:

  • Foundational SQL skills with the ability to write and understand basic queries.
  • Basic exposure to Databricks, data transformation concepts, or similar data tools.
  • Understanding of ETL/ELT concepts, data structures, and analytical workflows.
  • Eagerness to learn modern data engineering tools, technologies, and best practices.
  • Strong problem-solving attitude and willingness to work under guidance.
  • Good communication and collaboration skills to work with senior engineers and analysts.


PERKS, BENEFITS AND WORK CULTURE:

Our people define our passion and our audacious, incredibly rewarding achievements. Bajaj Finance Limited is one of India’s most diversified non-banking financial companies and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations where we’re present in India.

Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
2 - 5 yrs
₹8L - ₹10.7L / yr
SQL Azure
Databricks
ETL
SQL
Data modeling

ROLES AND RESPONSIBILITIES:

We are seeking a skilled Data Engineer who can work independently on data pipeline development, troubleshooting, and optimisation tasks. The ideal candidate will have strong SQL skills, hands-on experience with Databricks, and familiarity with cloud platforms such as AWS and Azure. You will be responsible for building and maintaining reliable data workflows, supporting analytical teams, and ensuring high-quality, secure, and accessible data across the organisation.


KEY RESPONSIBILITIES:

  • Design, develop, and maintain scalable data pipelines and ETL/ELT workflows.
  • Build, optimise, and troubleshoot SQL queries, transformations, and Databricks data processes (see the sketch after this list).
  • Work with large datasets to deliver efficient, reliable, and high-performing data solutions.
  • Collaborate closely with analysts, data scientists, and business teams to support data requirements.
  • Ensure data quality, availability, and security across systems and workflows.
  • Monitor pipeline performance, diagnose issues, and implement improvements.
  • Contribute to documentation, standards, and best practices for data engineering processes.
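
As an illustration of query troubleshooting, here is a self-contained sketch that compares a query plan before and after adding an index, using sqlite3 so it runs as-is; on Databricks the analogous step would be EXPLAIN on a Spark SQL query. Table and column names are invented:

```python
# Query-plan troubleshooting sketch: a full scan becomes an index lookup.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txns (txn_id INTEGER, account_id INTEGER, amount REAL);
    INSERT INTO txns VALUES (1, 101, 250.0), (2, 102, 90.0), (3, 101, 40.0);
""")

query = "SELECT SUM(amount) FROM txns WHERE account_id = 101"

print("Plan without index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(" ", row)   # expect a full table scan

conn.execute("CREATE INDEX idx_txns_account ON txns(account_id)")

print("Plan with index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(" ", row)   # expect an index search instead
```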


IDEAL CANDIDATE:

  • Proven experience as a Data Engineer or in a similar data-focused role (3+ years).
  • Strong SQL skills with experience writing and optimising complex queries.
  • Hands-on experience with Databricks for data engineering tasks.
  • Experience with cloud platforms such as AWS and Azure.
  • Understanding of ETL/ELT concepts, data modelling, and pipeline orchestration.
  • Familiarity with Power BI and data integration with BI tools.
  • Strong analytical and troubleshooting skills, with the ability to work independently.
  • Experience working end-to-end on data engineering workflows and solutions.


PERKS, BENEFITS AND WORK CULTURE:

Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified non-banking financial companies and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations where we’re present in India.

Talent Pro
Posted by Mayank choudhary
Pune
2 - 5 yrs
₹8L - ₹11L / yr
Data modeling
ETL

Strong Data Engineer profile

Mandatory (Experience 1): Must have 2+ years of hands-on Data Engineering experience.

Mandatory (Experience 2): Must have end-to-end experience in building & maintaining ETL/ELT pipelines (not just BI/reporting).

Mandatory (Technical 1): Must have strong SQL capability (complex queries + optimization).

Mandatory (Technical 2): Must have hands-on Databricks experience.

Mandatory (Role Requirement): Must be able to work independently, troubleshoot data issues, and manage large datasets.

Tech AI startup in Bangalore

Agency job
via Recruit Square by Priyanka choudhary
Remote only
4 - 8 yrs
₹12L - ₹18L / yr
pandas
NumPy
MLOps
SQL
ETL

Data Engineer – Validation & Quality


Responsibilities

  • Build rule-based and statistical validation frameworks using Pandas / NumPy (a sketch follows this list).
  • Implement contradiction detection, reconciliation, and anomaly flagging.
  • Design and compute confidence metrics for each evidence record.
  • Automate schema compliance, sampling, and checksum verification across data sources.
  • Collaborate with the Kernel to embed validation results into every output artifact.
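
A minimal sketch of what such a rule-based validation pass might look like with Pandas/NumPy, including a checksum and a naive per-record confidence score; the column names, rules, and scoring scheme are illustrative assumptions:

```python
# Rule-based validation sketch: boolean rule checks, a naive confidence
# metric, and a checksum for transfer verification. All names are invented.
import hashlib
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "record_id": [1, 2, 3],
    "amount": [120.0, -5.0, np.nan],
    "currency": ["USD", "usd", "EUR"],
})

# Each rule returns a boolean Series (True = rule passed)
rules = {
    "amount_present": df["amount"].notna(),
    "amount_non_negative": df["amount"].fillna(0) >= 0,
    "currency_uppercase": df["currency"].str.fullmatch(r"[A-Z]{3}"),
}

# Naive confidence metric: fraction of rules each record passes
checks = pd.DataFrame(rules)
df["confidence"] = checks.mean(axis=1)

# Checksum of the canonicalised frame, e.g. to verify a copy arrived intact
checksum = hashlib.sha256(df.to_csv(index=False).encode()).hexdigest()
print(df, "\nsha256:", checksum)
```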

Requirements

  • 5+ years in data engineering, data quality, or MLOps validation.
  • Strong SQL optimization and ETL background.
  • Familiarity with data lineage, DQ frameworks, and regulatory standards (SOC 2 / GDPR).
Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD

Review Criteria

  • Strong DevOps /Cloud Engineer Profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.); a sketch follows this list.
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.
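
As one small example of the scripting side of this role, here is a hypothetical Python health check against Prometheus's standard /api/v1/query HTTP endpoint; the server address is an assumption:

```python
# Hypothetical monitoring helper: list targets that are down and exit
# non-zero so a CI job or cron alert can act on it.
import sys
import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed address

def down_targets() -> list[str]:
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": "up == 0"},   # instant vector: every target reporting down
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [r["metric"].get("instance", "<unknown>") for r in results]

if __name__ == "__main__":
    down = down_targets()
    if down:
        print("DOWN targets:", ", ".join(down))
        sys.exit(1)
    print("All targets up")
```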


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
Python
SQL
Google Cloud Platform (GCP)
SQL Server
ETL

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes (a sketch follows this list).
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Able to handle a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
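
For illustration, here is a compact, hypothetical ETL sketch in Python: extract from a CSV, transform with pandas, load into a relational store. SQLite keeps the example self-contained; a warehouse such as Snowflake or Redshift would take its place in practice:

```python
# Hypothetical extract-transform-load sketch. The file name and columns
# ("order_id", "order_date") are assumptions for illustration only.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path, parse_dates=["order_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")
    df["order_month"] = df["order_date"].dt.to_period("M").astype(str)
    return df

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    df.to_sql("orders_clean", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(transform(extract("orders.csv")), conn)
```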

 

Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
JK Technosoft Ltd
Posted by Akanksh Gupta
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹30L - ₹42L / yr
Generative AI
GenAI
Python
Flask
FastAPI

We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.


Key Responsibilities :


- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.


- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.


- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance (a minimal sketch follows these responsibilities).


- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.


- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.


- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.


- Implement inter-service communication using gRPC and REST.


- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
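
A minimal FastAPI sketch of the kind of back-end service described above; the endpoint shape and the stubbed agent call are assumptions, and a real system would invoke a LangChain/LangGraph agent where the placeholder sits:

```python
# Minimal FastAPI service sketch for a GenAI agent endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="genai-agent-service")

class AskRequest(BaseModel):
    session_id: str
    question: str

class AskResponse(BaseModel):
    answer: str

def run_agent(question: str) -> str:
    # Placeholder for an LLM/agent invocation (LangChain, LangGraph, etc.)
    return f"echo: {question}"

@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    return AskResponse(answer=run_agent(req.question))

# Run locally with: uvicorn main:app --reload
```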


Required Skills & Qualifications :


- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.


- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.


- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).


- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.


- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.


- Proven experience with system architecture, distributed systems, and microservices.


- Strong familiarity with any cloud infrastructure and deployment practices.


- Data Engineering or Analytics expertise is preferred, e.g., Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), Power BI, Tableau, data modelling, and data warehouse development.


Remote only
6 - 10 yrs
₹8L - ₹15L / yr
Informatica IICS/IDMC
Informatica PowerCenter
ETL
SQL
Data migration

Job Title : Informatica Cloud Developer / Migration Specialist

Experience : 6 to 10 Years

Location : Remote

Notice Period : Immediate


Job Summary :

We are looking for an experienced Informatica Cloud Developer with strong expertise in Informatica IDMC/IICS and experience in migrating from PowerCenter to Cloud.

The candidate will be responsible for designing, developing, and maintaining ETL workflows, data warehouses, and performing data integration across multiple systems.


Mandatory Skills :

Informatica IICS/IDMC, Informatica PowerCenter, ETL Development, SQL, Data Migration (PowerCenter to IICS), and Performance Tuning.


Key Responsibilities :

  • Design, develop, and maintain ETL processes using Informatica IICS/IDMC.
  • Work on migration projects from Informatica PowerCenter to IICS Cloud.
  • Troubleshoot and resolve issues related to mappings, mapping tasks, and taskflows.
  • Analyze business requirements and translate them into technical specifications.
  • Conduct unit testing, performance tuning, and ensure data quality.
  • Collaborate with cross-functional teams for data integration and reporting needs.
  • Prepare and maintain technical documentation.

Required Skills :

  • 4 to 5 years of hands-on experience in Informatica Cloud (IICS/IDMC).
  • Strong experience with Informatica PowerCenter.
  • Proficiency in SQL and data warehouse concepts.
  • Good understanding of ETL performance tuning and debugging.
  • Excellent communication and problem-solving skills.
Bluecopa
Mumbai, Bengaluru (Bangalore), Delhi
3 - 6 yrs
₹14L - ₹15L / yr
JIRA
ETL
Confluence
R2R
Financial analysis

Required Qualifications

  • Bachelor’s degree with a Commerce background / MBA in Finance (mandatory).
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong expertise in accounting principles, R2R (Record-to-Report), treasury, and financial workflows.
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries); a sketch follows this list.
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success
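
To make the CTE/join expectation concrete, here is a self-contained reconciliation-style example run through sqlite3; the scenario and table names are made up:

```python
# CTEs plus a join to surface mismatched balances between two sources.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ledger (entry_id INTEGER, account TEXT, amount REAL);
    CREATE TABLE bank   (entry_id INTEGER, account TEXT, amount REAL);
    INSERT INTO ledger VALUES (1, 'AR', 100.0), (2, 'AP', -40.0);
    INSERT INTO bank   VALUES (1, 'AR', 100.0), (2, 'AP', -45.0);
""")

query = """
WITH ledger_totals AS (
    SELECT account, SUM(amount) AS ledger_amt FROM ledger GROUP BY account
),
bank_totals AS (
    SELECT account, SUM(amount) AS bank_amt FROM bank GROUP BY account
)
SELECT l.account, l.ledger_amt, b.bank_amt, l.ledger_amt - b.bank_amt AS diff
FROM ledger_totals l
JOIN bank_totals b ON b.account = l.account
WHERE l.ledger_amt <> b.bank_amt
"""
for row in conn.execute(query):
    print(row)   # -> ('AP', -40.0, -45.0, 5.0)
```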

Preferred Qualifications

  • Prior experience implementing financial automation tools (e.g., SAP, Oracle, Anaplan, Blackline)
  • Familiarity with API integrations and basic data mapping
  • Experience in agile/scrum-based implementation environments
  • Exposure to reconciliation, book closure, AR/AP, and reporting systems
  • PMP, CSM, or similar certifications
Wissen Technology
Posted by Bharanidharan K
Mumbai
7 - 12 yrs
Best in industry
SQL
SQL Server
Databases
Performance tuning
Stored Procedures

Required Skills and Qualifications :


  • Bachelor’s degree in Computer Science, Information Technology, or a related field. 
  • Proven experience as a Data Modeler or in a similar role at an asset manager or financial firm.
  • Strong understanding of various business concepts related to buy-side financial firms. An understanding of Private Markets (Private Credit, Private Equity, Real Estate, Alternatives) is required.
  • Strong understanding of database design principles and data modeling techniques (e.g., ER modeling, dimensional modeling); a sketch follows this list.
  • Knowledge of SQL and experience with relational databases (e.g., Oracle, SQL Server, MySQL). 
  • Familiarity with NoSQL databases is a plus. 
  • Excellent analytical and problem-solving skills. 
  • Strong communication skills and the ability to work collaboratively. 
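
A toy dimensional-modeling sketch (star schema) executed through sqlite3 so it runs as-is; the entities are invented, and a real buy-side model would cover instruments, positions, funds, and so on:

```python
# Star-schema sketch: two dimensions and one fact table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables
    CREATE TABLE dim_fund (
        fund_key INTEGER PRIMARY KEY,
        fund_name TEXT NOT NULL,
        strategy TEXT               -- e.g. private credit, real estate
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,   -- YYYYMMDD surrogate key
        calendar_date TEXT NOT NULL
    );

    -- Fact table: one row per fund per valuation date
    CREATE TABLE fact_nav (
        fund_key INTEGER REFERENCES dim_fund(fund_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        nav REAL NOT NULL,
        PRIMARY KEY (fund_key, date_key)
    );
""")
print("star schema created:",
      [r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
```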


Preferred Qualifications: 

  • Experience in data warehousing and business intelligence. 
  • Knowledge of data governance practices. 
  • Certification in data modeling or related fields.

Key Responsibilities :

  • Design and develop conceptual, logical, and physical data models based on business requirements. 
  • Collaborate with stakeholders in finance, operations, risk, legal, compliance and front offices to gather and analyze data requirements. 
  • Ensure data models adhere to best practices for data integrity, performance, and security. 
  • Create and maintain documentation for data models, including data dictionaries and metadata. 
  • Conduct data profiling and analysis to identify data quality issues. 
  • Conduct detailed meetings and discussions with business to translate broad business functionality requirements into data concepts, data models and data products.


Nyx Wolves
Remote only
5 - 8 yrs
₹11L - ₹13L / yr
Denodo VDP
Denodo Scheduler
Denodo Data Catalog
SQL Server
Query optimization


💡 Transform Banking Data with Us!


We’re on the lookout for a Senior Denodo Developer (Remote) to shape the future of data virtualization in the banking domain. If you’re passionate about turning complex financial data into actionable insights, this role is for you! 🚀


What You’ll Do:

✔ Build cutting-edge Denodo-based data virtualization solutions

✔ Collaborate with banking SMEs, architects & analysts

✔ Design APIs, data services & scalable models

✔ Ensure compliance with global banking standards

✔ Mentor juniors & drive best practices


💼 What We’re Looking For:

🔹 6+ years of IT experience (3+ years in Denodo)

🔹 Strong in Denodo VDP, Scheduler & Data Catalog

🔹 Skilled in SQL, optimization & performance tuning

🔹 Banking/Financial services domain expertise (CBS, Payments, KYC/AML, Risk & Compliance)

🔹 Cloud knowledge (AWS, Azure, GCP)

📍 Location: Remote


🎯 Experience: 6+ years

👉 “If you thrive in the world of data and want to make banking smarter, faster, and more secure — this is YOUR chance!”


📩 Apply Now:

  • Connect with me here on Cutshort and share your resume/message directly.


Let’s build something great together 🚀


#WeAreHiring #DenodoDeveloper #BankingJobs #RemoteWork #DataVirtualization #FinTechCareers #DataIntegration #TechTalent
