
50+ ETL Jobs in India

Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

CoffeeBeans
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
7 - 9 yrs
Upto ₹32L / yr (Varies)
Python
ETL
Data modeling
CI/CD
Databricks

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.

In this role, you’ll:

  • Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
  • Mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

  • Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
  • Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Collaborate with stakeholders to translate business requirements into technical solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, production-ready code with proper documentation and testing.
  • Contribute to architectural discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.

Must-Have Skills

  • Strong programming skills in Python and advanced SQL expertise.
  • Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
  • Experience with orchestration tools like Airflow (or similar); a minimal DAG sketch follows this list.
  • Familiarity with CI/CD pipelines and Git.
  • Ability to debug, optimize, and scale data pipelines in production.
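For illustration, a minimal sketch of the orchestration pattern referenced in the list above, assuming Airflow 2.x with the standard PythonOperator; the DAG id, task names, and data are placeholders rather than part of any real CoffeeBeans pipeline.

  from datetime import datetime

  from airflow import DAG
  from airflow.operators.python import PythonOperator

  def extract():
      # Placeholder extract step: pull raw records from a source system.
      return [{"id": 1, "amount": 250.0}, {"id": 2, "amount": -10.0}]

  def transform(ti):
      # Read the upstream result via XCom and apply simple business logic.
      rows = ti.xcom_pull(task_ids="extract")
      return [r for r in rows if r["amount"] > 0]

  def load(ti):
      # Placeholder load step: a real pipeline would write to a warehouse/lakehouse.
      print(ti.xcom_pull(task_ids="transform"))

  with DAG(
      dag_id="daily_batch_pipeline",      # illustrative name
      start_date=datetime(2024, 1, 1),
      schedule="@daily",                  # Airflow 2.4+; older 2.x versions use schedule_interval
      catchup=False,
  ) as dag:
      extract_task = PythonOperator(task_id="extract", python_callable=extract)
      transform_task = PythonOperator(task_id="transform", python_callable=transform)
      load_task = PythonOperator(task_id="load", python_callable=load)
      extract_task >> transform_task >> load_task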

Good to Have

  • Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms.
  • Understanding of data governance, quality frameworks, and observability.
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).

Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements, as needed.
Bluecopa
Mumbai, Bengaluru (Bangalore), Delhi
3 - 6 yrs
₹14L - ₹15L / yr
JIRA
ETL
Confluence
R2R
Financial analysis

Required Qualifications

  • Bachelor’s degree with a Commerce background or MBA in Finance (mandatory)
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong expertise in accounting principles, R2R (Record-to-Report), treasury, and financial workflows.
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries)
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success

Preferred Qualifications

  • Prior experience implementing financial automation tools (e.g., SAP, Oracle, Anaplan, Blackline)
  • Familiarity with API integrations and basic data mapping
  • Experience in agile/scrum-based implementation environments
  • Exposure to reconciliation, book closure, AR/AP, and reporting systems
  • PMP, CSM, or similar certifications
Wissen Technology
Posted by Bharanidharan K
Mumbai
7 - 12 yrs
Best in industry
SQL
SQL Server
Databases
Performance tuning
Stored Procedures

Required Skills and Qualifications :


  • Bachelor’s degree in Computer Science, Information Technology, or a related field. 
  • Proven experience as a Data Modeler or in a similar role at an asset manager or financial firm.
  • Strong understanding of various business concepts related to buy-side financial firms. An understanding of Private Markets (Private Credit, Private Equity, Real Estate, Alternatives) is required.
  • Strong understanding of database design principles and data modeling techniques (e.g., ER modeling, dimensional modeling). 
  • Knowledge of SQL and experience with relational databases (e.g., Oracle, SQL Server, MySQL). 
  • Familiarity with NoSQL databases is a plus. 
  • Excellent analytical and problem-solving skills. 
  • Strong communication skills and the ability to work collaboratively. 


Preferred Qualifications: 

  • Experience in data warehousing and business intelligence. 
  • Knowledge of data governance practices. 
  • Certification in data modeling or related fields.

Key Responsibilities :

  • Design and develop conceptual, logical, and physical data models based on business requirements. 
  • Collaborate with stakeholders in finance, operations, risk, legal, compliance and front offices to gather and analyze data requirements. 
  • Ensure data models adhere to best practices for data integrity, performance, and security. 
  • Create and maintain documentation for data models, including data dictionaries and metadata. 
  • Conduct data profiling and analysis to identify data quality issues. 
  • Conduct detailed meetings and discussions with business to translate broad business functionality requirements into data concepts, data models and data products.


MyOperator - VoiceTree Technologies
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹12L - ₹20L / yr
Python
Django
FastAPI
Microservices
Large Language Models (LLM)

About Us:

MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
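As a rough, hedged illustration of the kind of backend service described here: a minimal FastAPI microservice with a health check and one endpoint. The route, model, and the stubbed answer_question helper are made-up names; a real LangChain/LLM integration would sit behind that stub.

  from fastapi import FastAPI
  from pydantic import BaseModel

  app = FastAPI(title="conversation-service")  # illustrative service name

  class Query(BaseModel):
      user_id: str
      question: str

  def answer_question(question: str) -> str:
      # Stub for an LLM/LangChain call; returns a canned reply here.
      return f"You asked: {question}"

  @app.get("/health")
  def health() -> dict:
      # Lightweight liveness probe for load balancers and monitoring.
      return {"status": "ok"}

  @app.post("/v1/answers")
  def create_answer(query: Query) -> dict:
      # Pydantic validates the payload; the AI work is delegated to the stub above.
      return {"user_id": query.user_id, "answer": answer_question(query.question)}

Run locally with, for example, uvicorn main:app --reload (module name assumed).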


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT, Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Service Co

Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹25L / yr
Azure Data Factory
Databricks
Blob storage
SQL
Python


Key Responsibilities:


Design, build and maintain scalable ETL/ELT pipelines using Azure Data Factory, Azure Databricks and Spark.


Develop and optimize data workflows using SQL and Python/Scala for large-scale processing.


Implement performance tuning and optimization strategies for pipelines and Spark jobs.


Support feature engineering and model deployment workflows with data engineering teams.


Ensure data quality, validation, error-handling and monitoring are in place. Work with Delta Lake, Parquet and Big Data storage (ADLS / Blob).
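Purely as a sketch of the kind of pipeline step described above, assuming a Databricks/Spark session with Delta Lake available; the ADLS paths and column names are hypothetical.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.getOrCreate()  # on Databricks a session already exists

  # Hypothetical ADLS locations; replace with real containers and paths.
  source_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
  target_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_delta/"

  orders = spark.read.parquet(source_path)

  # Basic validation: drop rows missing the key and flag suspicious amounts.
  clean = (
      orders
      .dropna(subset=["order_id"])
      .withColumn("is_valid_amount", F.col("amount") >= 0)
  )

  # Write a partitioned Delta table for downstream ELT and BI consumption.
  (
      clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save(target_path)
  )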


Required Skills:


Azure Data Platform: Data Factory, Databricks, ADLS / Blob Storage.


Strong SQL and Python or Scala.


Big Data technologies: Spark, Delta Lake, Parquet.


ETL/ELT pipeline design and data transformation expertise.


Data pipeline optimization, performance tuning and CI/CD for data workloads.


Nice-to-Have:


Familiarity with data governance, security and compliance in hybrid environments.

Nyx Wolves
Remote only
5 - 8 yrs
₹11L - ₹13L / yr
Denodo VDP
Denodo Scheduler
Denodo Data Catalog
SQL Server
Query optimization


💡 Transform Banking Data with Us!


We’re on the lookout for a Senior Denodo Developer (Remote) to shape the future of data virtualization in the banking domain. If you’re passionate about turning complex financial data into actionable insights, this role is for you! 🚀


What You’ll Do:

✔ Build cutting-edge Denodo-based data virtualization solutions

✔ Collaborate with banking SMEs, architects & analysts

✔ Design APIs, data services & scalable models

✔ Ensure compliance with global banking standards

✔ Mentor juniors & drive best practices


💼 What We’re Looking For:

🔹 6+ years of IT experience (3+ years in Denodo)

🔹 Strong in Denodo VDP, Scheduler & Data Catalog

🔹 Skilled in SQL, optimization & performance tuning

🔹 Banking/Financial services domain expertise (CBS, Payments, KYC/AML, Risk & Compliance)

🔹 Cloud knowledge (AWS, Azure, GCP)

📍 Location: Remote


🎯 Experience: 6+ years

🌟 Catchline for candidates:


👉 “If you thrive in the world of data and want to make banking smarter, faster, and more secure — this is YOUR chance!”


📩 Apply Now:

  • Connect with me here on Cutshort and share your resume/message directly.


Let’s build something great together 🚀


#WeAreHiring #DenodoDeveloper #BankingJobs #RemoteWork #DataVirtualization #FinTechCareers #DataIntegration #TechTalent

Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by susmitha o
Chennai, Hyderabad, Kolkata, Delhi, Pune, Bengaluru (Bangalore)
5 - 8 yrs
₹7L - ₹30L / yr
Informatica MDM
MDM
ETL
Big Data

• Technical expertise in the area of development of Master Data Management, data extraction, transformation, and load (ETL) applications, big data using existing and emerging technology platforms and cloud architecture

• Functions as lead developer

• Support system analysis, technical/data design, development, unit testing, and oversee end-to-end data solutions.

• Technical SME in Master Data Management application, ETL, big data and cloud technologies                                                                                               

• Collaborate with IT teams to ensure technical designs and implementations account for requirements, standards, and best practices                                                                                           

• Performance tuning of end-to-end MDM, database, ETL, Big data processes or in the source/target database endpoints as needed.                                                               

• Mentor and advise junior members of team to provide guidance.                                                  

• Perform a technical lead and solution lead role for a team of onshore and offshore developers

Wissen Technology
Posted by Archana M
Mumbai
5 - 7 yrs
Best in industry
ETL
Python
Apache Spark

📢 DATA SOURCING & ANALYSIS EXPERT (L3 Support) – Mumbai 📢

Are you ready to supercharge your Data Engineering career in the financial domain?

We’re seeking a seasoned professional (5–7 years experience) to join our Mumbai team and lead in data sourcing, modelling, and analysis. If you’re passionate about solving complex challenges in Relational & Big Data ecosystems, this role is for you.

What You’ll Be Doing

  • Translate business needs into robust data models, program specs, and solutions
  • Perform advanced SQL optimization, query tuning, and L3-level issue resolution
  • Work across the entire data stack: ETL, Python / Spark, Autosys, and related systems
  • Debug, monitor, and improve data pipelines in production
  • Collaborate with business, analytics, and engineering teams to deliver dependable data services

What You Should Bring

  • 5+ years in financial / fintech / capital markets environment
  • Proven expertise in relational databases and big data technologies
  • Strong command over SQL tuning, query optimization, indexing, partitioning
  • Hands-on experience with ETL pipelines, Spark / PySpark, Python scripting, job scheduling (e.g. Autosys)
  • Ability to troubleshoot issues at the L3 level, root cause analysis, performance tuning
  • Good communication skills — you’ll coordinate with business users, analytics, and tech teams


Data Axle
Posted by Nikita Sinha
Pune
3 - 6 yrs
Upto ₹28L / yr (Varies)
Python
PySpark
SQL
Amazon Web Services (AWS)
Databricks


Data Pipeline Development: Design and implement scalable data pipelines using PySpark and Databricks on AWS cloud infrastructure

ETL/ELT Operations: Extract, transform, and load data from various sources using Python, SQL, and PySpark for batch and streaming data processing

Databricks Platform Management: Develop and maintain data workflows, notebooks, and clusters in Databricks environment for efficient data processing

AWS Cloud Services: Utilize AWS services including S3, Glue, EMR, Redshift, Kinesis, and Lambda for comprehensive data solutions

Data Transformation: Write efficient PySpark scripts and SQL queries to process large-scale datasets and implement complex business logic

Data Quality & Monitoring: Implement data validation, quality checks, and monitoring solutions to ensure data integrity across pipelines

Collaboration: Work closely with data scientists, analysts, and other engineering teams to support analytics and machine learning initiatives

Performance Optimization: Monitor and optimize data pipeline performance, query efficiency, and resource utilization in Databricks and AWS environments
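As a small, hedged example of the PySpark-plus-SQL transformation work described above (the bucket, table, and column names are assumed, not taken from any real schema): deduplicating to the latest event per customer with a window function.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Assume raw events already land in S3 as Parquet (path is a placeholder).
  spark.read.parquet("s3://example-bucket/raw/customer_events/") \
      .createOrReplaceTempView("customer_events")

  # Keep only the most recent event per customer using ROW_NUMBER().
  latest_events = spark.sql("""
      SELECT customer_id, event_type, event_ts
      FROM (
          SELECT
              customer_id,
              event_type,
              event_ts,
              ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY event_ts DESC) AS rn
          FROM customer_events
      ) AS ranked
      WHERE rn = 1
  """)

  latest_events.write.mode("overwrite").parquet("s3://example-bucket/curated/latest_customer_events/")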

Required Qualifications:

Experience: 3+ years of hands-on experience in data engineering, ETL development, or related field

PySpark Expertise: Strong proficiency in PySpark for large-scale data processing and transformations

Python Programming: Solid Python programming skills with experience in data manipulation libraries (pandas etc)

SQL Proficiency: Advanced SQL skills including complex queries, window functions, and performance optimization

Databricks Experience: Hands-on experience with Databricks platform, including notebook development, cluster management, and job scheduling

AWS Cloud Services: Working knowledge of core AWS services (S3, Glue, EMR, Redshift, IAM, Lambda)

Data Modeling: Understanding of dimensional modeling, data warehousing concepts, and ETL best practices

Version Control: Experience with Git and collaborative development workflows


Preferred Qualifications:

Education: Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field

Advanced AWS: Experience with additional AWS services like Athena, QuickSight, Step Functions, and CloudWatch

Data Formats: Experience working with various data formats (JSON, Parquet, Avro, Delta Lake)

Containerization: Basic knowledge of Docker and container orchestration

Agile Methodology: Experience working in Agile/Scrum development environments

Business Intelligence Tools: Exposure to BI tools like Tableau, Power BI, or Databricks SQL Analytics


Technical Skills Summary:

Core Technologies:

  • PySpark & Spark SQL
  • Python (pandas, boto3)
  • SQL (PostgreSQL, MySQL, Redshift)
  • Databricks (notebooks, clusters, jobs, Delta Lake)

AWS Services:

  • S3, Glue, EMR, Redshift
  • Lambda, Athena
  • IAM, CloudWatch

Development Tools:

  • Git/GitHub
  • CI/CD pipelines, Docker
  • Linux/Unix command line


Goalkeep
Posted by Simran Adwani
Remote only
1 - 8 yrs
₹7.2L - ₹9L / yr
SQL
Data modeling
Relational Database (RDBMS)
ETL
Pipeline management


Apply here: https://forms.gle/DefR28CvNfepJT3o6


Roles and Responsibilities:

You’ll work closely with Goalkeep’s internal team and support client projects as needed. Your responsibilities will include: 

  • Infrastructure Maintenance & Optimization
  • Design and review data pipeline diagrams for both client and internal projects
  • Build data pipelines by writing clean, efficient SQL queries for data analysis and quality checks
  • Monitor data pipeline performance and proactively raise alarms and fix issues
  • Maintain and upgrade Goalkeep’s internal tech infrastructure
  • Monitor infrastructure costs to identify inefficiencies and recommend cost-saving strategies across cloud services and tech subscriptions
  • Internal Tech Enablement & Support: Provide tech setup and troubleshooting support to internal teams and analysts so that we can successfully deliver on client projects
  • Assist with onboarding new team members onto Goalkeep systems

What we’re looking for:

Hard Skills:

  • Data modeling and database management (PostgreSQL, MySQL, or SQL Server)
  • Installing and maintaining software on Linux systems
  • Familiarity with cloud-based platforms (AWS / GCP / Azure) is a must
  • Ability to troubleshoot based on system logs and performance indicators
  • Data engineering: writing efficient SQL, designing pipelines 

Soft Skills & Mindsets:

  • Curiosity and accountability when investigating system issues
  • Discipline to proactively maintain and monitor infra

Must-Know Tools:

  • SQL (any dialect)
  • Knowledge of software engineering best practices and version control (Git)

Preferred Qualifications:

  • Engineering degree with a minimum of 2 years of experience working as a data scientist/data analyst
  • Bachelor's degree in any STEM field

What’s in it for you?

  • The chance to work at the intersection of social impact and technology.
  • Learn and grow in areas such as cloud infrastructure, data governance, and data engineering.
  • Be part of a close-knit team passionate about using data for good. 
Wissen Technology
Posted by Manasa S
Mumbai
6 - 12 yrs
Best in industry
Informatica
Stored Procedures
SQL
ETL

We are looking for an experienced DB2 developer/DBA who has worked on a critical application with a large database. The role requires the candidate to understand the landscape of the application and the data, including its topology across the online data store and the data warehousing counterparts. The challenges we strive to solve include scalability and performance when dealing with very large data sets and multiple data sources.

The role involves collaborating with global team members and provides a unique opportunity to network with a diverse group of people.

The candidate who fills this role of a database developer in our team will be involved in building and creating solutions from the requirements stage through deployment. A successful candidate is self-motivated, innovative, thinks outside the box, has excellent communication skills, and can work with clients and stakeholders from both the business and technology sides with ease.


Required Skills:

  • Expertise in writing complex data retrieval queries, stored procs, and performance tuning
  • Experience in migrating a large-scale database from Sybase to a new tech stack
  • Expertise in relational DBs (Sybase, Azure SQL Server, DB2) and NoSQL databases
  • Strong knowledge of Linux shell scripting
  • Working knowledge of Python programming
  • Working knowledge of Informatica
  • Good knowledge of Autosys or any such scheduling tool
  • Detail oriented; ability to turn deliverables around quickly with a high degree of accuracy
  • Strong analytical skills; ability to interpret business requirements and produce functional and technical design documents
  • Good time management skills - ability to prioritize and multi-task, handling multiple efforts at once
  • Strong desire to understand and learn the domain


Desired Skills:

  • Experience in Sybase, Azure SQL Server, DB2
  • Experience in migrating relational databases to a modern tech stack
  • Experience in the financial services/banking industry

MatchMove
Posted by Ariba Khan
Remote only
6yrs+
Upto ₹50L / yr (Varies)
Python
ETL
Amazon Web Services (AWS)

About Us

MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.


Are You The One?

As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business.


You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.


You will contribute to

  • Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
  • Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
  • Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
  • Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
  • Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
  • Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
  • Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
  • Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities

  • Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
  • Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
  • Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation (a partitioning sketch follows this list).
  • Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
  • Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
  • Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
  • Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
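A minimal sketch of the partitioning idea flagged in the list above, assuming Spark with the Iceberg runtime and a Glue-backed catalog registered as glue_catalog; the database, table, and columns are illustrative only.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()  # assumes Iceberg + Glue catalog are already configured

  # Hidden partitioning on the event timestamp lets readers prune by day
  # without writers having to maintain a separate partition column.
  spark.sql("""
      CREATE TABLE IF NOT EXISTS glue_catalog.payments.transactions (
          txn_id     STRING,
          account_id STRING,
          amount     DECIMAL(18, 2),
          txn_ts     TIMESTAMP
      )
      USING iceberg
      PARTITIONED BY (days(txn_ts))
  """)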

 Requirements

  • At least 6 years of experience in data engineering.
  • Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
  • Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
  • Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
  • Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
  • Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
  • Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
  • Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
  • Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

 Brownie Points

  • Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements.
  • Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
  • Familiarity with data contracts, data mesh patterns, and data as a product principles.
  • Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
  • Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
  • Experience building data platforms for ML/AI teams or integrating with model feature stores.

 MatchMove Culture:

  • We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
  • We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship.
  • We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences.
  • Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.

Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!

Pluginlive
Posted by Harsha Saggi
Mumbai, Chennai
1 - 3 yrs
₹5L - ₹8L / yr
Python
SQL
Data Structures
ETL
Dashboard

About Us:

PluginLive is an all-in-one tech platform that bridges the gap between all its stakeholders - Corporates, Institutes, Students, and Assessment & Training Partners. This ecosystem helps Corporates in brand building/positioning with colleges and the student community to scale their human capital, while increasing student placements for Institutes and giving students a real-time perspective of the corporate world to help them upskill into more desirable candidates.


Role Overview:

Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.


Responsibilities:

  • Assist in building and maintaining ETL/ELT pipelines for data processing
  • Write SQL queries to extract and analyze data from various sources
  • Support data quality checks and basic data validation processes
  • Create simple dashboards and reports using visualization tools
  • Learn and work with Oracle Cloud services under guidance
  • Use Python for basic data manipulation and cleaning tasks
  • Document data processes and maintain data dictionaries
  • Collaborate with team members to understand data requirements
  • Participate in troubleshooting data issues with senior support
  • Contribute to data migration tasks as needed


Qualifications:

Required:

  • Bachelor's degree in Computer Science, Information Systems, or related field
  • Around 2 years of experience in data engineering or a related field
  • Strong SQL knowledge and database concepts
  • Comfortable with Python programming
  • Understanding of data structures and ETL concepts
  • Problem-solving mindset and attention to detail
  • Good communication skills
  • Willingness to learn cloud technologies


Preferred:

  • Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
  • Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
  • Experience with Pandas for data manipulation
  • Understanding of data warehousing concepts
  • Familiarity with version control (Git)
  • Academic projects or internships involving data processing


Nice-to-Have:

  • Knowledge of dbt, BigQuery, or Snowflake
  • Exposure to big data concepts
  • Experience with Jupyter notebooks
  • Comfort with AI-assisted coding tools (Copilot, GPTs)
  • Personal projects showcasing data work


What We Offer:

  • Mentorship from senior data engineers
  • Hands-on learning with modern data stack
  • Access to paid AI tools and learning resources
  • Clear growth path to mid-level engineer
  • Direct impact on product and data strategy
  • No unnecessary meetings — focused execution
  • Strong engineering culture with continuous learning opportunities
Cymetrix Software
Posted by Netra Shettigar
Remote only
3 - 7 yrs
₹8L - ₹20L / yr
Google Cloud Platform (GCP)
ETL
Python
Big Data
SQL

Must have skills:

1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java

2. ETL on GCP Cloud - build pipelines (Python/Java) + scripting, best practices, challenges (a minimal loading example follows this list)

3. Knowledge of batch and streaming data ingestion; building end-to-end data pipelines on GCP

4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs NoSQL, types of NoSQL DBs (at least 2 databases)

5. Data Warehouse concepts - beginner to intermediate level
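As a hedged example of item 2 above: loading a file from GCS into BigQuery with the google-cloud-bigquery client. The project, dataset, and bucket names are placeholders, and credentials are assumed to come from the environment.

  from google.cloud import bigquery

  client = bigquery.Client()  # picks up project and credentials from the environment

  # Placeholder identifiers; replace with a real project, dataset, and bucket.
  table_id = "example-project.analytics.daily_sales"
  source_uri = "gs://example-bucket/exports/daily_sales.csv"

  job_config = bigquery.LoadJobConfig(
      source_format=bigquery.SourceFormat.CSV,
      skip_leading_rows=1,                 # skip the header row
      autodetect=True,                     # infer the schema from the file
      write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
  )

  load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
  load_job.result()  # wait for the load to finish
  print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")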


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Ability to design and implement dimensional and fact tables.

● Identify and implement data transformation/cleansing requirements.

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems to the Enterprise Data Warehouse.

● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.

● Design, develop, and maintain ETL workflows and mappings using the appropriate data load technique.

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform into reporting & analytics.

● Define and document the use of BI through user experience/use cases, prototypes, and test and deploy BI solutions.

● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; and continuously validate reports and dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.

Aceis Services
Posted by Anushi Mishra
Remote only
2 - 10 yrs
₹8.6L - ₹30.2L / yr
CI/CD
Apache Spark
PySpark
MLOps
Machine Learning (ML)

We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!

Key Details

  • Work Type: Freelance / Contract
  • Location: Remote
  • Time Zones: IST / EST only
  • Domain: Data & AI, Cloud, Big Data, Machine Learning
  • Collaboration: Work with industry leaders on innovative projects

🔹 Open Roles

1. Databricks – Senior Consultant

  • Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
  • Experience: 6+ years

2. Databricks – ML Engineer

  • Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
  • Experience: 4+ years

3. Databricks – Solution Architect

  • Skills: Azure, GCP, AWS, CI/CD, MLOps
  • Experience: 7+ years

4. Databricks – Solution Consultant

  • Skills: SQL, Spark, BigQuery, Python, Scala
  • Experience: 2+ years

What We Offer

  • Opportunity to work with top-tier professionals and clients
  • Exposure to cutting-edge technologies and real-world data challenges
  • Flexible remote work environment aligned with IST / EST time zones
  • Competitive compensation and growth opportunities

📌 Skills We Value

Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark |

CoffeeBeans
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
5 - 7 yrs
Upto ₹22L / yr (Varies)
Python
SQL
ETL
Data modeling
Spark

Role Overview

We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.

You’ll also play a mentorship role and help establish strong engineering practices across our data projects.

Key Responsibilities

  • Design and develop large-scale, distributed data pipelines (batch and streaming)
  • Implement scalable data models, warehouses/lakehouses, and data lakes
  • Translate business requirements into technical data solutions
  • Optimize data pipelines for performance and reliability
  • Ensure code is clean, modular, tested, and documented
  • Contribute to architecture, tooling decisions, and platform setup
  • Review code/design and mentor junior engineers

Must-Have Skills

  • Strong programming skills in Python and advanced SQL
  • Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
  • Hands-on experience with frameworks like Apache Spark, Flink, etc.
  • Experience with orchestration tools like Airflow
  • Familiarity with CI/CD pipelines and Git
  • Ability to debug and scale data pipelines in production

Preferred Skills

  • Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
  • Exposure to Databricks, dbt, or similar tools
  • Understanding of data governance, quality frameworks, and observability
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus

What We’re Looking For

  • Problem-solver with strong analytical skills and attention to detail
  • Fast learner who can adapt across tools, tech stacks, and domains
  • Comfortable working in fast-paced, client-facing environments
  • Willingness to travel within India when required
Wissen Technology
Posted by Anurag Sinha
Bengaluru (Bangalore), Mumbai, Pune
4 - 8 yrs
Best in industry
Python
API
RESTful APIs
Flask
ETL
  • 4+ years of experience
  • Proficiency in Python programming
  • Experience with Python service development (REST API/Flask API)
  • Basic knowledge of front-end development
  • Basic knowledge of data manipulation and analysis libraries
  • Code versioning and collaboration (Git)
  • Knowledge of libraries for extracting data from websites
  • Knowledge of SQL and NoSQL databases
  • Familiarity with cloud (Azure/AWS) technologies


Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Hyderabad, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹15L / yr
Project Management
SQL Query Analyzer
JIRA
Confluence
Implementation

Role: Technical Lead - Finance Solutions

Exp: 3 - 6 Years

CTC: up to 20 LPA



Required Qualifications

  • Bachelor’s degree in Finance, Business Administration, Information Systems, or related field
  • 3+ years of hands-on implementation/project management experience
  • Proven experience delivering projects in Fintech, SaaS, or ERP environments
  • Strong understanding of accounting principles and financial workflows
  • Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries)
  • Experience working with ETL pipelines or data migration processes
  • Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
  • Strong communication and stakeholder management skills
  • Ability to manage multiple projects simultaneously and drive client success


MindCrew Technologies
Agency job
Pune
8 - 12 yrs
₹10L - ₹15L / yr
Data engineering
Data modeling
Snowflake schema
ETL
ETL architecture

Job Title: Lead Data Engineer

📍 Location: Pune

🧾 Experience: 10+ Years

💰 Budget: Up to 1.7 LPM


Responsibilities

  • Collaborate with Data & ETL teams to review, optimize, and scale data architectures within Snowflake.
  • Design, develop, and maintain efficient ETL/ELT pipelines and robust data models.
  • Optimize SQL queries for performance and cost efficiency.
  • Ensure data quality, reliability, and security across pipelines and datasets.
  • Implement Snowflake best practices for performance, scaling, and governance.
  • Participate in code reviews, knowledge sharing, and mentoring within the data engineering team.
  • Support BI and analytics initiatives by enabling high-quality, well-modeled datasets.


MindCrew Technologies
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Pune
10 - 14 yrs
₹10L - ₹15L / yr
Snowflake
ETL
SQL
Snowflake schema
Data modeling

Exp: 10+ Years

CTC: 1.7 LPM

Location: Pune

Snowflake Expertise Profile


Should hold 10+ years of experience with strong skills, a core understanding of cloud data warehouse principles, and extensive experience in designing, building, optimizing, and maintaining robust and scalable data solutions on the Snowflake platform.

Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring and security handling.


Responsibilities:

* Collaborate with the Data and ETL team to review code, understand the current architecture, and help improve it based on Snowflake offerings and experience.

* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL or BI.

* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake.

* Ensure data quality, integrity, and security within the Snowflake environment.

* Participate in code reviews and contribute to the team’s development standards.

Education:

* Bachelor’s degree in computer science, Data Science, Information Technology, or anything equivalent.

* Relevant Snowflake certifications are a plus (e.g., Snowflake certified Pro / Architecture / Advanced).

Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
Warehousing concepts
Google Cloud Platform (GCP)

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.


Responsibilities:

- Design, build, and maintain scalable data pipelines for structured and unstructured data sources (a minimal sketch follows this list)
- Develop ETL processes to collect, clean, and transform data from internal and external systems. Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance.
- Document workflows and maintain data infrastructure best practices.
- Assist in identifying opportunities to automate repetitive data tasks
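A minimal sketch of the pipeline work listed above, using pandas with made-up file names and columns (not an actual Springer Capital dataset):

  import pandas as pd

  # Extract: read a raw export (hypothetical file and columns).
  raw = pd.read_csv("raw_property_data.csv")

  # Transform: basic cleaning plus one derived metric.
  clean = (
      raw.dropna(subset=["property_id", "city"])
         .assign(price_per_sqft=lambda df: df["sale_price"] / df["square_feet"])
  )

  # Load: write a tidy file that a dashboard or BI tool can pick up.
  clean.to_csv("curated_property_data.csv", index=False)
  print(f"Wrote {len(clean)} cleaned rows")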


Please send your resume to talent@springer.capital

empowers digital transformation for innovative and high grow

Agency job
via Hirebound by Jebin Joy
Pune
4 - 12 yrs
₹12L - ₹30L / yr
Hadoop
Spark
Apache Kafka
ETL
Java

To be successful in this role, you should possess

• Collaborate closely with Product Management and Engineering leadership to devise and build the right solution.

• Participate in design discussions and brainstorming sessions to select, integrate, and maintain Big Data tools and frameworks required to solve Big Data problems at scale.

• Design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark.

• Understand and critically review existing data pipelines, and come up with ideas in collaboration with Technical Leaders and Architects to improve upon current bottlenecks.

• Take initiative and show the drive to pick up new things proactively, and work as a Senior Individual Contributor on the multiple products and features we have.

• 3+ years of experience in developing highly scalable Big Data pipelines.

• In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file types they deal with.

• Experience with ETL and data pipeline tools like Apache NiFi, Airflow, etc.

• Excellent coding skills in Java or Scala, including the understanding to apply appropriate design patterns when required.

• Experience with Git and build tools like Gradle/Maven/SBT.

• Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.

• Elegant, readable, maintainable, and extensible code style.


You are someone who would easily be able to

• Work closely with the US and India engineering teams to help build the Java/Scala-based data pipelines.

• Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.

• Troubleshoot live production server issues.

• Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal team supervision.

• Follow Agile methodology and JIRA for work planning and issue management/tracking.


Additional Project/Soft Skills:

• Should be able to work independently with India & US based team members.

• Strong verbal and written communication with ability to articulate problems and solutions over phone and emails.

• Strong sense of urgency, with a passion for accuracy and timeliness.

• Ability to work calmly in high pressure situations and manage multiple projects/tasks.

• Ability to work independently and possess superior skills in issue resolution.

• Should have the passion to learn and implement, analyze and troubleshoot issues

Lorven technologies Inc
Remote only
5 - 10 yrs
₹2L - ₹13L / yr
SnapLogic
ETL
Oracle
MongoDB

Role Overview

We are seeking a skilled and highly motivated ETL Developer to fill a key role on a distributed team in a dynamic, fast-paced environment. This project is an enterprise-wide distributed system with users worldwide.

This hands-on role requires the candidate to work collaboratively in a squad following a Scaled Agile development methodology. You must be a self-starter, delivery-focused, and possess a broad set of technology skills.

We will count on you to:

  • Designs, codes, tests and debugs new and existing software applications primarily using ETL technologies and relational database languages.
  • Excellent documentation and presentation skills, analytical and critical thinking skills, and the ability to identify needs and take initiative ​
  • Proven expertise working on large scale enterprise applications
  • Working on Agile/Scrum/Spotify development methodology
  • Quickly learn new technologies, solve complex problems, and ramp up on new projects quickly.
  • Communicate effectively and be able to review one’s own work as well as others’ with particular attention to accuracy and detail.
  • The candidate must demonstrate strong knowledge of ETL technology and be able to work effectively on distributed components.
  • Investigate research and correct defects effectively and efficiently.
  • Ensure code meets specifications, quality and security standards, and is maintainable
  • Complete work within prescribed standards and follow prescribed workflow process.
  • Unit test software components efficiently and effectively
  • Ensure that solution requirements are gathered accurately, understood, and that all stakeholders have transparency on impacts
  • Follow engineering best practices and principles within your organisation
  • Work closely with a Lead Software Engineer
  • Build strong relationships with members of your engineering squad

What you need to have:

  • Proven track record of successfully delivering software solutions
  • The ability to communicate effectively to both technical and non-technical colleagues in a cross-functional environment
  • Some experience or knowledge of working with Agile at Scale, Lean and Continuous Delivery approaches such as Continuous Integration, Test-Driven Development and Infrastructure as Code
  • Some experience with cloud native software architectures
  • Proven experience in the remediation of SAST/DAST findings
  • Understanding of CI/CD and DevOps practices
  • Strong Self-starter and active squad contributor

Technical Skills or Qualifications Required:

Mandatory Skills:

  • Strong ETL skills: SnapLogic
  • Expertise in relational databases (Oracle, SSMS) and familiarity with the NoSQL database MongoDB
  • Knowledge of data warehousing concepts and data modelling
  • Experience performing validations on large-scale data
  • Strong REST API, JSON, and data transformation experience
  • Experience with unit testing and integration testing
  • Knowledge of SDLC processes and practices, and experience with some or all of: Confluence, JIRA, ADO, GitHub, etc.


Pluginlive
Posted by Harsha Saggi
Chennai, Mumbai
4 - 6 yrs
₹10L - ₹20L / yr
Python
SQL
NoSQL Databases
Data architecture
Data modeling

Role Overview:

We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), data warehousing principles, and proficiency in data manipulation and visualization tools, including Python and SQL.


Responsibilities:

  • Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
  • Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
  • Define data governance policies and procedures to ensure data quality, security, and compliance.
  • Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
  • Develop and execute data migration strategies to Oracle Cloud.
  • Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
  • Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
  • Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders (a small visualization sketch follows this list).
  • Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
  • Ensure the performance and reliability of data visualization dashboards and reports.
  • Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
  • Troubleshoot data-related issues and provide timely resolutions.
  • Document data architectures, data flows, and data visualization solutions.
  • Participate in the evaluation and selection of new data technologies and tools.
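A small, hedged sketch of the visualization side of the role, using pandas and Matplotlib with made-up figures; production dashboards would more likely live in Tableau, Power BI, or a Plotly app.

  import pandas as pd
  import matplotlib.pyplot as plt

  # Made-up monthly figures, for illustration only.
  monthly = pd.DataFrame(
      {"month": ["Jan", "Feb", "Mar", "Apr"], "placements": [120, 150, 170, 160]}
  )

  fig, ax = plt.subplots(figsize=(6, 3))
  ax.bar(monthly["month"], monthly["placements"], color="steelblue")
  ax.set_title("Placements per month (illustrative data)")
  ax.set_xlabel("Month")
  ax.set_ylabel("Placements")
  fig.tight_layout()
  fig.savefig("placements_by_month.png")  # or plt.show() in a notebook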


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
  • Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role. 

  • Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
  • Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
  • Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
  • Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
  • Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
  • Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights. 
  • Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences. 
  • Experience with data governance and data quality principles.
  • Familiarity with agile development methodologies.
  • Ability to work independently and collaboratively within a team environment.

Application Link- https://forms.gle/km7n2WipJhC2Lj2r5

Wissen Technology
Posted by Shivangi Bhattacharyya
Bengaluru (Bangalore)
6 - 8 yrs
Best in industry
IntelliMatch
MS SQL Server
ETL
Informatica

Job Title: IntelliMatch


Location: Bangalore (Hybrid)

Experience: 6+ Years

Employment Type: Full-time


Role:

We are looking for a highly skilled and experienced professional with deep knowledge of IntelliMatch and Microsoft SQL Server (MSSQL) to join our dynamic team. The ideal candidate should have a strong understanding of financial reconciliations, hands-on experience implementing IntelliMatch solutions in enterprise environments, and working experience with ETL tools, preferably Informatica.


Key Responsibilities:

  • Design, configure, and implement reconciliation processes using IntelliMatch.
  • Manage and optimize data processing workflows in MSSQL for high-performance reconciliation systems.
  • Working experience with ETL tools preferably Informatica
  • Collaborate with business analysts and stakeholders to gather reconciliation requirements and translate them into technical solutions.
  • Troubleshoot and resolve complex issues in the reconciliation process.
  • Ensure data accuracy, integrity, and compliance with business rules and controls.
  • Support end-to-end testing and deployment processes.
  • Document technical solutions and maintain configuration records.
  • 6+ years of IT experience with a strong focus on IntelliMatch (FIS) implementation and support.
  • Hands-on expertise in MSSQL writing complex queries, stored procedures, performance tuning, etc.
  • Strong knowledge of reconciliation workflows in financial services.
  • Ability to work independently in a fast-paced environment.


QAgile Services
Posted by Radhika Chotai
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹25L / yr
Data Science
Python
Machine Learning (ML)
Power BI
SQL

Proven experience as a Data Scientist or in a similar role, with relevant experience of at least 4 years and total experience of 6-8 years.

· Technical expertise regarding data models, database design and development, data mining, and segmentation techniques

· Strong knowledge of and experience with reporting packages (Business Objects and the like), databases, and programming in ETL frameworks

· Experience with data movement and management in the cloud utilizing a combination of Azure or AWS features

· Hands-on experience with data visualization tools – Power BI preferred

· Solid understanding of machine learning

· Knowledge of data management and visualization techniques

· A knack for statistical analysis and predictive modeling

· Good knowledge of Python and Matlab

· Experience with SQL and NoSQL databases, including the ability to write complex queries and procedures

A Data Analytics company

Agency job
via FIRST CAREER CENTRE by Aisha Fcc
Bengaluru (Bangalore)
4 - 10 yrs
₹18L - ₹35L / yr
ETL
Python
SQL
Microsoft Windows Azure

Key Responsibilities

  • Data Architecture & Pipeline Development
  • Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
  • Integrate structured, semi-structured, and unstructured data from multiple sources.
  • Data Storage & Management
  • Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
  • Ensure proper indexing, partitioning, and storage optimization for performance.
  • Data Governance & Security
  • Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
  • Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
  • Collaboration & Stakeholder Engagement
  • Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
  • Provide technical guidance and best practices for data integration and transformation.
  • Monitoring & Optimization
  • Set up monitoring and alerting for data pipelines.


Sonatype
Posted by Reshika Mendiratta
Hyderabad
6 - 10 yrs
₹15L - ₹33L / yr
ETL
Spark
Apache Kafka
Python
Java
+11 more

The Opportunity

We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions.


You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.


What You’ll Do

  • Design, build, and maintain scalable data pipelines and ETL/ELT processes.
  • Architect and optimize data models and storage solutions for analytics and operational use.
  • Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
  • Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
  • Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake).
  • Write complex, high-performance SQL queries to support reporting and analytics needs.
  • Implement observability, alerting, and data quality monitoring for critical pipelines.
  • Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
  • Contribute to the evolution of our next-generation data lakehouse and BI architecture.
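
As a hedged sketch of how a pipeline like the one described above might be orchestrated (Airflow 2.4+ assumed; the DAG, task, and function names are illustrative only):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**_):
    # Placeholder for pulling source data and loading it into the warehouse.
    print("extracting and loading daily snapshot")


def refresh_reporting_views(**_):
    # Placeholder for rebuilding the models/views that dashboards read from.
    print("refreshing reporting layer")


with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    refresh = PythonOperator(task_id="refresh_reporting_views", python_callable=refresh_reporting_views)

    load >> refresh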


What We’re Looking For


Minimum Qualifications

  • 5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
  • Strong programming skills in Python, Java, or Scala.
  • Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
  • Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
  • Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
  • Experience with BI tools such as Looker Studio, Power BI, or Tableau.
  • Experience in building and maintaining robust ETL/ELT pipelines in production.
  • Understanding of data quality, observability, and governance best practices.


Bonus Points

  • Experience with dbt, Terraform, or Kubernetes.
  • Familiarity with real-time data processing or streaming architectures.
  • Understanding of data privacy, compliance, and security best practices in analytics and reporting.


Why You’ll Love Working Here

  • Data with purpose: Work on problems that directly impact how the world builds secure software.
  • Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
  • Modern tooling: Leverage the best of open-source and cloud-native technologies.
  • Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.
Read more
Sonatype

at Sonatype

5 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
2 - 5 yrs
Upto ₹20L / yr (Varies)
Python
ETL
Spark
Apache Kafka
databricks
+12 more

About the Role

We’re hiring a Data Engineer to join our Data Platform team. You’ll help build and scale the systems that power analytics, reporting, and data-driven features across the company. This role works with engineers, analysts, and product teams to make sure our data is accurate, available, and usable.


What You’ll Do

  • Build and maintain reliable data pipelines and ETL/ELT workflows.
  • Develop and optimize data models for analytics and internal tools.
  • Work with team members to deliver clean, trusted datasets.
  • Support core data platform tools like Airflow, dbt, Spark, Redshift, or Snowflake.
  • Monitor data pipelines for quality, performance, and reliability.
  • Write clear documentation and contribute to test coverage and CI/CD processes.
  • Help shape our data lakehouse architecture and platform roadmap.


What You Need

  • 2–4 years of experience in data engineering or a backend data-related role.
  • Strong skills in Python or another backend programming language.
  • Experience working with SQL and distributed data systems (e.g., Spark, Kafka).
  • Familiarity with NoSQL stores like HBase or similar.
  • Comfortable writing efficient queries and building data workflows.
  • Understanding of data modeling for analytics and reporting.
  • Exposure to tools like Airflow or other workflow schedulers.


Bonus Points

  • Experience with DBT, Databricks, or real-time data pipelines.
  • Familiarity with cloud infrastructure tools like Terraform or Kubernetes.
  • Interest in data governance, ML pipelines, or compliance standards.


Why Join Us?

  • Work on data that supports meaningful software security outcomes.
  • Use modern tools in a cloud-first, open-source-friendly environment.
  • Join a team that values clarity, learning, and autonomy.


If you're excited about building impactful software and helping others do the same, this is an opportunity to grow as a technical leader and make a meaningful impact.

Read more
Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
8 - 10 yrs
₹6L - ₹20L / yr
ETL
datastage
ETL Datastage
IBM InfoSphere DataStage

Experience:
  • 7+ years of experience in ETL development using IBM DataStage.
  • Hands-on experience with designing, developing, and maintaining ETL jobs for data warehousing or business intelligence solutions.
  • Experience with data integration across relational databases (e.g., IBM DB2, Oracle, MS SQL Server), flat files, and other data sources.

Technical Skills:
  • Strong proficiency in IBM DataStage (Designer, Director, Administrator, and Manager components).
  • Expertise in SQL and database programming (e.g., PL/SQL, T-SQL).
  • Familiarity with data warehousing concepts, data modeling, and ETL/ELT processes.
  • Experience with scripting languages (e.g., UNIX shell scripting) for automation.
  • Knowledge of CI/CD tools (e.g., Git, Bitbucket, Artifactory) and Agile methodologies.
  • Familiarity with IBM watsonx.data integration or other ETL tools (e.g., Informatica, Talend) is a plus.
  • Experience with big data technologies (e.g., Hadoop) is an advantage.

Soft Skills:
  • Excellent problem-solving and analytical skills.
  • Strong communication and interpersonal skills to collaborate with stakeholders and cross-functional teams.
  • Ability to work independently and manage multiple priorities in a fast-paced environment.
Read more
Springer Capital
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
PowerBI
Microsoft Excel
SQL
Attention to detail
Troubleshooting
+13 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.

The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.

Responsibilities:

  • Design, build, and maintain scalable data pipelines for structured and unstructured data sources
  • Develop ETL processes to collect, clean, and transform data from internal and external systems
  • Support integration of data into dashboards, analytics tools, and reporting systems
  • Collaborate with data analysts and software developers to improve data accessibility and performance
  • Document workflows and maintain data infrastructure best practices
  • Assist in identifying opportunities to automate repetitive data tasks
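
A minimal, hypothetical sketch of the extract–clean–load flow described above, using pandas and a local SQLite database as stand-ins for the real sources and reporting store (file and table names are invented):

import sqlite3

import pandas as pd

# Extract: a CSV export standing in for an internal or external system.
raw = pd.read_csv("deals_export.csv")

# Transform: basic cleaning rules typical of an ETL step.
clean = (
    raw.dropna(subset=["deal_id"])
       .drop_duplicates(subset=["deal_id"])
       .assign(deal_value=lambda df: pd.to_numeric(df["deal_value"], errors="coerce"))
)

# Load: write the cleaned table where dashboards and reports can read it.
with sqlite3.connect("analytics.db") as conn:
    clean.to_sql("deals", conn, if_exists="replace", index=False)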


Read more
Springer Capital
Andrew Rose
Posted by Andrew Rose
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
Warehousing concepts
Google Cloud Platform (GCP)
+15 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process. 

 

Responsibilities: 

▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources 

▪ Develop ETL processes to collect, clean, and transform data from internal and external systems 

▪ Support integration of data into dashboards, analytics tools, and reporting systems 

▪ Collaborate with data analysts and software developers to improve data accessibility and performance 

▪ Document workflows and maintain data infrastructure best practices 

▪ Assist in identifying opportunities to automate repetitive data tasks 

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Priyanka Seshadri
Posted by Priyanka Seshadri
Hyderabad, Pune, Bengaluru (Bangalore)
6 - 13 yrs
Best in industry
SQL
ETL
Banking
  • 5–10 years of experience in ETL Testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts; ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience with JIRA and Xray defect management tools is good to have.
  • Exposure to financial domain knowledge is considered a plus.
  • Test data readiness (data quality) and address code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Demonstrated strong collaborative experience across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution.
  • Prior experience with State Street and Charles River Development (CRD) is considered a plus.
  • Experience with tools such as PowerPoint, Excel, and SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus.

Key Attributes include:

  • Team player with professional and positive approach
  • Creative, innovative and able to think outside of the box
  • Strong attention to detail during root cause analysis and defect issue resolution
  • Self-motivated & self-sufficient
  • Effective communicator both written and verbal
  • Brings a high level of energy with enthusiasm to generate excitement and motivate the team
  • Able to work under pressure with tight deadlines and/or multiple projects
  • Experience in negotiation and conflict resolution
Read more
VDart

VDart

Agency job
via VDart by Don Blessing
Remote only
8 - 10 yrs
₹20L - ₹25L / yr
Cleo
EDI
EDI management
ERP management
Supply Chain Management (SCM)
+5 more

Role: Cleo EDI Solution Architect / Sr EDI Developer

Location : Remote

Start Date – asap


This is a niche technology (Cleo EDI) that enables the integration of ERP with Transportation Management, Extended Supply Chain, and related systems.

 

Expertise in designing and developing end-to-end integration solutions, especially B2B integrations involving EDI (Electronic Data Interchange) and APIs.

Familiarity with Cleo Integration Cloud or similar EDI platforms.

Strong experience with Azure Integration Services, particularly:

  • Azure Data Factory – for orchestrating data movement and transformation
  • Azure Functions – for serverless compute tasks in integration pipelines
  • Azure Logic Apps or Service Bus – for message handling and triggering workflows

Understanding of ETL/ELT processes and data mapping.

Solid grasp of EDI standards (e.g., X12, EDIFACT) and workflows.

Experience working with EDI developers and analysts to align business requirements with technical implementation.

Familiarity with Cleo EDI tools or similar platforms.

Develop and maintain EDI integrations using Cleo Integration Cloud (CIC), Cleo Clarify, or similar Cleo solutions.

Create, test, and deploy EDI maps for transactions such as 850, 810, 856, etc., and other EDI/X12/EDIFACT documents.

Configure trading partner setups, including communication protocols (AS2, SFTP, FTP, HTTPS).

Monitor EDI transaction flows, identify errors, troubleshoot, and implement fixes.

Collaborate with business analysts, ERP teams, and external partners to gather and analyze EDI requirements.

Document EDI processes, mappings, and configurations for ongoing support and knowledge sharing.

Provide timely support for EDI-related incidents, ensuring minimal disruption to business operations.

Participate in EDI onboarding projects for new trading partners and customers.
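
As a simplified, hedged illustration of the X12 documents referenced above (real maps are built inside Cleo; the separators shown are the common defaults and the sample payload is invented):

# Minimal X12 segment reader: segments end with "~", elements are "*"-separated.
SAMPLE_850 = (
    "ST*850*0001~"
    "BEG*00*SA*PO12345**20240105~"
    "PO1*1*10*EA*12.50**VP*SKU-001~"
    "CTT*1~"
    "SE*5*0001~"
)

def parse_segments(payload: str):
    """Split an X12 payload into (segment_id, elements) pairs."""
    for raw in payload.strip().split("~"):
        if not raw:
            continue
        parts = raw.split("*")
        yield parts[0], parts[1:]

for seg_id, elements in parse_segments(SAMPLE_850):
    if seg_id == "BEG":
        print("Purchase order number:", elements[2])
    elif seg_id == "PO1":
        print("Line item qty/price:", elements[1], elements[3])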

Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore), Pune, Jaipur, Bhopal, Gurugram, Hyderabad
5 - 7 yrs
₹5L - ₹18L / yr
Software Testing (QA)
Manual testing
SQL
ETL

🚀 Hiring: Manual Tester

⭐ Experience: 5+ Years

📍 Location: Pan India

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


Must-Have Skills:

✅5+ years of experience in Manual Testing

✅Solid experience in ETL, Database, and Report Testing

✅Strong expertise in SQL queries, RDBMS concepts, and DML/DDL operations

✅Working knowledge of BI tools such as Power BI

✅Ability to write effective Test Cases and Test Scenarios

Read more
Fountane inc
HR Fountane
Posted by HR Fountane
Remote only
5 - 9 yrs
₹18L - ₹32L / yr
Amazon Web Services (AWS)
AWS Lambda
AWS CloudFormation
ETL
Docker
+3 more

Position Overview: We are looking for an experienced and highly skilled Senior Data Engineer to join our team and help design, implement, and optimize data systems that support high-end analytical solutions for our clients. As a customer-centric Data Engineer, you will work closely with clients to understand their business needs and translate them into robust, scalable, and efficient technical solutions. You will be responsible for end-to-end data modelling, integration workflows, and data transformation processes while ensuring security, privacy, and compliance. In this role, you will also leverage the latest advancements in artificial intelligence, machine learning, and large language models (LLMs) to deliver high-impact solutions that drive business success. The ideal candidate will have a deep understanding of data infrastructure, optimization techniques, and cost-effective data management.


Key Responsibilities:


• Customer Collaboration:

– Partner with clients to gather and understand their business requirements, translating them into actionable technical specifications.

– Act as the primary technical consultant to guide clients through data challenges and deliver tailored solutions that drive value.


• Data Modeling & Integration:

– Design and implement scalable, efficient, and optimized data models to support business operations and analytical needs.

– Develop and maintain data integration workflows to seamlessly extract, transform, and load (ETL) data from various sources into data repositories.

– Ensure smooth integration between multiple data sources and platforms, including cloud and on-premise systems.


• Data Processing & Optimization:

– Develop, optimize, and manage data processing pipelines to enable real-time and batch data processing at scale.

– Continuously evaluate and improve data processing performance, optimizing for throughput while minimizing infrastructure costs.


• Data Governance & Security:

– Implement and enforce data governance policies and best practices, ensuring data security, privacy, and compliance with relevant industry regulations (e.g., GDPR, HIPAA).

– Collaborate with security teams to safeguard sensitive data and maintain privacy controls across data environments.


• Cross-Functional Collaboration:

– Work closely with data engineers, data scientists, and business analysts to ensure that the data architecture aligns with organizational objectives and delivers actionable insights.

– Foster collaboration across teams to streamline data workflows and optimize solution delivery.


• Leveraging Advanced Technologies:

– Utilize AI, machine learning models, and large language models (LLMs) to automate processes, accelerate delivery, and provide smart, data-driven solutions to business challenges.

– Identify opportunities to apply cutting-edge technologies to improve the efficiency, speed, and quality of data processing and analytics.


• Cost Optimization:

– Proactively manage infrastructure and cloud resources to optimize throughput while minimizing operational costs.

– Make data-driven recommendations to reduce infrastructure overhead and increase efficiency without sacrificing performance.


Qualifications:


• Experience:

– Proven experience (5+ years) as a Data Engineer or similar role, designing and implementing data solutions at scale.

– Strong expertise in data modelling, data integration (ETL), and data transformation processes.

– Experience with cloud platforms (AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).


• Technical Skills:

– Advanced proficiency in SQL, data modelling tools (e.g., Erwin, PowerDesigner), and data integration frameworks (e.g., Apache NiFi, Talend).

– Strong understanding of data security protocols, privacy regulations, and compliance requirements.

– Experience with data storage solutions (e.g., data lakes, data warehouses, NoSQL, relational databases).


• AI & Machine Learning Exposure:

– Familiarity with leveraging AI and machine learning technologies (e.g., TensorFlow, PyTorch, scikit-learn) to optimize data processing and analytical tasks.

– Ability to apply advanced algorithms and automation techniques to improve business processes.


• Soft Skills:

– Excellent communication skills to collaborate with clients, stakeholders, and cross-functional teams.

– Strong problem-solving ability with a customer-centric approach to solution design.

– Ability to translate complex technical concepts into clear, understandable terms for non-technical audiences.


• Education:

– Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field (or equivalent practical experience).


LIFE AT FOUNTANE:

  • Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
  • Competitive pay
  • Health insurance for spouses, kids, and parents.
  • PF/ESI or equivalent
  • Individual/team bonuses
  • Employee stock ownership plan
  • Fun/challenging variety of projects/industries
  • Flexible workplace policy - remote/physical
  • Flat organization - no micromanagement
  • Individual contribution - set your deadlines
  • Above all - culture that helps you grow exponentially!


A LITTLE BIT ABOUT THE COMPANY:

Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.

We’re a team of 120+ people from around the world who are radically open-minded and believe in excellence, respecting one another, and pushing our boundaries further than ever before.

Read more
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Bengaluru (Bangalore)
3 - 8 yrs
₹9L - ₹15L / yr
Salesforce development
Oracle Application Express (APEX)
Salesforce Lightning
SQL
ETL
+6 more

1. Software Development Engineer - Salesforce

What we ask for

We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model that will allow us to architect and develop robust applications.

You will work closely with business and product teams to build applications which provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.

Develop systems by implementing software development principles and clean code practices; systems should be scalable, secure, highly resilient, and have low latency.

You should be open to working in a start-up environment and have the confidence to deal with complex issues while keeping focus on solutions and project objectives as your guiding North Star.


Technical Skills:

● Strong hands-on frontend development using JavaScript and LWC

● Expertise in backend development using Apex, Flows, and Async Apex

● Understanding of database concepts: SOQL, SOSL, and SQL

● Hands-on experience in API integration using SOAP, REST API, and GraphQL

● Experience with ETL tools, data migration, and data governance

● Experience with Apex design patterns, integration patterns, and the Apex testing framework

● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, and Bitbucket

● Should have worked with at least one programming language – Java, Python, or C++ – and have a good understanding of data structures


Preferred qualifications

● Graduate degree in engineering

● Experience developing with India stack

● Experience in fintech or banking domain

Read more
top Mnc

top Mnc

Agency job
via VY SYSTEMS PRIVATE LIMITED by Ajeethkumar s
Bengaluru (Bangalore)
4.5 - 7 yrs
₹4L - ₹12L / yr
ETL
Snow flake schema
Amazon Web Services (AWS)
Amazon Redshift
SQL

∙ Designing, building, and automating ETL processes using AWS services like Apache Sqoop, AWS S3, AWS CLI, Amazon EMR, Amazon MSK, and Amazon SageMaker (see the sketch after this list).

∙ Developing and maintaining data pipelines to move and transform data from diverse sources into data warehouses or data lakes.

∙ Ensuring data quality and integrity through validation, cleansing, and monitoring of ETL processes.

∙ Optimizing ETL workflows for performance, scalability, and cost efficiency within the AWS environment.

∙ Troubleshooting and resolving issues related to data processing and ETL workflows.

∙ Implementing and maintaining security measures and compliance standards for data pipelines and infrastructure.

∙ Documenting ETL processes, data mappings, and system architecture.
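
A hedged sketch of one small step in such a pipeline, assuming boto3 and pandas and using placeholder bucket and key names: pull a raw object from S3, apply a basic validation rule, and write the cleansed result back.

import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: read a raw CSV object from the landing bucket (placeholder names).
obj = s3.get_object(Bucket="example-raw-bucket", Key="sales/2024-01-05.csv")
df = pd.read_csv(obj["Body"])

# Validate/cleanse: drop rows failing a basic integrity rule.
df = df[df["amount"].notna() & (df["amount"] >= 0)]

# Load: write the cleansed partition to the curated bucket.
buffer = io.StringIO()
df.to_csv(buffer, index=False)
s3.put_object(Bucket="example-curated-bucket", Key="sales/2024-01-05.csv", Body=buffer.getvalue())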

 


Read more
Egen Solutions
Hemavathi Panduri
Posted by Hemavathi Panduri
Hyderabad
4 - 8 yrs
₹12L - ₹25L / yr
Python
Google Cloud Platform (GCP)
ETL
Apache Airflow

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.


Key Responsibilities:

  • Design, develop, test, and maintain scalable ETL data pipelines using Python.
  • Work extensively on Google Cloud Platform (GCP) services such as:
  • Dataflow for real-time and batch data processing
  • Cloud Functions for lightweight serverless compute
  • BigQuery for data warehousing and analytics
  • Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  • Google Cloud Storage (GCS) for managing data at scale
  • IAM for access control and security
  • Cloud Run for containerized applications
  • Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
  • Implement and enforce data quality checks, validation rules, and monitoring.
  • Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
  • Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
  • Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
  • Document pipeline designs, data flow diagrams, and operational support procedures.
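
A minimal sketch of the ingestion-plus-quality-check responsibilities above (google-cloud-bigquery client assumed; project, dataset, table, and bucket names are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Ingest: load a CSV from GCS into a staging table (placeholder URIs).
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders_2024-01-05.csv",
    "example-project.staging.orders",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # Wait for the load to finish.

# Data quality check: no duplicate order ids in the staged data.
sql = """
    SELECT order_id, COUNT(*) AS n
    FROM `example-project.staging.orders`
    GROUP BY order_id
    HAVING n > 1
"""
duplicates = list(client.query(sql).result())
assert not duplicates, f"Found {len(duplicates)} duplicate order ids"

In a Cloud Composer setup, a step like this would typically sit inside a DAG task rather than run as a standalone script.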

Required Skills:

  • 4–8 years of hands-on experience in Python for backend or data engineering projects.
  • Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
  • Solid understanding of data pipeline architecture, data integration, and transformation techniques.
  • Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
  • Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).



Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Indore
0 - 2 yrs
₹6L - ₹12L / yr
Python
Machine Learning (ML)
pandas
NumPy
Blockchain
+1 more

About Us

Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.


As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.


What We Build

  • Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
  • DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
  • ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
  • High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
  • Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.


Evaluation Process

  • HR Discussion – A brief conversation to understand your motivation and alignment with the role.
  • Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
  • Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
  • Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
  • Final Interview – A concluding round to explore your background, interests, and team fit in depth.
  • Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.


Job Description : Blockchain Data & ML Engineer


As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.


What You’ll Work On

  • Build and maintain ETL pipelines for ingesting and processing blockchain data.
  • Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
  • Evaluate model performance, tune hyperparameters, and document experimental results.
  • Develop monitoring tools to track model accuracy, data drift, and system health.
  • Collaborate with infrastructure and execution teams to integrate ML components into production systems.
  • Design and maintain databases and storage systems to efficiently manage large-scale datasets.
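
A hedged sketch of the ingestion end of such a pipeline: eth_getBlockByNumber is a standard Ethereum JSON-RPC method, but the endpoint URL and the flattened schema below are placeholders.

import requests

RPC_URL = "https://example-node.invalid"  # placeholder JSON-RPC endpoint

def get_latest_block(url: str) -> dict:
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByNumber",
        "params": ["latest", True],  # True => include full transaction objects
    }
    resp = requests.post(url, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]

block = get_latest_block(RPC_URL)

# Flatten into rows suitable for a warehouse or feature store.
rows = [
    {
        "block_number": int(block["number"], 16),
        "tx_hash": tx["hash"],
        "from_addr": tx["from"],
        "to_addr": tx.get("to"),
        "value_wei": int(tx["value"], 16),
    }
    for tx in block["transactions"]
]
print(f"Ingested {len(rows)} transactions from block {int(block['number'], 16)}")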


Ideal Traits

  • Strong in data structures, algorithms, and core CS fundamentals.
  • Proficiency in any programming language.
  • Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
  • Curiosity about how blockchain systems and crypto markets work under the hood.
  • Self-motivated, eager to experiment and learn in a dynamic environment.


Bonus Points For

  • Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
  • Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
  • Participation in hackathons or open-source contributions.


What You’ll Gain

  • Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
  • Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
  • Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
  • Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters


What We Value:

  • Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
  • Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
  • Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
  • Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.

Compensation:

  • INR 6 - 12 LPA
  • Performance Bonuses: Linked to contribution, delivery, and impact.



Read more
Bengaluru (Bangalore)
5 - 7 yrs
₹12L - ₹15L / yr
Enterprise Resource Planning (ERP)
Data Analytics
SAP
JD Edwards
ETL
+3 more

Location: Bangalore – Hebbal – 5 Days - WFO

Type:  Contract – 6 Months to start with, extendable

Experience Required: 5+ years in Data Analysis, with ERP migration experience


Key Responsibilities:

  • Analyze and map data from SAP to JD Edwards structures.
  • Define data transformation rules and business logic.
  • Assist with data extraction, cleansing, and enrichment.
  • Collaborate with technical teams to design and execute ETL processes.
  • Perform data validation and reconciliation before and after migration.
  • Work closely with business stakeholders to understand master and transactional data requirements.
  • Support the creation of reports to validate data accuracy in JDE.
  • Document data mapping, cleansing rules, and transformation processes.
  • Participate in testing cycles and assist with UAT data validation.
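
A hedged, simplified sketch of the mapping-and-reconciliation steps above, using pandas; the source fields shown are indicative examples only, and the actual mapping rules would come from the business analysis:

import pandas as pd

# Extract standing in for an SAP vendor master download (illustrative columns).
sap_vendors = pd.read_csv("sap_vendor_master.csv")

# Transformation rules: rename to target-side names and apply simple cleansing.
field_map = {"LIFNR": "address_number", "NAME1": "vendor_name", "LAND1": "country_code"}
jde_vendors = (
    sap_vendors.rename(columns=field_map)[list(field_map.values())]
    .assign(vendor_name=lambda df: df["vendor_name"].str.strip().str.upper())
    .drop_duplicates(subset=["address_number"])
)

# Reconciliation: record counts before and after transformation.
print("SAP source rows:", len(sap_vendors))
print("Rows prepared for JDE load:", len(jde_vendors))

jde_vendors.to_csv("jde_vendor_load.csv", index=False)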


Required Skills and Qualifications:

  • Strong experience in SAP ERP data models (FI, MM, SD, etc.).
  • Knowledge of JD Edwards EnterpriseOne data structure is a plus.
  • Proficiency in Excel, SQL, and data profiling tools.
  • Experience in data migration tools like SAP BODS, Talend, or Informatica.
  • Strong analytical, problem-solving, and documentation skills.
  • Excellent communication and collaboration skills.
  • ERP migration project experience is essential.


Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Indore
0 - 2 yrs
₹6L - ₹12L / yr
Blockchain
ETL
Artificial Intelligence (AI)
Generative AI
Python
+3 more

About Us

Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.


As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.


What We Build

  • Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
  • DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
  • ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
  • High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
  • Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.


Evaluation Process

  • HR Discussion – A brief conversation to understand your motivation and alignment with the role.
  • Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
  • Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
  • Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
  • Final Interview – A concluding round to explore your background, interests, and team fit in depth.
  • Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.


Blockchain Data & ML Engineer


As a Blockchain Data & ML Engineer, you’ll work on ingesting and modeling on-chain behavior, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.


What You’ll Work On

  • Build and maintain ETL pipelines for ingesting and processing blockchain data.
  • Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
  • Evaluate model performance, tune hyperparameters, and document experimental results.
  • Develop monitoring tools to track model accuracy, data drift, and system health.
  • Collaborate with infrastructure and execution teams to integrate ML components into production systems.
  • Design and maintain databases and storage systems to efficiently manage large-scale datasets.


Ideal Traits

  • Strong in data structures, algorithms, and core CS fundamentals.
  • Proficiency in any programming language
  • Curiosity about how blockchain systems and crypto markets work under the hood.
  • Self-motivated, eager to experiment and learn in a dynamic environment.


Bonus Points For

  • Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
  • Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
  • Participation in hackathons or open-source contributions.


What You’ll Gain

  • Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
  • Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
  • Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
  • Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters


What We Value:

  • Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
  • Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
  • Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
  • Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.


Compensation:

  • INR 6 - 12 LPA
  • Performance Bonuses: Linked to contribution, delivery, and impact.
Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Remote only
10 - 15 yrs
₹10L - ₹18L / yr
Solution architecture
Denodo
Data Virtualization
Data architecture
SQL
+5 more

Job Title : Solution Architect – Denodo

Experience : 10+ Years

Location : Remote / Work from Home

Notice Period : Immediate joiners preferred


Job Overview :

We are looking for an experienced Solution Architect – Denodo to lead the design and implementation of data virtualization solutions. In this role, you will work closely with cross-functional teams to ensure our data architecture aligns with strategic business goals. The ideal candidate will bring deep expertise in Denodo, strong technical leadership, and a passion for driving data-driven decisions.


Mandatory Skills : Denodo, Data Virtualization, Data Architecture, SQL, Data Modeling, ETL, Data Integration, Performance Optimization, Communication Skills.


Key Responsibilities :

  • Architect and design scalable data virtualization solutions using Denodo.
  • Collaborate with business analysts and engineering teams to understand requirements and define technical specifications.
  • Ensure adherence to best practices in data governance, performance, and security.
  • Integrate Denodo with diverse data sources and optimize system performance.
  • Mentor and train team members on Denodo platform capabilities.
  • Lead tool evaluations and recommend suitable data integration technologies.
  • Stay updated with emerging trends in data virtualization and integration.

Required Qualifications :

  • Bachelor’s degree in Computer Science, IT, or a related field.
  • 10+ Years of experience in data architecture and integration.
  • Proven expertise in Denodo and data virtualization frameworks.
  • Strong proficiency in SQL and data modeling.
  • Hands-on experience with ETL processes and data integration tools.
  • Excellent communication, presentation, and stakeholder management skills.
  • Ability to lead technical discussions and influence architectural decisions.
  • Denodo or data architecture certifications are a strong plus.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Shrutika SaileshKumar
Posted by Shrutika SaileshKumar
Remote, Bengaluru (Bangalore)
5 - 9 yrs
Best in industry
Python
SDET
BDD
SQL
Data Warehouse (DWH)
+2 more

Primary skill set: QA Automation, Python, BDD, SQL 

As Senior Data Quality Engineer you will:

  • Evaluate product functionality and create test strategies and test cases to assess product quality.
  • Work closely with the onshore and offshore teams.
  • Validate multiple reports against the databases by running medium to complex SQL queries.
  • Build a strong understanding of automation objects and integrations across various platforms and applications.
  • Work as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
  • Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
  • Be comfortable working in Linux/Windows environments and hybrid infrastructure models hosted on cloud platforms.
  • Establish processes and a tool set to maintain automation scripts and generate regular test reports.
  • Conduct peer reviews to provide feedback and make sure the test scripts are flawless.

Core/Must have skills:

  • Excellent understanding of and hands-on experience in ETL/DWH testing, preferably on Databricks, paired with Python experience.
  • Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
  • Clear and crisp communication and commitment towards deliverables.
  • Experience in Big Data testing will be an added advantage.
  • Knowledge of Spark and Scala, Hive/Impala, and Python will be an added advantage.

Good to have skills:

  • Test automation using BDD/Cucumber or TestNG, combined with strong hands-on experience in Java with Selenium, especially working experience with WebdriverIO.
  • Ability to effectively articulate technical challenges and solutions.
  • Work experience with qTest, Jira, and WebdriverIO.


Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Remote only
10 - 15 yrs
₹10L - ₹18L / yr
Denodo
Data Visualization
Data integration
ETL

Responsibilities

·        Design and architect data virtualization solutions using Denodo.

·        Collaborate with business analysts and data engineers to understand data requirements and translate them into technical specifications.

·        Implement best practices for data governance and security within Denodo environments.

·        Lead the integration of Denodo with various data sources, ensuring performance optimization.

·        Conduct training sessions and provide guidance to technical teams on Denodo capabilities.

·        Participate in the evaluation and selection of data technologies and tools.

·        Stay current with industry trends in data integration and virtualization.

 

Requirements

·        Bachelor's degree in Computer Science, Information Technology, or a related field.

·        10+ years of experience in data architecture, with a focus on Denodo solutions.

·        Strong knowledge of data virtualization principles and practices.

·        Experience with SQL and data modeling techniques.

·        Familiarity with ETL processes and data integration tools.

·        Excellent communication and presentation skills.

·        Ability to lead technical discussions and provide strategic insights.

·        Certifications related to Denodo or data architecture are a plus


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Annie Varghese
Posted by Annie Varghese
Bengaluru (Bangalore)
14 - 21 yrs
Best in industry
Python
snowflake
Amazon Redshift
ETL
SQL
+3 more

Role: Data Engineer (14+ years of experience)

Location: Whitefield, Bangalore

Mode of Work: Hybrid (3 days from office)

Notice period: Immediate / serving notice with 30 days left

Location: Candidate should be based out of Bangalore, as one round has to be taken face-to-face (F2F)


Job Summary:

Role and Responsibilities

● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.

● Design data models to support data analysis and reporting.

● Automate data engineering tasks using scripting languages and tools.

● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.

● Act as a bridge between the engineering and the business team in all areas related to Data.

● Automate monitoring and alerting mechanisms for data pipelines, products, and dashboards, and troubleshoot any issues; on-call requirements apply.

● SQL creation and optimization – including modularization and optimization, which may require creating views or tables in the sources.

● Defining best practices for data validation and automating as much as possible, aligning with enterprise standards.

● QA environment data management (e.g., test data management).

Qualifications

● 14+ years of experience as a Data engineer or related role.

● Experience with Agile engineering practices.

● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.

● Experience with SQL and NoSQL databases.

● Ability to work independently or as part of a team.

● Experience with cloud platforms, preferably AWS.

● Strong experience with data warehousing and data lake technologies (Snowflake)

● Expertise in data modelling

● Experience with ETL/ELT tools and methodologies.

● 5+ years of experience in application development including Python, SQL, Scala, or Java

● Experience working on real-time Data Streaming and Data Streaming platform.


NOTE: IT IS MANDATORY TO GIVE ONE TECHNICAL ROUND FACE TO FACE.

Read more
Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Gurugram
6 - 8 yrs
₹8L - ₹22L / yr
Automation
Docker
SQL
Amazon Web Services (AWS)
azure
+4 more

Role: Automation Tester – Data Engineering

Experience: 6+ years

Work Mode: Hybrid (2–3 days onsite/week)

Locations: Gurgaon

Notice Period: Immediate Joiners Preferred


Mandatory Skills:

  • Hands-on automation testing experience in Data Engineering or Data Warehousing
  • Proficiency in Docker
  • Experience working on any Cloud platform (AWS, Azure, or GCP)
  • Experience in ETL Testing is a must
  • Automation testing using Pytest or Scalatest
  • Strong SQL skills and data validation techniques
  • Familiarity with data processing tools such as ETL, Hadoop, Spark, Hive
  • Sound knowledge of SDLC and Agile methodologies
  • Ability to write efficient, clean, and maintainable test scripts
  • Strong problem-solving, debugging, and communication skills
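
A minimal sketch of the kind of automated data validation implied above (pytest and pandas assumed; the loader functions are hypothetical placeholders for real source/target reads):

import pandas as pd
import pytest


def load_source() -> pd.DataFrame:
    """Placeholder for reading from the source system (e.g., via SQL)."""
    return pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})


def load_target() -> pd.DataFrame:
    """Placeholder for reading the table produced by the ETL job."""
    return pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})


def test_row_counts_match():
    assert len(load_source()) == len(load_target())


def test_no_duplicate_keys():
    target = load_target()
    assert not target["id"].duplicated().any()


@pytest.mark.parametrize("column", ["id", "amount"])
def test_no_nulls_in_required_columns(column):
    assert load_target()[column].notna().all()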


Good to Have:

  • Exposure to additional test frameworks like Selenium, TestNG, or JUnit


Key Responsibilities:

  • Develop, execute, and maintain automation scripts for data pipelines
  • Perform comprehensive data validation and quality assurance
  • Collaborate with data engineers, developers, and stakeholders
  • Troubleshoot issues and improve test reliability
  • Ensure consistent testing standards across development cycles
Read more
Enqubes

Enqubes

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Remote only
7 - 10 yrs
₹10L - ₹15L / yr
SAP BODS
SAP HANA
HANA
ETL management
ETL
+3 more

Job Title: SAP BODS Developer

  • Experience: 7–10 Years
  • Location: Remote (India-based candidates only)
  • Employment Type: Permanent (Full-Time)
  • Salary Range: ₹20 – ₹25 LPA (Fixed CTC)


Required Skills & Experience:

- 7–10 years of hands-on experience as a SAP BODS Developer.

- Strong experience in S/4HANA implementation or upgrade projects with large-scale data migration.

- Proficient in ETL development, job optimization, and performance tuning using SAP BODS.

- Solid understanding of SAP data structures (FI, MM, SD, etc.) from a technical perspective.

- Skilled in SQL scripting, error resolution, and job monitoring.

- Comfortable working independently in a remote, spec-driven development environment.


Read more
Tecblic Private LImited
Ahmedabad
4 - 5 yrs
₹8L - ₹12L / yr
Microsoft Windows Azure
SQL
Python
PySpark
ETL
+2 more

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀


Job description

🔍 Job Title: Data Engineer

📍 Location: Ahmedabad

🚀 Work Mode: On-Site Opportunity

📅 Experience: 4+ Years

🕒 Employment Type: Full-Time

⏱️ Availability : Immediate Joiner Preferred


Join Our Team as a Data Engineer

We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.

As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.


Your Key Responsibilities

Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.

Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.

Implement data validation, transformation, and quality monitoring processes.

Collaborate with cross-functional teams to deliver impactful, data-driven solutions.

Proactively identify bottlenecks and optimize existing workflows and processes.

Provide guidance and mentorship to junior engineers in the team.


Skills & Expertise We’re Looking For

3+ years of hands-on experience in Data Engineering or related roles.

Strong expertise in Python and data pipeline design.

Experience working with Big Data tools like Hadoop, Spark, Hive.

Proficiency with SQL, NoSQL databases, and data warehousing solutions.

Solid experience in cloud platforms - Azure

Familiar with distributed computing, data modeling, and performance tuning.

Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.

Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.


Qualifications

Bachelor’s degree in Computer Science, Data Science, or a related field.

Read more