14+ ELT Jobs in India
Apply to 14+ ELT Jobs on CutShort.io. Find your next job, effortlessly. Browse ELT Jobs and apply today!

Role: Cleo EDI Solution Architect / Sr EDI Developer
Location : Remote
Start Date : ASAP
Cleo EDI is a niche technology that enables the integration of ERP systems with Transportation Management, Extended Supply Chain, and other applications.
Expertise in designing and developing end-to-end integration solutions, especially B2B integrations involving EDI (Electronic Data Interchange) and APIs.
Familiarity with Cleo Integration Cloud or similar EDI platforms.
Strong experience with Azure Integration Services, particularly:
- Azure Data Factory – for orchestrating data movement and transformation
- Azure Functions – for serverless compute tasks in integration pipelines
- Azure Logic Apps or Service Bus – for message handling and triggering workflows
Understanding of ETL/ELT processes and data mapping.
Solid grasp of EDI standards (e.g., X12, EDIFACT) and workflows.
Experience working with EDI developers and analysts to align business requirements with technical implementation.
Familiarity with Cleo EDI tools or similar platforms.
Develop and maintain EDI integrations using Cleo Integration Cloud (CIC), Cleo Clarify, or similar Cleo solutions.
Create, test, and deploy EDI maps for transactions such as 850 (Purchase Order), 810 (Invoice), and 856 (Advance Ship Notice), as well as other X12/EDIFACT documents (see the illustrative sketch after this list).
Configure trading partner setups, including communication protocols (AS2, SFTP, FTP, HTTPS).
Monitor EDI transaction flows, identify errors, troubleshoot, and implement fixes.
Collaborate with business analysts, ERP teams, and external partners to gather and analyze EDI requirements.
Document EDI processes, mappings, and configurations for ongoing support and knowledge sharing.
Provide timely support for EDI-related incidents, ensuring minimal disruption to business operations.
Participate in EDI onboarding projects for new trading partners and customers.
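For orientation only, here is a minimal Python sketch of what the X12 documents named above look like at the segment level. The sample interchange, the separators, and the helper function are fabricated for illustration; in practice Cleo Integration Cloud or Cleo Clarify performs this parsing and mapping through its own tooling.

```python
# Illustrative only: split a raw X12 interchange into segments and report the
# transaction set types it contains (850 = Purchase Order, 810 = Invoice, 856 = ASN).
# The sample data and separators below are fabricated.

SAMPLE_X12 = (
    "ISA*00*          *00*          *ZZ*SENDERID       *ZZ*RECEIVERID     "
    "*240101*1200*U*00401*000000001*0*P*>~"
    "GS*PO*SENDERID*RECEIVERID*20240101*1200*1*X*004010~"
    "ST*850*0001~"
    "BEG*00*NE*PO12345**20240101~"
    "SE*3*0001~GE*1*1~IEA*1*000000001~"
)

def transaction_sets(raw: str, segment_terminator: str = "~", element_separator: str = "*"):
    """Yield the transaction set ID (e.g. '850') of every ST segment in an interchange."""
    for segment in raw.split(segment_terminator):
        elements = segment.split(element_separator)
        if elements[0] == "ST" and len(elements) > 1:
            yield elements[1]

if __name__ == "__main__":
    print(list(transaction_sets(SAMPLE_X12)))  # ['850']
```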


About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. The team includes Ph.D.s, academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/ Azure/ GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources (see the sketch after this list)
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
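As a rough illustration of the pipeline work described above, here is a minimal Apache Airflow DAG sketch (Airflow is one of the workflow tools listed under Skills & Requirements) that chains extract, transform, and load steps. The DAG id, schedule, and task bodies are placeholders, not a description of any Moative pipeline.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+): a daily extract -> transform -> load chain.
# The dag_id, schedule, and callables are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from a source system")


def transform():
    print("clean and reshape the extracted records")


def load():
    print("write the transformed records to the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```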
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infra and data-related services on AWS (EC2, EMR, RDS, Redshift) and/ or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow).
- Experience with common relational (SQL), NoSQL, and graph databases.
- Strong experience with scripting languages: Python, PySpark, Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc
- Experience with big data tools (Spark, Kafka, etc) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep happens unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied at top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
Position Summary:
As a CRM ETL Developer, you will be responsible for the analysis, transformation, and integration of data from legacy and external systems into the CRM application. This includes developing ETL/ELT workflows, ensuring data quality through cleansing and survivorship rules, and supporting daily production loads. You will work in an Agile environment and play a vital role in building scalable, high-quality data integration solutions.
Key Responsibilities:
- Analyze data from legacy and external systems; develop ETL/ELT pipelines to ingest and process data.
- Cleanse, transform, and apply survivorship rules before loading data into the CRM platform (see the sketch after this list).
- Monitor, support, and troubleshoot production data loads (Tier 1 & Tier 2 support).
- Contribute to solution design, development, integration, and scaling of new/existing systems.
- Promote and implement best practices in data integration, performance tuning, and Agile development.
- Lead or support design reviews, technical documentation, and mentoring of junior developers.
- Collaborate with business analysts, QA, and cross-functional teams to resolve defects and clarify requirements.
- Deliver working solutions via quick POCs or prototypes for business scenarios.
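To make the survivorship-rule responsibility concrete, here is a small, hedged pandas sketch of one common rule: the most recent non-null value wins when the same customer arrives from several source systems. The column names, sources, and precedence are invented for illustration; a Siebel EIM implementation would express this through EIM mappings and SQL rather than Python.

```python
# Hedged sketch of a "most recent non-null value wins" survivorship rule.
# All data, column names, and sources below are fabricated.
import pandas as pd

records = pd.DataFrame(
    {
        "customer_id": [101, 101, 102],
        "source": ["legacy_crm", "billing", "billing"],
        "email": ["old@example.com", "new@example.com", None],
        "phone": [None, "555-0100", "555-0200"],
        "updated_at": pd.to_datetime(["2023-11-01", "2024-02-15", "2024-01-10"]),
    }
)

# Sort so the newest record per customer comes first, then take the first
# non-null value of each attribute (GroupBy.first skips nulls).
golden = (
    records.sort_values(["customer_id", "updated_at"], ascending=[True, False])
    .groupby("customer_id", as_index=False)
    .agg({"email": "first", "phone": "first"})
)
print(golden)  # one surviving "golden" record per customer
```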
Technical Skills:
- ETL/ELT Tools: 5+ years of hands-on experience in ETL processes using Siebel EIM.
- Programming & Databases: Strong SQL & PL/SQL development; experience with Oracle and/or SQL Server.
- Data Integration: Proven experience in integrating disparate data systems.
- Data Modelling: Good understanding of relational, dimensional modelling, and data warehousing concepts.
- Performance Tuning: Skilled in application and SQL query performance optimization.
- CRM Systems: Familiarity with Siebel CRM, Siebel Data Model, and Oracle SOA Suite is a plus.
- DevOps & Agile: Strong knowledge of DevOps pipelines and Agile methodologies.
- Documentation: Ability to write clear technical design documents and test cases.
Soft Skills & Attributes:
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal abilities.
- Experience working with cross-functional, globally distributed teams.
- Proactive mindset and eagerness to learn new technologies.
- Detail-oriented with a focus on reliability and accuracy.
Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Experience in Tier 1 & Tier 2 application support roles.
- Exposure to real-time data integration systems is an advantage.
The role reports to the Head of Customer Support, and the position holder is part of the Product Team.
Main objectives of the role
· Focus on customer satisfaction with the product and provide first-line support.
Specialisation
· Customer Support
· SaaS
· FMCG/CPG
Key processes in the role
· Build extensive knowledge of our SaaS product platform and support our customers in using it.
· Support end customers with complex questions.
· Provide extended, elaborate answers to business and “how to” questions from customers.
· Participate in ongoing education for Customer Support Managers.
· Collaborate and communicate with the Development teams, Product Support, and customers.
Requirements
· Bachelor’s degree in business, IT, Engineering or Economics.
· 4-8 years of experience in a similar role in the IT Industry.
· Solid knowledge of SaaS (Software as a Service).
· Multitasking is second nature to you, and you have a proactive, customer-first mindset.
· 3+ years of experience providing support for ERP systems, preferably SAP.
· Familiarity with ERP/SAP integration processes and data migration.
· Understanding of ERP/SAP functionalities, modules and data structures.
· Understanding of technical areas such as integrations (APIs, ETL, ELT), analysing logs, and identifying errors in logs.
· Experience reading code, changing configuration, and determining whether an issue is a development bug or a product bug.
· Profound understanding of the support processes.
· Should know where to route tickets and how to manage customer escalations.
· Outstanding customer service skills.
· Knowledge of Fast-Moving Consumer Goods (FMCG)/ Consumer Packaged Goods (CPG) industry/domain is preferable.
· Excellent verbal and written communication skills in English.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
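To make the Snowpipe/Tasks/Streams requirement concrete, below is a hedged sketch of the common stream-plus-task ELT pattern in Snowflake, submitted through the snowflake-connector-python package. All object names (RAW_ORDERS, ORDERS_STREAM, LOAD_ORDERS) and connection parameters are invented for illustration.

```python
# Sketch of a Snowflake stream + task ELT pattern via snowflake-connector-python.
# Object names and credentials are placeholders, not a real environment.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="my_user",         # placeholder
    password="***",         # placeholder
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    # Capture row-level changes on the raw landing table.
    "CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE RAW_ORDERS",
    # A scheduled task merges new rows into the curated table when the stream has data.
    """
    CREATE OR REPLACE TASK LOAD_ORDERS
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO CURATED.ORDERS (ORDER_ID, AMOUNT, LOADED_AT)
      SELECT ORDER_ID, AMOUNT, CURRENT_TIMESTAMP()
      FROM ORDERS_STREAM
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK LOAD_ORDERS RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```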
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.

Job Title : Senior AWS Data Engineer
Experience : 5+ Years
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Senior AWS Data Engineer with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Job Description:
An Azure Data Engineer is responsible for designing, implementing, and maintaining pipelines and ETL/ELT flow solutions on the Azure cloud platform. This role requires a strong understanding of database migration technologies and the ability to deploy and manage database solutions in the Azure cloud environment.
Key Skills:
· Minimum 3+ years of experience with data modeling, data warehousing, and building ETL pipelines.
· Must have firm knowledge of SQL, NoSQL, SSIS, SSRS, and ETL/ELT concepts.
· Should have hands-on experience in Databricks, ADF (Azure Data Factory), ADLS, Cosmos DB.
· Excel in the design, creation, and management of very large datasets.
· Detailed knowledge of cloud-based data warehouses, architecture, infrastructure components, ETL, and reporting analytics tools and environments.
· Skilled with writing, tuning, and troubleshooting SQL queries
· Experience with Big Data technologies such as Data storage, Data mining, Data analytics, and Data visualization.
· Should be familiar with programming and able to write and debug code in languages such as Node.js, Python, C#, .NET, or Java.
Technical Expertise and Familiarity:
- Cloud Technologies: Azure (ADF, ADB, Logic Apps, Azure SQL database, Azure Key Vaults, ADLS, Synapse)
- Database: CosmosDB, Document DB
- IDEs: Visual Studio, VS Code, MS SQL Server
- Data Modelling, ELT, ETL Methodology
- Creating and managing ETL/ELT pipelines based on requirements
- Build PowerBI dashboards and manage the datasets they require.
- Work with stakeholders to identify data structures needed for future use and perform any transformations, including aggregations.
- Build data cubes for real-time visualisation needs and CXO dashboards.
Required Tech Skills
- Microsoft PowerBI & DAX
- Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark (see the sketch after this list)
- Azure Synapse, Azure DataBricks, Azure HDInsight, Azure Data Factory
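As a small, hedged illustration of the Pandas/PyArrow portion of this stack, the sketch below aggregates a transactional extract into a Parquet file that a PowerBI dashboard could consume. The columns, values, and output path are made up for the example.

```python
# Illustrative only: aggregate a transactional extract with pandas and write it as
# Parquet (pyarrow is the default engine when installed). Data and paths are placeholders.
import pandas as pd

# Pretend extract; in practice this might come from ADLS, an Azure SQL query, or an API.
sales = pd.DataFrame(
    {
        "region": ["North", "North", "South", "South"],
        "order_date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-01-07", "2024-02-02"]),
        "amount": [120.0, 80.0, 200.0, 150.0],
    }
)

# Monthly revenue per region: a typical dashboard-ready aggregate.
monthly = (
    sales.assign(month=sales["order_date"].dt.to_period("M").astype(str))
    .groupby(["region", "month"], as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "revenue"})
)

monthly.to_parquet("monthly_revenue.parquet", index=False)
print(monthly)
```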

• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients’ Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT and data models
Skills - 50% of below:
• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities like data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe, etc.
• Expertise in data modeling, with a good understanding of modeling approaches like Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java, or PHP
• Experience in ETL/ELT, either via a code-first approach or using low-code tools like AWS Glue, AppFlow, Informatica, Talend, Matillion, Fivetran, etc.
• Experience with one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools like Tableau, Power BI, Domo, or any similar tool
• Experience with data virtualization tools like Trino, Starburst, Denodo, Data Virtuality, Dremio, etc.
• Certified SnowPro Advanced: Data Engineer is a must.
We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.
In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who:
- Has healthcare experience and is passionate about helping heal people,
- Loves working with data,
- Has an obsessive focus on data quality,
- Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
- Has strong data interrogation and analysis skills,
- Defaults to written communication and delivers clean documentation, and,
- Enjoys working with customers and problem solving for them.
A day in the life at Innovaccer:
- Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
- Measure and communicate impact to our customers.
- Enable customers to activate data themselves using SQL, BI tools, or APIs to answer the questions they have at speed.
What You Need:
- 4+ years of experience in a Data Engineering role, and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
- Intermediate to advanced level SQL programming skills.
- Data Analytics and Visualization (using tools like PowerBI)
- The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
- Ability to work in a fast-paced and agile environment.
- Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
What we offer:
- Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
- Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
- Health benefits: We cover health insurance for you and your loved ones.
- Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
- Pet-friendly office and open floor plan: No boring cubicles.
Experience Range : 2 Years - 10 Years
Function : Information Technology
Desired Skills :
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch at the end of this posting).
• Good experience with SQL databases; able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformations and aggregations.
• Good understanding of ELT architecture: business-rule processing and data extraction from a data lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills.
Education Type : Engineering
Degree / Diploma : Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering
Specialization / Subject : Any Specialisation
Job Type : Full Time
Job ID : 000018
Department : Software Development
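As a hedged illustration of the PySpark skills listed above, the sketch below expresses the same aggregation twice, once with DataFrame functions and once with Spark SQL. The data, table, and column names are placeholders.

```python
# PySpark sketch: one aggregation via the DataFrame API and via Spark SQL.
# Data, table, and column names are placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("elt_example").getOrCreate()

orders = spark.createDataFrame(
    [("North", "2024-01-05", 120.0),
     ("North", "2024-01-20", 80.0),
     ("South", "2024-01-07", 200.0)],
    ["region", "order_date", "amount"],
)

# DataFrame API: total and average order amount per region.
summary_df = orders.groupBy("region").agg(
    F.sum("amount").alias("total_amount"),
    F.avg("amount").alias("avg_amount"),
)

# Equivalent Spark SQL over a temporary view.
orders.createOrReplaceTempView("orders")
summary_sql = spark.sql(
    "SELECT region, SUM(amount) AS total_amount, AVG(amount) AS avg_amount "
    "FROM orders GROUP BY region"
)

summary_df.show()
summary_sql.show()
spark.stop()
```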