50+ ETL Jobs in India

We are looking for a Senior BI Engineer with an architect’s mindset and a thought-leadership approach to join our Data & Analytics team. This role goes beyond development—it is about end-to-end thinking, mentoring, and collaborating with a global team.
Key Responsibilities:
- Engineer Self-Service BI Solutions: Design and implement robust and intuitive Power BI models that empower superb reporting and strategic decisions.
- Data Warehouse Development and Integration: Leverage Snowflake and DBT to design and develop data warehouse tables, ensuring they are seamlessly incorporated into the Power BI ecosystem for comprehensive reporting.
- Simplify Complexity: Turn complex data and processes into intuitive and actionable assets.
- Enablement: Design solutions and processes that enable data engineers, BI developers, and analysts to accelerate the delivery of high-quality data assets.
- Performance Optimization: Ensure optimal performance of Power BI solutions
- Stakeholder Collaboration: Deliver consistently on internal partner requests
- Mentorship and Support: Mentor coworkers and empower business partners to build their own reports using our Power BI and Snowflake models.
- Continuous Improvement: Stay on the cutting edge of tools and tech to continuously enhance our Power BI/Snowflake capabilities
- Technical Documentation: Produce and maintain documentation that supports internal development alignment and stakeholder enablement.
Additional Qualifications:
- Power BI Version Control: Experience using Azure DevOps or Git for scalable development.
- Analytical Thinking: Strong analytical skills to interpret data and build models that yield actionable insights.
- SaaS Experience: Experience with subscription data in a SaaS environment is preferred.
- Python Experience: Preferred, for automating processes and working with the Power BI REST API (see the sketch below).
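For illustration, here is a minimal Python sketch of the kind of Power BI API automation this role hints at: it triggers a dataset refresh through the Power BI REST API using a service principal. The tenant details, workspace and dataset IDs, and credentials are placeholders, and the exact auth setup will depend on the tenant configuration.

```python
# Illustrative only: trigger a Power BI dataset refresh via the REST API.
# All IDs and credentials below are placeholders, not real values.
import msal
import requests

TENANT_ID = "<tenant-id>"          # placeholder
CLIENT_ID = "<app-client-id>"      # placeholder
CLIENT_SECRET = "<app-secret>"     # placeholder
WORKSPACE_ID = "<workspace-guid>"  # placeholder
DATASET_ID = "<dataset-guid>"      # placeholder

# Acquire an Azure AD token for the Power BI service (client-credentials flow).
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)["access_token"]

# Ask the Power BI service to start a refresh of the dataset.
url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)
response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()  # the service returns 202 Accepted when the refresh is queued
```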
About Us:
We are a dynamic SaaS company focused on delivering innovative solutions to our clients. Our team is passionate about leveraging data to drive business success. We offer a collaborative and inclusive work environment with opportunities for professional growth and development.
Why Join Us?
- Impactful Work: Contribute to projects that have a significant impact on our business and clients.
- Innovative Environment: Work with cutting-edge technologies and be part of innovative projects.
Job Title : Solution Architect – Denodo
Experience : 10+ Years
Location : Remote / Work from Home
Notice Period : Immediate joiners preferred
Job Overview :
We are looking for an experienced Solution Architect – Denodo to lead the design and implementation of data virtualization solutions. In this role, you will work closely with cross-functional teams to ensure our data architecture aligns with strategic business goals. The ideal candidate will bring deep expertise in Denodo, strong technical leadership, and a passion for driving data-driven decisions.
Mandatory Skills : Denodo, Data Virtualization, Data Architecture, SQL, Data Modeling, ETL, Data Integration, Performance Optimization, Communication Skills.
Key Responsibilities :
- Architect and design scalable data virtualization solutions using Denodo.
- Collaborate with business analysts and engineering teams to understand requirements and define technical specifications.
- Ensure adherence to best practices in data governance, performance, and security.
- Integrate Denodo with diverse data sources and optimize system performance.
- Mentor and train team members on Denodo platform capabilities.
- Lead tool evaluations and recommend suitable data integration technologies.
- Stay updated with emerging trends in data virtualization and integration.
Required Qualifications :
- Bachelor’s degree in Computer Science, IT, or a related field.
- 10+ Years of experience in data architecture and integration.
- Proven expertise in Denodo and data virtualization frameworks.
- Strong proficiency in SQL and data modeling.
- Hands-on experience with ETL processes and data integration tools.
- Excellent communication, presentation, and stakeholder management skills.
- Ability to lead technical discussions and influence architectural decisions.
- Denodo or data architecture certifications are a strong plus.

Primary skill set: QA Automation, Python, BDD, SQL
As Senior Data Quality Engineer you will:
- Evaluate product functionality and create test strategies and test cases to assess product quality.
- Work closely with the on-shore and the offshore team.
- Validate multiple reports against the databases by running medium-to-complex SQL queries (see the sketch after this list).
- Develop a strong understanding of automation objects and integrations across the various platforms and applications involved.
- Work as an individual contributor, identifying opportunities to improve performance and articulating the importance and advantages of those improvements to management.
- Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
- Be comfortable working in Linux/Windows environments and in hybrid infrastructure models hosted on cloud platforms.
- Establish processes and tools set to maintain automation scripts and generate regular test reports.
- Perform peer reviews to provide feedback and ensure the test scripts are flawless.
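As a hedged illustration of report-versus-database validation, the pytest sketch below compares report totals with an aggregate query against the source; sqlite3 and the tiny in-memory dataset stand in for the real warehouse and report feed.

```python
# Illustrative pytest check: validate report figures against the source database.
# sqlite3 and the hard-coded rows are stand-ins for the real warehouse and report extract.
import sqlite3

import pytest


@pytest.fixture()
def source_db():
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL);
        INSERT INTO orders VALUES (1, 'APAC', 120.0), (2, 'EMEA', 80.5), (3, 'APAC', 99.5);
        """
    )
    yield conn
    conn.close()


def report_totals_by_region():
    # Placeholder for the figures pulled from the report under test.
    return {"APAC": 219.5, "EMEA": 80.5}


def test_report_matches_source(source_db):
    # Aggregate query against the source of truth; real checks would be more complex.
    rows = source_db.execute(
        "SELECT region, ROUND(SUM(amount), 2) FROM orders GROUP BY region"
    ).fetchall()
    assert dict(rows) == report_totals_by_region()
```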
Core/Must have skills:
- Excellent understanding of and hands-on experience in ETL/DWH testing, preferably on Databricks, paired with Python experience.
- Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
- Clear and crisp communication and commitment towards deliverables.
- Experience in Big Data testing is an added advantage.
- Knowledge of Spark, Scala, Hive/Impala, and Python is an added advantage.
Good to have skills:
- Test automation using BDD/Cucumber or TestNG, combined with strong hands-on experience in Java with Selenium; working experience with WebDriver.IO is especially valued.
- Ability to effectively articulate technical challenges and solutions
- Work experience in qTest, Jira, WebDriver.IO

Senior Data Engineer – Hyderabad (Immediate Joiners)
Experience: 7+ years
We’re hiring Senior Data Engineers to develop high-volume, high-performance data integration solutions in a cloud-based environment. You'll design ETL pipelines, optimize queries, work on distributed systems, and support scalable data models to enable business decision-making across massive data lakes.
Mandatory Skills:
- SQL (Expert) – This is a key requirement.
- Query & Database Performance Tuning (Expert).
- ETL, Data Integrations & Transformations (Proficient).
- Python Scripting (Proficient).
- AWS Core Services – S3, Lambda, IAM (Intermediate).
Job Description: As a Data Engineer, you will be working in one of the world's largest cloud-based data lakes. You should be skilled in the architecture of data warehouse solutions for the Enterprise using multiple platforms (EMR, RDBMS, Columnar, Cloud). You should have extensive experience in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions and to build data sets that answer those questions. Above all, you should be passionate about working with huge data volumes and someone who loves to bring datasets together to answer business questions to drive Business growth.
Responsibilities
· Design and architect data virtualization solutions using Denodo.
· Collaborate with business analysts and data engineers to understand data requirements and translate them into technical specifications.
· Implement best practices for data governance and security within Denodo environments.
· Lead the integration of Denodo with various data sources, ensuring performance optimization.
· Conduct training sessions and provide guidance to technical teams on Denodo capabilities.
· Participate in the evaluation and selection of data technologies and tools.
· Stay current with industry trends in data integration and virtualization.
Requirements
· Bachelor's degree in Computer Science, Information Technology, or a related field.
· 10+ years of experience in data architecture, with a focus on Denodo solutions.
· Strong knowledge of data virtualization principles and practices.
· Experience with SQL and data modeling techniques.
· Familiarity with ETL processes and data integration tools.
· Excellent communication and presentation skills.
· Ability to lead technical discussions and provide strategic insights.
· Certifications related to Denodo or data architecture are a plus

Role: Data Engineer (14+ years of experience)
Location: Whitefield, Bangalore
Mode of Work: Hybrid (3 days from office)
Notice period: Immediate / serving notice with 30 days or less remaining
Note: Candidates should be based in Bangalore, as one interview round is conducted face-to-face
Role and Responsibilities
● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.
● Design data models to support data analysis and reporting.
● Automate data engineering tasks using scripting languages and tools.
● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.
● Act as a bridge between the engineering and the business team in all areas related to Data.
● Automate monitoring and alerting mechanisms for data pipelines, products, and dashboards, and troubleshoot any issues; includes on-call requirements.
● SQL creation and optimization, including modularization, which may require creating views and tables in the source systems.
● Define best practices for data validation and automate as much as possible, aligning with enterprise standards.
● Manage QA environment data (e.g., test data management).
Qualifications
● 14+ years of experience as a Data engineer or related role.
● Experience with Agile engineering practices.
● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.
● Experience with SQL and NoSQL databases.
● Ability to work independently or as part of a team.
● Experience with cloud platforms, preferably AWS.
● Strong experience with data warehousing and data lake technologies (Snowflake)
● Expertise in data modelling
● Experience with ETL/ELT tools and methodologies.
● 5+ years of experience in application development including Python, SQL, Scala, or Java
● Experience working with real-time data streaming and data streaming platforms.
NOTE: IT IS MANDATORY TO ATTEND ONE TECHNICAL ROUND FACE TO FACE.
Role: Automation Tester – Data Engineering
Experience: 6+ years
Work Mode: Hybrid (2–3 days onsite/week)
Locations: Gurgaon
Notice Period: Immediate Joiners Preferred
Mandatory Skills:
- Hands-on automation testing experience in Data Engineering or Data Warehousing
- Proficiency in Docker
- Experience working on any Cloud platform (AWS, Azure, or GCP)
- Experience in ETL Testing is a must
- Automation testing using Pytest or Scalatest
- Strong SQL skills and data validation techniques
- Familiarity with data processing tools such as ETL, Hadoop, Spark, Hive
- Sound knowledge of SDLC and Agile methodologies
- Ability to write efficient, clean, and maintainable test scripts
- Strong problem-solving, debugging, and communication skills
Good to Have:
- Exposure to additional test frameworks like Selenium, TestNG, or JUnit
Key Responsibilities:
- Develop, execute, and maintain automation scripts for data pipelines
- Perform comprehensive data validation and quality assurance (see the sketch after this list)
- Collaborate with data engineers, developers, and stakeholders
- Troubleshoot issues and improve test reliability
- Ensure consistent testing standards across development cycles
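As a hedged illustration of automated data validation for a pipeline, the sketch below uses pytest with a local PySpark session to assert basic quality rules on a pipeline output; the sample rows and column names are invented, and a real test would read the pipeline's actual target table.

```python
# Illustrative pytest + PySpark quality checks on a pipeline output.
# The local SparkSession and hard-coded rows stand in for the real job output.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


@pytest.fixture(scope="session")
def spark():
    session = (
        SparkSession.builder.master("local[1]")
        .appName("pipeline-output-checks")
        .getOrCreate()
    )
    yield session
    session.stop()


@pytest.fixture()
def output_df(spark):
    # Placeholder rows; a real test would read the pipeline's target table instead.
    return spark.createDataFrame(
        [(1, "2024-01-01", 120.0), (2, "2024-01-01", 80.5)],
        ["order_id", "load_date", "amount"],
    )


def test_no_null_keys(output_df):
    assert output_df.filter(F.col("order_id").isNull()).count() == 0


def test_amounts_are_positive(output_df):
    assert output_df.filter(F.col("amount") <= 0).count() == 0
```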
Job Title: SAP BODS Developer
- Experience: 7–10 Years
- Location: Remote (India-based candidates only)
- Employment Type: Permanent (Full-Time)
- Salary Range: ₹20 – ₹25 LPA (Fixed CTC)
Required Skills & Experience:
- 7–10 years of hands-on experience as a SAP BODS Developer.
- Strong experience in S/4HANA implementation or upgrade projects with large-scale data migration.
- Proficient in ETL development, job optimization, and performance tuning using SAP BODS.
- Solid understanding of SAP data structures (FI, MM, SD, etc.) from a technical perspective.
- Skilled in SQL scripting, error resolution, and job monitoring.
- Comfortable working independently in a remote, spec-driven development environment.

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀
Job description
🔍 Job Title: Data Engineer
📍 Location: Ahmedabad
🚀 Work Mode: On-Site Opportunity
📅 Experience: 4+ Years
🕒 Employment Type: Full-Time
⏱️ Availability : Immediate Joiner Preferred
Join Our Team as a Data Engineer
We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.
As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.
Your Key Responsibilities
Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.
Implement data validation, transformation, and quality monitoring processes.
Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
Proactively identify bottlenecks and optimize existing workflows and processes.
Provide guidance and mentorship to junior engineers in the team.
Skills & Expertise We’re Looking For
3+ years of hands-on experience in Data Engineering or related roles.
Strong expertise in Python and data pipeline design.
Experience working with Big Data tools like Hadoop, Spark, Hive.
Proficiency with SQL, NoSQL databases, and data warehousing solutions.
Solid experience with cloud platforms, particularly Azure.
Familiar with distributed computing, data modeling, and performance tuning.
Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
Strong analytical thinking, collaboration skills, excellent communication, and the ability to work independently or as part of a team.
Qualifications
Bachelor’s degree in Computer Science, Data Science, or a related field.

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 7+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
Position Summary:
As a CRM ETL Developer, you will be responsible for the analysis, transformation, and integration of data from legacy and external systems into the CRM application. This includes developing ETL/ELT workflows, ensuring data quality through cleansing and survivorship rules, and supporting daily production loads. You will work in an Agile environment and play a vital role in building scalable, high-quality data integration solutions.
Key Responsibilities:
- Analyze data from legacy and external systems; develop ETL/ELT pipelines to ingest and process data.
- Cleanse, transform, and apply survivorship rules before loading into the CRM platform.
- Monitor, support, and troubleshoot production data loads (Tier 1 & Tier 2 support).
- Contribute to solution design, development, integration, and scaling of new/existing systems.
- Promote and implement best practices in data integration, performance tuning, and Agile development.
- Lead or support design reviews, technical documentation, and mentoring of junior developers.
- Collaborate with business analysts, QA, and cross-functional teams to resolve defects and clarify requirements.
- Deliver working solutions via quick POCs or prototypes for business scenarios.
Technical Skills:
- ETL/ELT Tools: 5+ years of hands-on experience in ETL processes using Siebel EIM.
- Programming & Databases: Strong SQL & PL/SQL development; experience with Oracle and/or SQL Server.
- Data Integration: Proven experience in integrating disparate data systems.
- Data Modelling: Good understanding of relational, dimensional modelling, and data warehousing concepts.
- Performance Tuning: Skilled in application and SQL query performance optimization.
- CRM Systems: Familiarity with Siebel CRM, Siebel Data Model, and Oracle SOA Suite is a plus.
- DevOps & Agile: Strong knowledge of DevOps pipelines and Agile methodologies.
- Documentation: Ability to write clear technical design documents and test cases.
Soft Skills & Attributes:
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal abilities.
- Experience working with cross-functional, globally distributed teams.
- Proactive mindset and eagerness to learn new technologies.
- Detail-oriented with a focus on reliability and accuracy.
Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Experience in Tier 1 & Tier 2 application support roles.
- Exposure to real-time data integration systems is an advantage.
Key Responsibilities:
- Database Design & Development: Design, develop, and optimize relational database structures.
Technical Skills:
- Strong proficiency in SQL (MS SQL Server or other RDBMS).
- Experience with database indexing, partitioning, and query optimization.
- Knowledge of database security best practices.
- Familiarity with ETL processes, data warehousing, and reporting tools.
- Experience with Azure cloud-based databases.
- Scripting knowledge (Python, Shell, PowerShell) is an advantage.
Job Title: Tableau BI Developer
Years of Experience: 4–8 years
Engagement: $12 per hour (FTE)
Working Hours: 8 hours/day
Required Skills and Experience:
● 4–8 years of hands-on experience developing solutions using Tableau Desktop and Tableau Server/Tableau Cloud.
● Proven experience with embedding Tableau dashboards into portals, apps, or third-party systems using JavaScript API, REST API, and other embedding techniques.
● Proficient in writing complex SQL queries and working with large datasets.
● Strong experience with at least one RDBMS (e.g., Snowflake, Redshift, SQL Server, PostgreSQL, etc.).
● Familiarity with web technologies including JavaScript, HTML, and CSS for embedded visual customization.
● Experience working with data pipelines and ETL processes.
● Solid understanding of data visualization principles and storytelling.
● Ability to work independently and manage multiple projects with tight deadlines.
● Strong verbal and written communication skills, including experience working with non-technical stakeholders.
Company name: PulseData labs Pvt Ltd (captive Unit for URUS, USA)
About URUS
We are the URUS family (US), a global leader in products and services for Agritech.
SENIOR DATA ENGINEER
This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks and strong skills in SQL Server, SSIS and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes.
- Troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.
Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark and SQL.
- Migrate legacy SSIS packages to Databricks pipelines (see the sketch after this list).
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.
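As a hedged sketch of the kind of Databricks pipeline step this migration work implies, the example below reads a raw CSV drop, applies simple cleansing, and writes a Delta table. The paths, table name, and schema are assumptions; on Databricks the `spark` session is provided by the runtime.

```python
# Minimal Databricks-style pipeline step (illustrative; paths and table names are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks the runtime provides `spark`; building it here keeps the sketch self-contained.
spark = SparkSession.builder.appName("orders-bronze-to-silver").getOrCreate()

# Ingest the raw landing-zone files (placeholder path).
raw = spark.read.option("header", True).csv("/mnt/landing/orders/*.csv")

# Basic cleansing and typing, mirroring what a legacy SSIS data flow might have done.
silver = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)

# Persist as a Delta table for downstream reporting (Delta is the default table format on Databricks).
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```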
Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
- Collaborate with business users to understand data requirements for department driven reporting needs.
- Maintain existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
- Comfortable in entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.
Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.
This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.
Requirements
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
Job Title : Cognos BI Developer
Experience : 6+ Years
Location : Bangalore / Hyderabad (Hybrid)
Notice Period : Immediate Joiners Preferred (Candidates serving notice with 10–15 days left can be considered)
Interview Mode : Virtual
Job Description :
We are seeking an experienced Cognos BI Developer with strong data modeling, dashboarding, and reporting expertise to join our growing team. The ideal candidate should have a solid background in business intelligence, data visualization, and performance analysis, and be comfortable working in a hybrid setup from Bangalore or Hyderabad.
Mandatory Skills :
Cognos BI, Framework Manager, Cognos Dashboarding, SQL, Data Modeling, Report Development (charts, lists, cross tabs, maps), ETL Concepts, KPIs, Drill-through, Macros, Prompts, Filters, Calculations.
Key Responsibilities :
- Understand business requirements in the BI context and design data models using Framework Manager to transform raw data into meaningful insights.
- Develop interactive dashboards and reports using Cognos Dashboard.
- Identify and define KPIs and create reports to monitor them effectively.
- Analyze data and present actionable insights to support business decision-making.
- Translate business requirements into technical specifications and determine timelines for execution.
- Design and develop models in Framework Manager, publish packages, manage security, and create reports based on these packages.
- Develop various types of reports, including charts, lists, cross tabs, and maps, and design dashboards combining multiple reports.
- Implement reports using macros, prompts, filters, and calculations.
- Perform data warehouse development activities and ensure seamless data flow.
- Write and optimize SQL queries to investigate data and resolve performance issues.
- Utilize Cognos features such as master-detail reports, drill-throughs, bookmarks, and page sets.
- Analyze and improve ETL processes to enhance data integration.
- Apply technical enhancements to existing BI systems to improve their performance and usability.
- Possess solid understanding of database fundamentals, including relational and multidimensional database design.
- Hands-on experience with Cognos Data Modules (data modeling) and dashboarding.

A leader in telecom, fintech, and AI-led marketing automation.

We are looking for a talented MERN Developer with expertise in MongoDB/MySQL, Kubernetes, Python, ETL, Hadoop, and Spark. The ideal candidate will design, develop, and optimize scalable applications while ensuring efficient source code management and implementing Non-Functional Requirements (NFRs).
Key Responsibilities:
- Develop and maintain robust applications using MERN Stack (MongoDB, Express.js, React.js, Node.js).
- Design efficient database architectures (MongoDB/MySQL) for scalable data handling.
- Implement and manage Kubernetes-based deployment strategies for containerized applications.
- Ensure compliance with Non-Functional Requirements (NFRs), including source code management, development tools, and security best practices.
- Develop and integrate Python-based functionalities for data processing and automation.
- Work with ETL pipelines for smooth data transformations.
- Leverage Hadoop and Spark for processing and optimizing large-scale data operations.
- Collaborate with solution architects, DevOps teams, and data engineers to enhance system performance.
- Conduct code reviews, troubleshooting, and performance optimization to ensure seamless application functionality.
Required Skills & Qualifications:
- Proficiency in MERN Stack (MongoDB, Express.js, React.js, Node.js).
- Strong understanding of database technologies (MongoDB/MySQL).
- Experience working with Kubernetes for container orchestration.
- Hands-on knowledge of Non-Functional Requirements (NFRs) in application development.
- Expertise in Python, ETL pipelines, and big data technologies (Hadoop, Spark).
- Strong problem-solving and debugging skills.
- Knowledge of microservices architecture and cloud computing frameworks.
Preferred Qualifications:
- Certifications in cloud computing, Kubernetes, or database management.
- Experience in DevOps, CI/CD automation, and infrastructure management.
- Understanding of security best practices in application development.

What We’re Looking For:
- Strong experience in Python (3+ years).
- Hands-on experience with any database (SQL or NoSQL).
- Experience with frameworks like Flask, FastAPI, or Django.
- Knowledge of ORMs, API development, and unit testing (see the sketch after this list).
- Familiarity with Git and Agile methodologies.
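For illustration, a minimal sketch of the API-plus-unit-test combination this stack implies, using FastAPI and its bundled test client; the /items endpoint and its model are invented for the example.

```python
# Minimal FastAPI endpoint with a unit test (illustrative; the /items resource is invented).
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.post("/items")
def create_item(item: Item) -> dict:
    # A real service would persist the item through an ORM session here.
    return {"name": item.name, "price": item.price, "status": "created"}


def test_create_item() -> None:
    client = TestClient(app)
    response = client.post("/items", json={"name": "widget", "price": 9.99})
    assert response.status_code == 200
    assert response.json()["status"] == "created"
```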

- Strong Snowflake Cloud database development experience.
- Knowledge of Spark and Databricks is desirable.
- Strong technical background in data modelling, database design and optimization for data warehouses, specifically on column oriented MPP architecture
- Familiar with technologies relevant to data lakes such as Snowflake
- Candidate should have strong ETL & database design/modelling skills.
- Experience creating data pipelines
- Strong SQL skills, debugging knowledge, and performance tuning experience.
- Experience with Databricks / Azure is an add-on / good to have.
- Experience working with global teams and global application environments
- Strong understanding of SDLC methodologies, with a track record of high-quality deliverables and data quality; detailed technical design documentation experience desired
The role reports to the Head of Customer Support, and the position holder is part of the Product Team.
Main objectives of the role
· Focus on customer satisfaction with the product and provide the first-line support.
Specialisation
· Customer Support
· SaaS
· FMCG/CPG
Key processes in the role
· Build extensive knowledge of our SAAS product platform and support our customers in using it.
· Supporting end customers with complex questions.
· Providing detailed, well-explained answers to business and “how to” questions from customers.
· Participating in ongoing education for Customer Support Managers.
· Collaborate and communicate with the Development teams, Product Support and Customers
Requirements
· Bachelor’s degree in business, IT, Engineering or Economics.
· 4-8 years of experience in a similar role in the IT Industry.
· Solid knowledge of SaaS (Software as a Service).
· Multitasking is second nature to you, and you have a proactive, customer-first mindset.
· 3+ years of experience providing support for ERP systems, preferably SAP.
· Familiarity with ERP/SAP integration processes and data migration.
· Understanding of ERP/SAP functionalities, modules and data structures.
· Understanding of technical areas such as integrations (APIs, ETL, ELT), analysing logs, identifying errors in logs, etc.
· Experience in looking into code, changing configuration, and analysing if it's a development bug or a product bug.
· Profound understanding of the support processes.
· Should know where to route tickets further and know how to manage customer escalations.
· Outstanding customer service skills.
· Knowledge of Fast-Moving Consumer Goods (FMCG)/ Consumer Packaged Goods (CPG) industry/domain is preferable.
· Excellent verbal and written communication skills in the English language.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Able to handle team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind (see the sketch after this list).
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
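As a hedged illustration of the Streams/Tasks style of ELT listed in the mandatory skills, the sketch below uses the Snowflake Python connector to create a stream on a staging table and a scheduled task that merges captured changes; connection details, the warehouse, and all object names are placeholders, and the merge logic is deliberately simplified.

```python
# Illustrative Snowflake ELT setup via the Python connector.
# Connection details, warehouse, and table/stream/task names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<wh>", database="<db>", schema="<schema>",
)
cur = conn.cursor()

# Capture changes landing in the staging table.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE stg_orders")

# Merge captured changes on a schedule, but only when the stream has data.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = COMPUTE_WH   -- placeholder warehouse name
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      MERGE INTO dim_orders t
      USING orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK merge_orders_task RESUME")
cur.close()
conn.close()
```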
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.

Company Overview
We are a dynamic startup dedicated to empowering small businesses through innovative technology solutions. Our mission is to level the playing field for small businesses by providing them with powerful tools to compete effectively in the digital marketplace. Join us as we revolutionize the way small businesses operate online, bringing innovation and growth to local communities.
Job Description
We are seeking a skilled and experienced Data Engineer to join our team. In this role, you will develop systems on cloud platforms capable of processing millions of interactions daily, leveraging the latest cloud computing and machine learning technologies while creating custom in-house data solutions. The ideal candidate should have hands-on experience with SQL, PL/SQL, and any standard ETL tools. You must be able to thrive in a fast-paced environment and possess a strong passion for coding and problem-solving.
Required Skills and Experience
- Minimum 5 years of experience in software development.
- 3+ years of experience in data management and SQL expertise – PL/SQL, Teradata, and Snowflake experience strongly preferred.
- Expertise in big data technologies such as Hadoop, HiveQL, and Spark (Scala/Python).
- Expertise in cloud technologies – AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR).
- Experience with queuing systems (e.g., SQS, Kafka) and caching systems (e.g., Ehcache, Memcached).
- Experience with container management tools (e.g., Docker Swarm, Kubernetes).
- Familiarity with data stores, including at least one of the following: Postgres, MongoDB, Cassandra, or Redis.
- Ability to create advanced visualizations and dashboards to communicate complex findings (e.g., Looker Studio, Power BI, Tableau).
- Strong skills in manipulating and transforming complex datasets for in-depth analysis.
- Technical proficiency in writing code in Python and advanced SQL queries.
- Knowledge of AI/ML infrastructure, best practices, and tools is a plus.
- Experience in analyzing and resolving code issues.
- Hands-on experience with software architecture concepts such as Separation of Concerns (SoC) and micro frontends with theme packages.
- Proficiency with the Git version control system.
- Experience with Agile development methodologies.
- Strong problem-solving skills and the ability to learn quickly.
- Exposure to Docker and Kubernetes.
- Familiarity with AWS or other cloud platforms.
Responsibilities
- Develop and maintain our in-house search and reporting platform
- Create data solutions to complement core products to improve performance and data quality
- Collaborate with the development team to design, develop, and maintain our suite of products.
- Write clean, efficient, and maintainable code, adhering to coding standards and best practices.
- Participate in code reviews and testing to ensure high-quality code.
- Troubleshoot and debug application issues as needed.
- Stay up-to-date with emerging trends and technologies in the development community.
How to apply?
- If you are passionate about designing user-centric products and want to be part of a forward-thinking company, we would love to hear from you. Please send your resume, a brief cover letter outlining your experience and your current CTC (Cost to Company) as a part of the application.
Join us in shaping the future of e-commerce!

- A bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience working as a hands-on developer in Sybase, DB2, ETL technologies.
- Worked extensively on data integration, designing and developing reusable interfaces. Advanced experience in Python, DB2, Sybase, shell scripting, Unix, Perl scripting, DB platforms, and database design and modeling.
- Expert-level understanding of data warehouse and core database concepts and relational database design.
- Experience in writing stored procedures, optimization, and performance tuning. Strong technology acumen and a deep strategic mindset.
- Proven track record of delivering results
- Proven analytical skills and experience making decisions based on hard and soft data
- A desire and openness to learning and continuous improvement, both of yourself and your team members.
- Hands-on experience on development of APIs is a plus
- Good to have: experience with Business Intelligence tools, Source-to-Pay applications such as SAP Ariba, and Accounts Payable systems.
Skills Required
- Familiarity with Postgres and Python is a plus
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark (see the sketch after this list).
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
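As a hedged sketch of the Glue-plus-PySpark work described above, the skeleton below reads a catalog table, applies a simple transformation, and writes Parquet to S3; the database, table, and bucket names are placeholders for the real environment.

```python
# Illustrative AWS Glue job skeleton (PySpark): catalog source -> transform -> S3 target.
# Database, table, and bucket names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Switch to a Spark DataFrame for the transformation step.
df = source.toDF()
cleaned = (
    df.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
)

# Write the result to S3 as Parquet (placeholder bucket/prefix).
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```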
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Role - ETL Developer
Work Mode - Hybrid
Experience- 4+ years
Location - Pune, Gurgaon, Bengaluru, Mumbai
Required Skills - AWS, AWS Glue, Pyspark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience with AWS migration
- Experience with automated scripting and tracking KPIs/metrics for database performance
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus

TL;DR
Founding Software Engineer (Next.js / React / TypeScript) — ₹17,000–₹24,000 net ₹/mo — 100% remote (India) — ~40 h/wk — green-field stack, total autonomy, ship every week. If you can own the full lifecycle and prove impact every Friday, apply.
🏢 Mega Style Apartments
We rent beautifully furnished 1- to 4-bedroom flats that feel like home but run like a hotel—so travellers can land, unlock the door, and live like locals from hour one. Tech is now the growth engine, and you’ll be employee #1 in engineering, laying the cornerstone for a tech platform that will redefine the premium furnished apartment experience.
✨ Why This Role Rocks
💡 Green-field Everything
Choose the stack, CI, even the linter.
🎯 Visible Impact & Ambition
Every deploy reaches real guests this week. Lay rails for ML that can boost revenue 20%.
⏱️ Radical Autonomy
Plan sprints, own deploys; no committees.
- Direct line to decision-makers → zero red tape
- Modern DX: Next.js & React (latest stable), Tailwind, Prisma/Drizzle, Vercel, optional AI copilots – building mostly server-rendered, edge-ready flows.
- Async-first, with structured weekly 1-on-1s to ensure you’re supported, not micromanaged.
- Unmatched Career Acceleration: Build an entire tech foundation from zero, making decisions that will define your trajectory and our company's success.
🗓️ Your Daily Rhythm
- Morning: Check metrics, pick highest-impact task
- Day: Build → ship → measure
- Evening: 10-line WhatsApp update (done, next, blockers)
- Friday: Live demo of working software (no mock-ups)
📈 Success Milestones
- Week 1: First feature in production
- Month 1: Automation that saves ≥10 h/week for ops
- Month 3: Core platform stable; conversion up, load times down (aiming for <1s LCP); ready for future ML pricing (stretch goal: +20% revenue within 12 months).
🔑 What You’ll Own
- Ship guest-facing features with Next.js (App Router / RSC / Server Actions).
- Automate ops—dashboards & LLM helpers that delete busy-work.
- Full lifecycle: idea → spec → code → deploy → measure → iterate.
- Set up CI/CD & observability on Vercel; a dedicated half-day refactor slot each sprint keeps tech-debt low.
- Optimise for outcomes—conversion, CWV, security, reliability; laying the groundwork for future capabilities in dynamic pricing and guest personalization.
Prototype > promise. Results > hours-in-chair.
💻 Must-Have Skills
Frontend Focus:
- Next.js (App Router/RSC/Server Actions)
- React (latest stable), TypeScript
- Tailwind CSS + shadcn/ui
- State mgmt (TanStack Query / Zustand / Jotai)
Backend & DevOps Focus:
- Node.js APIs, Prisma/Drizzle ORM
- Solid SQL schema design (e.g., PostgreSQL)
- Auth.js / Better-Auth, web security best practices
- GitHub Flow, automated tests, CI, Vercel deploys
- Excellent English; explain trade-offs to non-tech peers
- Self-starter—comfortable as the engineer (for now)
🌱 Nice-to-Haves (Learn Here or Teach Us)
A/B testing & CRO, Python/basic ML, ETL pipelines, Advanced SEO & CWV, Payment APIs (Stripe, Merchant Warrior), n8n automation
🎁 Perks & Benefits
- 100% remote anywhere in 🇮🇳
- Flexible hours (~40 h/wk)
- 12 paid days off (holiday + sick)
- ₹1,700/mo health insurance reimbursement (post-probation)
- Performance bonuses for measurable wins
- 6-month paid probation → permanent role & full benefits (this is a full-time employment role)
- Blank-canvas stack—your decisions live on
- Equity is not offered at this time; we compensate via performance bonuses and a clear path for growth, with future leadership opportunities as the company and engineering team scales.
⏩ Hiring Process (7–10 Days, Fast & Fair)
All stages are async & remote.
- Apply: 5-min form + short quiz (approx. 15 min total)
- Test 1: TypeScript & logic (1 h)
- Test 2: Next.js / React / Node / SQL deep-dive (1 h)
- Final: AI Video interview (1 h)
🚫 Who Shouldn’t Apply
- Need daily hand-holding
- Prefer consensus to decisions
- Chase perfect code over shipped value
- “Move fast & learn” culture feels scary
🚀 Ready to Own the Stack?
If you read this and thought “Finally—no bureaucracy,” and you're ready to set the technical standard for a growing company, show us something you’ve built and apply here →


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling (see the sketch after this list)
· Integrate Airflow with cloud services, data lakes, and data warehouses
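As a hedged illustration of the orchestration described above, the sketch below is a minimal Airflow (2.4+) DAG with retries, task dependencies, and a failure callback; the callable bodies and the alerting hook are placeholders.

```python
# Illustrative Airflow DAG with retries, task dependencies, and a failure callback.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_failure(context):
    # Placeholder: push the failed task id to Slack, email, etc.
    print(f"Task failed: {context['task_instance'].task_id}")


def extract(**_):
    print("pull data from the source system")


def transform(**_):
    print("clean and reshape the extracted data")


def load(**_):
    print("load the result into the warehouse")


default_args = {
    "owner": "data-engineering",
    "retries": 2,                          # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_failure,
}

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # task dependencies
```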
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
- 8-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience in Azure and Snowflake testing is a plus
- Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus
- Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to the financial domain is considered a plus.
- Test data readiness (data quality) and address code or data issues
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify root cause of code/data issues and come up with a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
We’re looking for an experienced SQL Developer with 3+ years of hands-on experience to join our growing team. In this role, you’ll be responsible for designing, developing, and maintaining SQL queries, procedures, and data systems that support our business operations and decision-making processes. You should be passionate about data, highly analytical, and capable of working both independently and collaboratively with cross-functional teams.
Key Responsibilities:
Design, develop, and maintain complex SQL queries, stored procedures, functions, and views.
Optimize existing queries for performance and efficiency.
Collaborate with data analysts, developers, and stakeholders to understand requirements and translate them into robust SQL solutions.
Design and implement ETL processes to move and transform data between systems.
Perform data validation, troubleshooting, and quality checks.
Maintain and improve existing databases, ensuring data integrity, security, and accessibility.
Document code, processes, and data models to support scalability and maintainability.
Monitor database performance and provide recommendations for improvement.
Work with BI tools and support dashboard/report development as needed.
Requirements:
3+ years of proven experience as an SQL Developer or in a similar role.
Strong knowledge of SQL and relational database systems (e.g., MS SQL Server, PostgreSQL, MySQL, Oracle).
Experience with performance tuning and optimization.
Proficiency in writing complex queries and working with large datasets.
Experience with ETL tools and data pipeline creation.
Familiarity with data warehousing concepts and BI reporting.
Solid understanding of database security, backup, and recovery.
Excellent problem-solving skills and attention to detail.
Good communication skills and ability to work in a team environment.
Nice to Have:
Experience with cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
Knowledge of Python, Power BI, or other scripting/analytics tools.
Experience working in Agile or Scrum environments.

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python.
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy); see the sketch after this list.
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
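For illustration, here is a minimal pandas transformation step of the kind this role involves: it cleans a raw extract and produces a daily aggregate ready for loading. The file paths and column names are placeholders for the real source feed.

```python
# Illustrative pandas transformation step for a small ingestion pipeline.
# File paths and column names are placeholders.
import numpy as np
import pandas as pd

# Read a raw extract (placeholder path).
raw = pd.read_csv("data/raw/orders.csv", parse_dates=["order_date"])

# Basic cleansing: drop duplicate keys, coerce amounts, fill missing regions.
clean = (
    raw.drop_duplicates(subset="order_id")
    .assign(
        amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),
        region=lambda df: df["region"].fillna("UNKNOWN"),
    )
    .dropna(subset=["amount"])
)

# Simple derived metric, then a daily aggregate ready for loading.
clean["amount_log"] = np.log1p(clean["amount"])
clean["order_day"] = clean["order_date"].dt.date
daily = clean.groupby("order_day", as_index=False)["amount"].sum()

daily.to_parquet("data/processed/daily_orders.parquet", index=False)
```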
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You possess good communication skills, both written and verbal, enabling you to convey complex technical concepts clearly and effectively. You are a team player and a customer-focused, self-motivated, responsible individual who can work under pressure with a positive attitude. You must have experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids. You possess a strong work ethic, a positive attitude, and enthusiasm for embracing new challenges. You can multi-task and prioritize (good time management skills) and are willing to keep learning. You should be able to work independently with little or no supervision. You should be process-oriented, with a methodical approach and a quality-first mindset.
The ability to convert a client's business challenges and priorities into a winning proposal or bid through excellence in technical solutioning will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new-generation analytics platforms such as Microsoft Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate data services (e.g., Microsoft Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Ability to understand Conceptual/Logical/Physical Data Modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
What you will Bring
- 10+ years of working in data analytics and AI technologies from consulting, implementation and design perspectives
- Certifications in data engineering, analytics, cloud, or AI will be a definite advantage
- A Bachelor's degree in Engineering/Technology or an MCA from a reputed college is a must
- Prior experience of working as a solution architect during presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Job Summary:
As a Data Engineering Lead, your role will involve designing, developing, and implementing interactive dashboards and reports using data engineering tools. You will work closely with stakeholders to gather requirements and translate them into effective data visualizations that provide valuable insights. Additionally, you will be responsible for extracting, transforming, and loading data from multiple sources into Power BI, ensuring its accuracy and integrity. Your expertise in Power BI and data analytics will contribute to informed decision-making and support the organization in driving data-centric strategies and initiatives.
1) Required Experience: 6+ years
2) Lead Experience: 2+ years
3) Mandatory Skills: Power BI, SQL, Azure Data Factory
4) Budget Range: 28 - 32 LPA
5) Locations: Hyderabad, Indore and Ahmedabad
6) Immediate joiners preferable
7) A total of 4 interview rounds will be conducted; the candidate should attend one round face-to-face at the Hyderabad, Indore or Ahmedabad location
8) Candidates should be available to work from the office all 5 days
We are looking for you!
---> As an ideal candidate for the Data Engineering Lead position, you embody the qualities of a team player with a relentless get-it-done attitude. Your intellectual curiosity and customer focus drive you to continuously seek new ways to add value to your job accomplishments. You thrive under pressure, maintaining a positive attitude and understanding that your career is a journey. You are willing to make the right choices to support your growth. In addition to your excellent communication skills, both written and verbal, you have a proven ability to create visually compelling designs using tools like Power BI and Tableau that effectively communicate our core values.
---> You build high-performing, scalable, enterprise-grade applications and teams. Your creativity and proactive nature enable you to think differently, find innovative solutions, deliver high-quality outputs, and ensure customers remain referenceable. With over eight years of experience in data engineering, you possess a strong sense of self-motivation and take ownership of your responsibilities. You prefer to work independently with little to no supervision.
---> You are process-oriented, adopt a methodical approach, and demonstrate a quality-first mindset. You have led mid to large-size teams and accounts, consistently using constructive feedback mechanisms to improve productivity, accountability, and performance within the team. Your track record showcases your results-driven approach, as you have consistently delivered successful projects with customer case studies published on public platforms. Overall, you possess a unique combination of skills, qualities, and experiences that make you an ideal fit to lead our data engineering team(s). You value inclusivity and want to join a culture that empowers you to show up as your authentic self.
---> You know that success hinges on commitment, our differences make us stronger, and the finish line is always sweeter when the whole team crosses together. In your role, you should be driving the team using data, data, and more data. You will manage multiple teams, oversee agile stories and their statuses, handle escalations and mitigations, plan ahead, identify hiring needs, collaborate with recruitment teams for hiring, enable sales with pre-sales teams, and work closely with development managers/leads for solutioning and delivery statuses, as well as architects for technology research and solutions.
What You Will Do:
- Analyze Business Requirements.
- Analyze the Data Model and do GAP analysis with Business Requirements and Power BI.
- Design and Model Power BI schema.
- Transformation of Data in Power BI/SQL/ETL Tool.
- Create DAX formulas, reports, and dashboards.
- Write SQL queries and stored procedures.
- Design effective Power BI solutions based on business requirements.
- Manage a team of Power BI developers and guide their work.
- Integrate data from various sources into Power BI for analysis.
- Optimize performance of reports and dashboards for smooth usage.
- Collaborate with stakeholders to align Power BI projects with goals.
- Knowledge of Data Warehousing (must); Data Engineering is a plus
What we need?
• B.Tech in Computer Science or equivalent
• Minimum 6+ years of relevant experience
Founded in 2002, Zafin offers a SaaS product and pricing platform that simplifies core modernization for top banks worldwide. Our platform enables business users to work collaboratively to design and manage pricing, products, and packages, while technologists streamline core banking systems.
With Zafin, banks accelerate time to market for new products and offers while lowering the cost of change and achieving tangible business and risk outcomes. The Zafin platform increases business agility while enabling personalized pricing and dynamic responses to evolving customer and market needs.
Zafin is headquartered in Vancouver, Canada, with offices and customers around the globe including ING, CIBC, HSBC, Wells Fargo, PNC, and ANZ. Zafin is proud to be recognized as a top employer and certified Great Place to Work® in Canada, India and the UK.
Job Summary:
We are looking for a highly skilled and detail-oriented Data & Visualisation Specialist to join our team. The ideal candidate will have a strong background in Business Intelligence (BI), data analysis, and visualisation, with advanced technical expertise in Azure Data Factory (ADF), SQL, Azure Analysis Services, and Power BI. In this role, you will be responsible for performing ETL operations, designing interactive dashboards, and delivering actionable insights to support strategic decision-making.
Key Responsibilities:
· Azure Data Factory: Design, build, and manage ETL pipelines in Azure Data Factory to facilitate seamless data integration across systems.
· SQL & Data Management: Develop and optimize SQL queries for extracting, transforming, and loading data while ensuring data quality and accuracy.
· Data Transformation & Modelling: Build and maintain data models using Azure Analysis Services (AAS), optimizing for performance and usability.
· Power BI Development: Create, maintain, and enhance complex Power BI reports and dashboards tailored to business requirements.
· DAX Expertise: Write and optimize advanced DAX queries and calculations to deliver dynamic and insightful reports.
· Collaboration: Work closely with stakeholders to gather requirements, deliver insights, and help drive data-informed decision-making across the organization.
· Attention to Detail: Ensure data consistency and accuracy through rigorous validation and testing processes.
· Presentation & Reporting: Effectively communicate insights and updates to stakeholders, delivering clear and concise documentation.
Skills and Qualifications:
Technical Expertise:
· Proficient in Azure Data Factory for building ETL pipelines and managing data flows.
· Strong experience with SQL, including query optimization and data transformation.
· Knowledge of Azure Analysis Services for data modelling
· Advanced Power BI skills, including DAX, report development, and data modelling.
· Familiarity with Microsoft Fabric and Azure Analytics (a plus)
· Analytical Thinking: Ability to work with complex datasets, identify trends, and tackle ambiguous challenges effectively
Communication Skills:
· Excellent verbal and written communication skills, with the ability to convey complex technical information to non-technical stakeholders.
· Educational Qualification: Minimum of a Bachelor's degree, preferably in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or a related discipline
What’s in it for you
Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement.

Experience: 5-8 Years
Work Mode: Remote
Job Type: Full-time
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elastic Search, and AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture
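As a rough illustration of the Airflow-orchestrated ELT work described in the responsibilities above, the sketch below shows a minimal DAG skeleton. The DAG id, schedule, and task bodies are hypothetical; real tasks would call the Snowflake, S3, and Elastic Search clients rather than print statements.

```python
# Minimal Airflow DAG sketch -- dag_id, schedule, and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_staging(**context):
    # Placeholder: a real pipeline might pull documents from Elastic Search
    # or files from S3 into a staging area.
    print("extracting source data to staging")


def load_into_warehouse(**context):
    # Placeholder: a real pipeline might COPY staged files into Snowflake
    # and then run post-load data-quality checks.
    print("loading staged data into the warehouse")


with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # older Airflow 2.x releases use schedule_interval instead
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_staging", python_callable=extract_to_staging)
    load = PythonOperator(task_id="load_into_warehouse", python_callable=load_into_warehouse)

    extract >> load
```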
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- At least 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elastic Search and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Job Description :
As a Data & Analytics Architect, you will lead key data initiatives, including cloud transformation, data governance, and AI projects. You'll define cloud architectures, guide data science teams in model development, and ensure alignment with data architecture principles across complex solutions. Additionally, you will create and govern architectural blueprints, ensuring standards are met and promoting best practices for data integration and consumption.
Responsibilities :
- Play a key role in driving a number of data and analytics initiatives including cloud data transformation, data governance, data quality, data standards, CRM, MDM, Generative AI and data science.
- Define cloud reference architectures to promote reusable patterns and promote best practices for data integration and consumption.
- Guide the data science team in implementing data models and analytics models.
- Serve as a data science architect delivering technology and architecture services to the data science community.
- In addition, you will also guide application development teams in the data design of complex solutions, in a large data eco-system, and ensure that teams are in alignment with the data architecture principles, standards, strategies, and target states.
- Create, maintain, and govern architectural views and blueprints depicting the Business and IT landscape in its current, transitional, and future state.
- Define and maintain standards for artifacts containing architectural content within the operating model.
Requirements :
- Strong cloud data architecture knowledge (preference for Microsoft Azure)
- 8-10+ years of experience in data architecture, with proven experience in cloud data transformation, MDM, data governance, and data science capabilities.
- Design reusable data architecture and best practices to support batch/streaming ingestion; efficient batch, real-time, and near-real-time integration/ETL; integration of quality rules; and structuring of data for analytic consumption by end users.
- Ability to lead software evaluations including RFP development, capabilities assessment, formal scoring models, and delivery of executive presentations supporting a final recommendation.
- Well versed in the Data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Standards, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, non-traditional data and multi-media, ETL, ESB).
- Experience with cloud data technologies such as Azure Data Factory, Azure Data Fabric, Azure Storage, Azure Data Lake Storage, Azure Databricks, Azure AD, Azure ML, etc.
- Experience with big data technologies such as Cloudera, Spark, Sqoop, Hive, HDFS, Flume, Storm, and Kafka.
Role & Responsibilities
About the Role:
We are seeking a highly skilled Senior Data Engineer with 5-7 years of experience to join our dynamic team. The ideal candidate will have a strong background in data engineering, with expertise in data warehouse architecture, data modeling, ETL processes, and building both batch and streaming pipelines. The candidate should also possess advanced proficiency in Spark, Databricks, Kafka, Python, SQL, and Change Data Capture (CDC) methodologies.
Key responsibilities:
Design, develop, and maintain robust data warehouse solutions to support the organization's analytical and reporting needs.
Implement efficient data modeling techniques to optimize performance and scalability of data systems.
Build and manage data lakehouse infrastructure, ensuring reliability, availability, and security of data assets.
Develop and maintain ETL pipelines to ingest, transform, and load data from various sources into the data warehouse and data lakehouse.
Utilize Spark and Databricks to process large-scale datasets efficiently and in real-time.
Implement Kafka for building real-time streaming pipelines and ensure data consistency and reliability.
Design and develop batch pipelines for scheduled data processing tasks.
Collaborate with cross-functional teams to gather requirements, understand data needs, and deliver effective data solutions.
Perform data analysis and troubleshooting to identify and resolve data quality issues and performance bottlenecks.
Stay updated with the latest technologies and industry trends in data engineering and contribute to continuous improvement initiatives.
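Purely as an illustration of the Kafka streaming responsibilities above, here is a minimal PySpark Structured Streaming sketch. The broker, topic, and paths are hypothetical, and a production CDC pipeline would add schema parsing, merge/upsert logic, and proper checkpoint management.

```python
# Minimal PySpark Structured Streaming sketch -- broker, topic, and paths are hypothetical.
# Assumes the spark-sql-kafka connector package is available on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("example-kafka-stream")
    .getOrCreate()
)

# Read a Kafka topic as a streaming DataFrame (keys/values arrive as raw bytes).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "orders_cdc")                  # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))
)

# Land the raw events as Parquet; downstream jobs would parse and merge them.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/tmp/landing/orders_cdc")            # hypothetical landing path
    .option("checkpointLocation", "/tmp/checkpoints/orders_cdc")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```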
Profile: Product Support Engineer
🔴 Experience: 1 year as Product Support Engineer.
🔴 Location: Mumbai (Andheri).
🔴 5 days of working from office.
Skills Required:
🔷 Experience in providing support for ETL or data warehousing is preferred.
🔷 Good understanding of Unix and database concepts.
🔷 Experience working with SQL and No-SQL databases and writing simple queries to get data for debugging issues.
🔷 Being able to creatively come up with solutions for various problems and implement them.
🔷 Experience working with REST APIs and debugging requests and responses using tools like Postman.
🔷 Quick troubleshooting and diagnosing skills.
🔷 Knowledge of customer success processes.
🔷 Experience in document creation.
🔷 High availability for fast response to customers.
🔷 Language knowledge required in one of Node.js, Python, or Java.
🔷 Background in AWS, Docker, Kubernetes, Networking - an advantage.
🔷 Experience in SAAS B2B software companies - an advantage.
🔷 Ability to join the dots around multiple events occurring concurrently and spot patterns.
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of Data Structure
● Good at SQL query/optimization
● Strong fundamentals of OOPs programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
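A small, hedged PySpark sketch of the S3 and Data Lake batch work implied by the skills above; the bucket, prefix, and column names are hypothetical, and on AWS Glue the same logic would typically run inside a Glue job.

```python
# Minimal PySpark batch sketch -- S3 paths and column names are hypothetical.
# Assumes the hadoop-aws / s3a connector is configured for the Spark session.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-s3-batch").getOrCreate()

# Read raw JSON events from a landing prefix in S3 (hypothetical bucket).
raw = spark.read.json("s3a://example-data-lake/raw/events/")

# Basic cleanup and a simple daily aggregate.
clean = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)
daily_counts = clean.groupBy("event_date", "event_type").agg(F.count("*").alias("events"))

# Write the result back to S3 as partitioned Parquet, ready for Athena or warehouse queries.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-data-lake/curated/daily_event_counts/")
)

spark.stop()
```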
Job Description for QA Engineer:
- 6-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience with Azure and Snowflake testing is a plus
- Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus
- Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle
- Experience with JIRA and Xray defect management tools is good to have.
- Exposure to the financial domain is considered a plus
- Testing data readiness (data quality) and addressing code or data issues (a minimal reconciliation sketch follows this list)
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify root cause of code/data issues and come up with a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
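As referenced in the data-readiness item above, a minimal reconciliation sketch is shown below; the connection helpers, table name, and check are hypothetical placeholders for the real source system and Snowflake target.

```python
# Minimal ETL reconciliation sketch -- get_source_connection / get_target_connection
# and the table name are hypothetical placeholders.


def row_count(conn, table: str) -> int:
    """Run a simple COUNT(*) through any DB-API style connection."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]


def reconcile(source_conn, target_conn, table: str) -> None:
    """Compare row counts between source and target and fail loudly on a mismatch."""
    src = row_count(source_conn, table)
    tgt = row_count(target_conn, table)
    if src != tgt:
        raise AssertionError(f"Row-count mismatch for {table}: source={src}, target={tgt}")
    print(f"{table}: {src} rows match between source and target")


# Usage sketch (connections would come from the source database driver and the
# Snowflake connector respectively):
# reconcile(get_source_connection(), get_target_connection(), "TRADES")
```

Real test suites would extend this with column-level checksums, null/duplicate checks, and tolerance thresholds agreed with the business.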
Key Attributes include:
- Team player with professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
Senior ETL developer in SAS
We are seeking a skilled and experienced ETL Developer with strong SAS expertise to join our growing Data Management team in Kolkata. The ideal candidate will be responsible for designing, developing, implementing, and maintaining ETL processes to extract, transform, and load data from various source systems into the Banking data warehouse and other data repositories of BFSI. This role requires a strong understanding of Banking data warehousing concepts, ETL methodologies, and proficiency in SAS programming for data manipulation and analysis.
Responsibilities:
• Design, develop, and implement ETL solutions using industry best practices and tools, with a strong focus on SAS.
• Develop and maintain SAS programs for data extraction, transformation, and loading.
• Work with source system owners and data analysts to understand data requirements and translate them into ETL specifications.
• Build and maintain data pipelines for the Banking database to ensure data quality, integrity, and consistency.
• Perform data profiling, data cleansing, and data validation to ensure accuracy and reliability of data.
• Troubleshoot and resolve the Bank's ETL-related issues, including data quality problems and performance bottlenecks.
• Optimize ETL processes for performance and scalability.
• Document ETL processes, data flows, and technical specifications.
• Collaborate with other team members, including data architects, data analysts, and business users.
• Stay up-to-date with the latest SAS related ETL technologies and best practices, particularly within the banking and financial services domain.
• Ensure compliance with data governance policies and security standards.
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience as an ETL Developer, preferably within the banking or financial services industry.
• Strong proficiency in SAS programming for data manipulation and ETL processes.
• Experience with other ETL tools (e.g., Informatica PowerCenter, DataStage, Talend) is a plus.
• Solid understanding of data warehousing concepts, including dimensional modeling (star schema, snowflake schema).
• Experience working with relational databases (e.g., Oracle, SQL Server) and SQL.
• Familiarity with data quality principles and practices.
• Excellent analytical and problem-solving skills.
• Strong communication and interpersonal skills.
• Ability to work independently and as part of a team.
• Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
• Understanding of regulatory requirements in the banking sector (e.g., RBI guidelines) is an advantage.
Preferred Skills:
• Experience with cloud-based data warehousing solutions (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Knowledge of big data technologies (e.g., Hadoop, Spark).
• Experience with agile development methodologies.
• Relevant certifications (e.g., SAS Certified Professional).
What We Offer:
• Competitive salary and benefits package.
• Opportunity to work with cutting-edge technologies in a dynamic environment.
• Exposure to the banking and financial services domain.
• Professional development and growth opportunities.
• A collaborative and supportive work culture.
Job Title: Data Analytics Engineer
Experience: 3 to 6 years
Location: Gurgaon (Hybrid)
Employment Type: Full-time
Job Description:
We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and Data Warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses like Amazon Redshift, and experience with Data Lakes using S3. A foundational understanding of Apache Parquet and Python is also desirable.
Key Responsibilities:
1. Data Pipeline Development & Maintenance
- Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose.
- Ensure seamless data replication and transformation across multiple systems.
- Implement and optimize CDC-based data pipelines from various source systems.
2. Data Layering & Warehouse Management
- Implement Bronze, Silver, and Gold layer architectures to optimize data workflows.
- Design and manage data pipelines for structured and unstructured data.
- Ensure data integrity and quality within Redshift and other analytical data stores.
3. Database Management & SQL Development
- Write, optimize, and troubleshoot complex SQL queries for data warehouses like Redshift.
- Design and implement data models that support business intelligence and analytics use cases.
4. Data Lakes & Storage Optimization
- Work with AWS S3-based Data Lakes to store and manage large-scale datasets.
- Optimize data ingestion and retrieval using Apache Parquet.
5. Data Integration & Automation
- Integrate diverse data sources into a centralized analytics platform.
- Automate workflows to improve efficiency and reduce manual effort.
- Leverage Python for scripting, automation, and data manipulation where necessary.
6. Performance Optimization & Monitoring
- Monitor data pipelines for failures and implement recovery strategies.
- Optimize data flows for better performance, scalability, and cost-effectiveness.
- Troubleshoot and resolve ETL and data replication issues proactively.
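As a loose illustration of the Bronze-to-Silver layering and Parquet usage described in the responsibilities above, here is a minimal sketch; the paths, column names, and rules are hypothetical, and the production pipelines would be driven by Qlik Replicate/Compose and land in Redshift or S3 rather than local files.

```python
# Minimal Bronze -> Silver sketch -- file paths and column names are hypothetical.
# Assumes pandas with the pyarrow engine installed for Parquet support.
import pandas as pd

# Bronze: raw CDC rows exactly as replicated from the source system.
bronze = pd.read_parquet("bronze/orders/")   # hypothetical local path or s3:// URI

# Silver: de-duplicated, typed, business-ready rows.
silver = (
    bronze.assign(order_date=lambda d: pd.to_datetime(d["change_ts"]).dt.date)
          .sort_values("change_ts")
          .drop_duplicates(subset=["order_id"], keep="last")  # keep the latest CDC image
          .query("op != 'D'")                                 # drop deleted records ('op' is the CDC flag)
          .astype({"amount": "float64"})
)

# Write a partitioned Parquet dataset that downstream Gold-layer models can query.
silver.to_parquet("silver/orders/", index=False, partition_cols=["order_date"])
```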
Technical Expertise Required:
- 3 to 6 years of experience in Data Engineering, ETL Development, or related roles.
- Hands-on experience with Qlik Replicate & Qlik Compose for data integration.
- Strong SQL expertise, with experience in writing and optimizing queries for Redshift.
- Experience working with Bronze, Silver, and Gold layer architectures.
- Knowledge of Change Data Capture (CDC) pipelines from multiple sources.
- Experience working with AWS S3 Data Lakes.
- Experience working with Apache Parquet for data storage optimization.
- Basic understanding of Python for automation and data processing.
- Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced, agile environment.
Preferred Qualifications:
- Experience in performance tuning and cost optimization in Redshift.
- Familiarity with big data technologies such as Spark or Hadoop.
- Understanding of data governance and security best practices.
- Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI.
Work-life balance / Startup / Learning / Good environment
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Position Overview
We are seeking an experienced SAP Cutover/Data Migration Consultant with over 5 years of expertise in managing end-to-end cutover activities and data migration processes in SAP implementation or upgrade projects. The ideal candidate will have a deep understanding of SAP data structures, migration tools, and methodologies, along with exceptional project management and collaboration skills.
Key Responsibilities :
Cutover Planning and Execution
Develop detailed cutover plans, including timelines, dependencies, roles, and responsibilities.
Coordinate with business, technical, and functional teams to ensure seamless execution of cutover activities.
Identify risks and develop mitigation strategies for a smooth transition to the production environment.
Execute and monitor the cutover plan during go-live, ensuring minimal business disruption.
Data Migration Management
Lead the end-to-end SAP data migration process, including extraction, transformation, cleansing, validation, and loading.
Work closely with business stakeholders to define data migration scope, strategies, and rules.
Develop and execute data mapping and transformation scripts using tools like LSMW, BODS, or SAP Migration Cockpit.
Ensure data quality and integrity by performing rigorous testing and validation activities.
Testing and Validation
Collaborate with functional and technical teams to define and execute data migration test plans.
Perform mock migrations, reconciliation, and post-load validation to ensure successful data loads.
Resolve data discrepancies and provide root cause analysis during testing phases.
Documentation and Reporting
Prepare and maintain detailed documentation, including cutover plans, data migration scripts, and issue logs.
Provide regular status updates to project stakeholders on cutover and migration progress.
Stakeholder Collaboration
Act as a liaison between business users, technical teams, and project managers to ensure alignment of migration activities.
Conduct workshops and training sessions for business users on data readiness and cutover processes.
Required Skills & Qualifications
Education: Bachelor’s degree in Computer Science, Information Technology, or related field.
Experience: Minimum 5+ years of experience in SAP cutover planning and data migration.
Technical Skills:
Expertise in SAP data migration tools such as LSMW, SAP BODS, SAP Data Migration Cockpit, or custom ETL tools.
Strong knowledge of SAP modules (e.g., SD, MM, FICO) and their data structures.
Proficiency in data extraction, transformation, and loading techniques.
Experience with S/4HANA migrations and understanding of HANA-specific data structures (preferred).
Hands-on experience in managing legacy system data extraction and reconciliation processes.
Soft Skills:
Strong analytical, organizational, and problem-solving skills.
Excellent communication and interpersonal skills to collaborate with cross-functional teams.
Ability to manage multiple priorities and deliver within tight timelines.
Preferred Certifications:
SAP Certified Application Associate – Data Migration.
SAP Activate Project Manager Certification (preferred).
Skills & Requirements
Cutover planning, execution, SAP data migration, LSMW, SAP BODS, SAP Migration Cockpit, ETL tools, SAP SD, SAP MM, SAP FICO, data extraction, data transformation, data loading, S/4HANA migration, HANA data structures, Legacy system data reconciliation, Analytical skills, Communication skills.

Job Summary:
We are seeking a skilled Senior Tableau Developer to join our data team. In this role, you will design and build interactive dashboards, collaborate with data teams to deliver impactful insights, and optimize data pipelines using Airflow. If you are passionate about data visualization, process automation, and driving business decisions through analytics, we want to hear from you.
Key Responsibilities:
- Develop and maintain dynamic Tableau dashboards and visualizations to provide actionable business insights.
- Partner with data teams to gather reporting requirements and translate them into effective data solutions.
- Ensure data accuracy by integrating various data sources and optimizing data pipelines.
- Utilize Airflow for task orchestration, workflow scheduling, and monitoring.
- Enhance dashboard performance by streamlining data processing and improving query efficiency.
Requirements:
- 5+ years of hands-on experience in Tableau development.
- Proficiency in Airflow for building and automating data pipelines.
- Strong skills in data transformation, ETL processes, and data modeling.
- Solid understanding of SQL and database management.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
Nice to Have:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with programming languages such as Python or R.
Why Join Us?
- Work on impactful data projects with a talented and collaborative team.
- Opportunity to innovate and shape data visualization strategies.
- Competitive compensation and professional growth opportunities

Job Title : Sr. Data Engineer
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2-11 PM
Availability : Immediate
Job Description :
- We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
- The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
- You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift.
- Strong debugging, collaboration, and communication skills are essential.

We are seeking a highly skilled and experienced Offshore Data Engineer . The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!

We are looking for a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. You will play a pivotal role in optimizing data flow, ensuring scalability, and enabling seamless access to structured and unstructured data across the organization. This role requires technical expertise in Python, SQL, ETL/ELT frameworks, and cloud data warehouses, along with strong collaboration skills to partner with cross-functional teams.
Company: BigThinkCode Technologies
URL:
Location: Chennai (Work from office / Hybrid)
Experience: 4 - 6 years
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to process structured and unstructured data.
- Optimize and manage SQL queries for performance and efficiency in large-scale datasets.
- Experience working with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake) for analytics and reporting.
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
- Experience implementing solutions for streaming data (e.g., Apache Kafka, AWS Kinesis) is preferred but not mandatory.
- Ensure data quality, governance, and security across pipelines and storage systems.
- Document architectures, processes, and workflows for clarity and reproducibility.
Required Technical Skills:
- Proficiency in Python for scripting, automation, and pipeline development.
- Expertise in SQL (complex queries, optimization, and database design).
- Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, dbt, AWS Glue).
- Experience working with structured data (RDBMS) and unstructured data (JSON, Parquet, Avro).
- Familiarity with cloud-based data warehouses (Redshift, BigQuery, Snowflake).
- Knowledge of version control systems (e.g., Git) and CI/CD practices.
Preferred Qualifications:
- Experience with streaming data technologies (e.g., Kafka, Kinesis, Spark Streaming).
- Exposure to cloud platforms (AWS, GCP, Azure) and their data services.
- Understanding of data modelling (dimensional, star schema) and optimization techniques.
Soft Skills:
- Team player with a collaborative mindset and ability to mentor junior engineers.
- Strong stakeholder management skills to align technical solutions with business goals.
- Excellent communication skills to explain technical concepts to non-technical audiences.
- Proactive problem-solving and adaptability in fast-paced environments.
If interested, apply / reply by sharing your updated profile to connect and discuss.
Regards
We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.
Location - Pune (Hybrid 3 days)
Responsibilities:
Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.
Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.
Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.
Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.
Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.
Troubleshoot and resolve technical issues related to Power BI dashboards and reports.
Provide technical guidance and mentorship to junior team members.
Stay abreast of the latest trends and technologies in the Power BI ecosystem.
Ensure data security, governance, and compliance with industry best practices.
Contribute to the development and improvement of the organization's data and analytics strategy.
May lead and mentor a team of junior Power BI developers.
Qualifications:
8-12 years of experience in Business Intelligence and Data Analytics.
Proven expertise in Power BI development, including DAX, advanced data modeling techniques.
Strong SQL skills, including writing complex queries, stored procedures, and views.
Experience with ETL/ELT processes and tools.
Experience with data warehousing concepts and methodologies.
Excellent analytical, problem-solving, and communication skills.
Strong teamwork and collaboration skills.
Ability to work independently and proactively.
Bachelor's degree in Computer Science, Information Systems, or a related field preferred.