50+ ETL Jobs in India
Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

Position Overview: We are looking for an experienced and highly skilled Senior Data Engineer to join our team and help design, implement, and optimize data systems that support high-end analytical solutions for our clients. As a customer-centric Data Engineer, you will work closely with clients to understand their business needs and translate them into robust, scalable, and efficient technical solutions. You will be responsible for end-to-end data modelling, integration workflows, and data transformation processes while ensuring security, privacy, and compliance. In this role, you will also leverage the latest advancements in artificial intelligence, machine learning, and large language models (LLMs) to deliver high-impact solutions that drive business success. The ideal candidate will have a deep understanding of data infrastructure, optimization techniques, and cost-effective data management.
Key Responsibilities:
• Customer Collaboration:
– Partner with clients to gather and understand their business requirements, translating them into actionable technical specifications.
– Act as the primary technical consultant to guide clients through data challenges and deliver tailored solutions that drive value.
• Data Modeling & Integration:
– Design and implement scalable, efficient, and optimized data models to support business operations and analytical needs.
– Develop and maintain data integration workflows to seamlessly extract, transform, and load (ETL) data from various sources into data repositories (a minimal sketch of such a step follows this list).
– Ensure smooth integration between multiple data sources and platforms, including cloud and on-premise systems.
• Data Processing & Optimization:
– Develop, optimize, and manage data processing pipelines to enable real-time and batch data processing at scale.
– Continuously evaluate and improve data processing performance, optimizing for throughput while minimizing infrastructure costs.
• Data Governance & Security:
– Implement and enforce data governance policies and best practices, ensuring data security, privacy, and compliance with relevant industry regulations (e.g., GDPR, HIPAA).
– Collaborate with security teams to safeguard sensitive data and maintain privacy controls across data environments.
• Cross-Functional Collaboration:
– Work closely with data engineers, data scientists, and business analysts to ensure that the data architecture aligns with organizational objectives and delivers actionable insights.
– Foster collaboration across teams to streamline data workflows and optimize solution delivery.
• Leveraging Advanced Technologies:
– Utilize AI, machine learning models, and large language models (LLMs) to automate processes, accelerate delivery, and provide smart, data-driven solutions to business challenges.
– Identify opportunities to apply cutting-edge technologies to improve the efficiency, speed, and quality of data processing and analytics.
• Cost Optimization:
– Proactively manage infrastructure and cloud resources to optimize throughput while minimizing operational costs.
– Make data-driven recommendations to reduce infrastructure overhead and increase efficiency without sacrificing performance.
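To make the ETL responsibility above concrete, here is a minimal, illustrative Python sketch of an extract-transform-load step using pandas and SQLAlchemy; the file name, column names, and connection string are assumptions for illustration, not details from the posting.

```python
# Minimal ETL sketch (illustrative only): extract from a CSV export,
# apply simple transformations, and load into a warehouse staging table.
import pandas as pd
from sqlalchemy import create_engine

def run_etl(source_csv: str, warehouse_url: str) -> int:
    # Extract: read the raw export (hypothetical file and columns).
    orders = pd.read_csv(source_csv)

    # Transform: standardise column names, drop duplicates, derive revenue.
    orders.columns = [c.strip().lower() for c in orders.columns]
    orders = orders.drop_duplicates(subset=["order_id"])
    orders["revenue"] = orders["quantity"] * orders["unit_price"]

    # Load: append into the warehouse staging table.
    engine = create_engine(warehouse_url)
    orders.to_sql("stg_orders", engine, if_exists="append", index=False)
    return len(orders)

if __name__ == "__main__":
    rows = run_etl("orders_export.csv", "postgresql://user:pass@warehouse/analytics")
    print(f"Loaded {rows} rows")
```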
Qualifications:
• Experience:
– Proven experience (5+ years) as a Data Engineer or similar role, designing and implementing data solutions at scale.
– Strong expertise in data modelling, data integration (ETL), and data transformation processes.
– Experience with cloud platforms (AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).
• Technical Skills:
– Advanced proficiency in SQL, data modelling tools (e.g., Erwin, PowerDesigner), and data integration frameworks (e.g., Apache NiFi, Talend).
– Strong understanding of data security protocols, privacy regulations, and compliance requirements.
– Experience with data storage solutions (e.g., data lakes, data warehouses, NoSQL, relational databases).
• AI & Machine Learning Exposure:
– Familiarity with leveraging AI and machine learning technologies (e.g., TensorFlow, PyTorch, scikit-learn) to optimize data processing and analytical tasks.
– Ability to apply advanced algorithms and automation techniques to improve business processes.
• Soft Skills:
– Excellent communication skills to collaborate with clients, stakeholders, and cross-functional teams.
– Strong problem-solving ability with a customer-centric approach to solution design.
– Ability to translate complex technical concepts into clear, understandable terms for non-technical audiences.
• Education:
– Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field (or equivalent practical experience).
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance for spouses, kids, and parents.
- PF/ESI or equivalent
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially!
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 120+ people from around the world who are radically open-minded and believe in excellence, respecting one another, and pushing our boundaries further than ever before.

About Us:
At Vahan, we are building India’s first AI-powered recruitment marketplace for India’s 300-million-strong blue-collar workforce, opening doors to economic opportunities and brighter futures. Already India’s largest recruitment platform, Vahan is supported by marquee investors like Khosla Ventures, Bharti Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook. Our customers include names like Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious: to become the go-to platform for blue-collar professionals worldwide, empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives worldwide, creating a future where everyone has access to economic prosperity. If our vision excites you, Vahan might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. If this sounds like your kind of journey, dive into the details and see where you can make your mark.
What you will be doing:
- Architect and Implement Data Infrastructure: Design, build, and maintain robust and scalable data pipelines and a data warehouse/lake solution using open-source and cloud-based technologies, optimized for both high-frequency small file and large file data ingestion, and real-time data streams. This includes implementing efficient mechanisms for handling high volumes of data arriving at frequent intervals.
- Develop and Optimize Data Processes: Create custom tools, primarily using Python, for data validation, processing, analysis, and automation. Continuously improve ETL/ELT processes for efficiency, reliability, and scalability. This includes building processes to bridge gaps between different databases and data sources, ensuring data consistency and accessibility. This also includes processing and integrating data from streaming sources (a brief data-validation sketch follows this list).
- Lead and Mentor: Collaborate with product, engineering, and business teams to understand data requirements and provide data-driven solutions. Mentor and guide junior data engineers (as the team grows) and foster a culture of data excellence.
- Data Quality and Governance: Proactively identify and address data quality issues. Implement and maintain robust data quality monitoring, alerting, and measurement systems to ensure the accuracy, completeness, and consistency of our data assets. Implement and enforce data governance and security best practices, taking proactive ownership.
- Research: Evaluate and adapt newer technologies to suit evolving requirements.
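As a brief, non-authoritative illustration of the Python data-validation tooling mentioned above, the sketch below runs a few basic quality checks on a batch of ingested records; the column names and checks are assumptions, not Vahan's actual schema.

```python
# Hedged sketch of a simple data-validation helper for a batch of records.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues (empty = clean)."""
    issues = []
    if df["candidate_id"].isna().any():
        issues.append("null candidate_id values found")
    if df.duplicated(subset=["candidate_id", "job_id"]).any():
        issues.append("duplicate (candidate_id, job_id) rows found")
    applied_at = pd.to_datetime(df["applied_at"], errors="coerce")
    if applied_at.isna().any():
        issues.append("unparseable applied_at timestamps found")
    if (applied_at > pd.Timestamp.now()).any():
        issues.append("applied_at timestamps in the future")
    return issues
```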
You will thrive in this role if you:
- Are a Hands-On Technical Leader: You possess deep technical expertise in data engineering and are comfortable leading by example, diving into code, and setting technical direction.
- Are a Startup-Minded Problem Solver: You thrive in a fast-paced, dynamic environment, are comfortable with ambiguity, and are eager to build from the ground up. You proactively identify and address challenges.
- Are a Collaborative Communicator: You can effectively communicate complex technical concepts to both technical and non-technical audiences and build strong relationships with stakeholders.
- Are a Strategic Thinker: You can think ahead and architect long-lasting systems.
At Vahan, you’ll have the opportunity to make a real impact in a sector that touches millions of lives. We’re committed not only to advancing the livelihoods of our workforce but also to taking care of the people who make this mission possible. Here’s what we offer:
- Unlimited PTO: Trust and flexibility to manage your time in the way that works best for you.
- Comprehensive Medical Insurance: We’ve got you covered with plans designed to support you and your loved ones.
- Monthly Wellness Leaves: Regular time off to recharge and focus on what matters most.
- Competitive Pay: Your contributions are recognized and rewarded with a compensation package that reflects your impact.
Join us, and be part of something bigger—where your work drives real, positive change in the world.
- Job Requirements: A Bachelor’s (required) or Master’s degree (preferred) in Computer Science, Engineering, or a related discipline.
- 12+ years of experience in IT, with at least 5+ years working with Informatica.
- 10 years of experience in software architecture and development, using any data integration tool in a technical role across architecture, design and development.
- Hands-on experience in designing integration workflows with API integrations (REST, SOAP) and webhooks (a rough sketch follows this list).
- Knowledge of emerging technologies and trends such as Lakehouse Architecture, Cloud Architecture, Microservices, etc.
- Capabilities in API management, API modeling, and scripting/coding languages, along with experience with relational databases.
- Demonstrated ability in developing and successfully executing plans for projects including an ability to oversee projects from conception to completion.
- Should have experience interacting with senior management and business stakeholders.
- A structured, analytical way of working and a strong ability to deliver presentations.
- Ability to work with cross-functional teams in dynamic situations with short timelines and strict deadlines
- Good experience in Agile/Scrum techniques
- Knowledge about Continuous Integration & Continuous Deployment (CI/CD) would be beneficial.
- Proficiency in wiki documentation, Lucid Charts, PowerPoint presentations and requirement specification.
- Desired Qualifications: IICS Advanced-level certified candidates are preferred.
- Basic understanding of coding (e.g., Java, Python) would be beneficial.
- Familiarity with data integration tools (e.g., Informatica, Tray.io, AWS Glue, DBT, etc.) can aid in troubleshooting and analysis.
- Extensive knowledge of database technologies (e.g., Snowflake, Postgres, MS-SQL Server, etc.)
- Familiarity with version control systems, such as Git, and project management tools like Jira.
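For the REST/webhook integration experience called out above, here is a rough, hedged Python sketch of one such flow: pull records from a REST endpoint and forward each one to a webhook. The URLs, token, and payload shape are hypothetical.

```python
# Hedged REST-to-webhook sketch: fetch records and forward them onward.
import requests

def sync_records(source_url: str, webhook_url: str, token: str) -> int:
    resp = requests.get(
        source_url,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json().get("items", [])  # "items" is an assumed payload key

    for record in records:
        requests.post(webhook_url, json=record, timeout=30).raise_for_status()
    return len(records)
```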
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect and develop robust applications.
You will work closely with business and product teams to build applications that provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.
Develop systems by applying software development principles and clean-code practices so that they are scalable, secure, highly resilient, and low-latency.
You should be open to working in a start-up environment and have the confidence to deal with complex issues, keeping focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of Database concepts: SOQL, SOSL and SQL
● Hands-on experience in API integration using SOAP, REST API, and GraphQL (a rough Python/SOQL-over-REST sketch follows this list)
● Experience with ETL tools, data migration, and data governance
● Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, and Bitbucket
● Should have worked with at least one programming language - Java, Python, C++ - and have a good understanding of data structures
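As a rough illustration of the REST-style API integration and SOQL skills listed above, the sketch below queries Salesforce's REST query endpoint from Python; the API version, instance URL, access token, and object fields are placeholders, and this is not presented as the team's actual integration approach.

```python
# Illustrative SOQL-over-REST call against the Salesforce query endpoint.
import requests

def fetch_open_cases(instance_url: str, access_token: str) -> list[dict]:
    soql = "SELECT Id, Subject, Status FROM Case WHERE IsClosed = false"
    resp = requests.get(
        f"{instance_url}/services/data/v59.0/query",  # API version is an assumption
        params={"q": soql},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]
```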
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python (a small load-step illustration follows this list).
- Work extensively on Google Cloud Platform (GCP) services such as:
- Dataflow for real-time and batch data processing
- Cloud Functions for lightweight serverless compute
- BigQuery for data warehousing and analytics
- Cloud Composer for orchestration of data workflows (based on Apache Airflow)
- Google Cloud Storage (GCS) for managing data at scale
- IAM for access control and security
- Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
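To illustrate one of the pipeline steps above, here is a small, hedged sketch that loads a CSV file from Google Cloud Storage into BigQuery using the google-cloud-bigquery client; the bucket, dataset, and table names are placeholders.

```python
# Hedged sketch: load a CSV from GCS into a BigQuery table.
from google.cloud import bigquery

def load_gcs_csv_to_bq(uri: str, table_id: str) -> int:
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # wait for the load job to complete
    return client.get_table(table_id).num_rows

# Example (placeholder names):
# load_gcs_csv_to_bq("gs://my-bucket/events/2024-01-01.csv",
#                    "my_project.analytics.events")
```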
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
Job Title: Informatica Developer (Mid-Senior)
Location: Bangalore
Job Type: 6+ months, extendable; may turn into C2H (contract-to-hire)
Work mode: Hybrid
Experience Required: 7+ years
Job Description:
We are looking for an experienced Informatica Developer (Mid-Senior) to join our team for a 6-month contract-to-hire opportunity. The ideal candidate will have extensive experience in Informatica, strong knowledge of Unix, and advanced proficiency in SQL. You will play a key role in designing, developing, and implementing ETL processes for our data integration needs.
Key Responsibilities:
Design, develop, test, and deploy ETL solutions using Informatica PowerCenter or other related tools.
Develop complex SQL queries for data manipulation, transformation, and reporting.
Work with Unix-based environments for automation and scripting tasks.
Participate in troubleshooting, debugging, and performance optimization of ETL processes.
Work with cross-functional teams to ensure timely and quality delivery of projects.
Ensure adherence to coding standards and best practices for Informatica and ETL processes.
Provide support in migration, deployment, and continuous improvement of data integration processes.
Collaborate with business analysts and data teams to gather requirements and translate them into technical solutions.
Prepare technical documentation for developed solutions and ETL workflows.
Skills Required:
Strong hands-on experience with Informatica (PowerCenter or related tools).
Strong Unix skills, including shell scripting and environment management.
Advanced knowledge of SQL for data querying and manipulation (especially with large data sets).
Good understanding of data warehousing concepts and ETL frameworks.
Experience in performance tuning of ETL processes.
Ability to work in an agile environment with fast-paced project requirements.
Education & Experience:
Bachelor's degree in Computer Science, Information Technology, or a related field.
7+ years of relevant experience in Informatica development, SQL, and Unix scripting.
Teradata Developer Job Description
A Teradata Developer is responsible for designing, developing, and implementing data warehousing solutions using Teradata. Here's a brief overview:
Key Responsibilities
- Data Warehousing: Design and develop data warehousing solutions using Teradata.
- ETL Development: Develop ETL (Extract, Transform, Load) processes to load data into Teradata.
- SQL Development: Write complex SQL queries to support business intelligence and reporting.
- Performance Optimization: Optimize Teradata database performance for fast query execution.
- Data Modeling: Design and implement data models to support business requirements.
Technical Skills
- Teradata: Strong understanding of Teradata database management system.
- SQL: Proficiency in writing complex SQL queries.
- ETL: Experience with ETL tools like Informatica PowerCenter or Teradata PT.
- Data Warehousing: Knowledge of data warehousing concepts and best practices.
- Data Modeling: Understanding of data modeling concepts and techniques.
Responsibility of / Expectations from the Role
1. Driving Informatica changes and supporting production issues.
2. Customer interaction as required.
3. Requirement gathering and analysis.
4. Resolution of server-related and GUI-related issues.
5. Taking end-to-end ownership of Informatica changes.


About Us
Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.
As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.
What We Build
- Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
- DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
- ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
- High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains, tuned for high-frequency trading (HFT) and real-time response.
- Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.
Evaluation Process
- HR Discussion – A brief conversation to understand your motivation and alignment with the role.
- Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
- Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
- Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
- Final Interview – A concluding round to explore your background, interests, and team fit in depth.
- Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.
Job Description : Blockchain Data & ML Engineer
As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.
What You’ll Work On
- Build and maintain ETL pipelines for ingesting and processing blockchain data.
- Assist in designing, training, and validating machine learning models for prediction and anomaly detection (a toy sketch follows this list).
- Evaluate model performance, tune hyperparameters, and document experimental results.
- Develop monitoring tools to track model accuracy, data drift, and system health.
- Collaborate with infrastructure and execution teams to integrate ML components into production systems.
- Design and maintain databases and storage systems to efficiently manage large-scale datasets.
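As a toy, non-authoritative sketch of the anomaly-detection work described above, the snippet below fits a scikit-learn IsolationForest on simple per-transaction features; the feature names and contamination rate are assumptions, not Alfred Capital's actual models.

```python
# Toy anomaly-detection sketch over hypothetical per-transaction features.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_anomalous_txns(txns: pd.DataFrame) -> pd.DataFrame:
    features = txns[["value_eth", "gas_used", "n_logs"]]  # assumed columns
    model = IsolationForest(contamination=0.01, random_state=42)
    txns = txns.copy()
    txns["is_anomaly"] = model.fit_predict(features) == -1  # -1 marks outliers
    return txns[txns["is_anomaly"]]
```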
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
- Curiosity about how blockchain systems and crypto markets work under the hood.
- Self-motivated, eager to experiment and learn in a dynamic environment.
Bonus Points For
- Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
- Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
- Participation in hackathons or open-source contributions.
What You’ll Gain
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.

Location: Bangalore – Hebbal – 5 Days - WFO
Type: Contract – 6 Months to start with, extendable
Experience Required: 5+ years in Data Analysis, with ERP migration experience
Key Responsibilities:
- Analyze and map data from SAP to JD Edwards structures.
- Define data transformation rules and business logic.
- Assist with data extraction, cleansing, and enrichment.
- Collaborate with technical teams to design and execute ETL processes.
- Perform data validation and reconciliation before and after migration.
- Work closely with business stakeholders to understand master and transactional data requirements.
- Support the creation of reports to validate data accuracy in JDE.
- Document data mapping, cleansing rules, and transformation processes.
- Participate in testing cycles and assist with UAT data validation.
Required Skills and Qualifications:
- Strong experience in SAP ERP data models (FI, MM, SD, etc.).
- Knowledge of JD Edwards EnterpriseOne data structure is a plus.
- Proficiency in Excel, SQL, and data profiling tools.
- Experience in data migration tools like SAP BODS, Talend, or Informatica.
- Strong analytical, problem-solving, and documentation skills.
- Excellent communication and collaboration skills.
- ERP migration project experience is essential.
DataStage Developer Job Description
A DataStage Developer is responsible for designing, developing, and implementing data integration solutions using IBM InfoSphere DataStage. Here's a brief overview:
Key Responsibilities
- Data Integration: Design and develop data integration jobs using DataStage to extract, transform, and load (ETL) data from various sources.
- Job Development: Develop and test DataStage jobs to meet business requirements.
- Data Transformation: Use DataStage transformations to cleanse, aggregate, and transform data.
- Performance Optimization: Optimize DataStage jobs for performance and scalability.
- Troubleshooting: Troubleshoot DataStage issues and resolve data integration problems.
Technical Skills
- DataStage: Strong understanding of IBM InfoSphere DataStage and its components.
- ETL: Experience with ETL concepts and data integration best practices.
- Data Transformation: Knowledge of data transformation techniques and DataStage transformations.
- SQL: Familiarity with SQL and database concepts.
- Data Modeling: Understanding of data modeling concepts and data warehousing.

Job Title : Solution Architect – Denodo
Experience : 10+ Years
Location : Remote / Work from Home
Notice Period : Immediate joiners preferred
Job Overview :
We are looking for an experienced Solution Architect – Denodo to lead the design and implementation of data virtualization solutions. In this role, you will work closely with cross-functional teams to ensure our data architecture aligns with strategic business goals. The ideal candidate will bring deep expertise in Denodo, strong technical leadership, and a passion for driving data-driven decisions.
Mandatory Skills : Denodo, Data Virtualization, Data Architecture, SQL, Data Modeling, ETL, Data Integration, Performance Optimization, Communication Skills.
Key Responsibilities :
- Architect and design scalable data virtualization solutions using Denodo.
- Collaborate with business analysts and engineering teams to understand requirements and define technical specifications.
- Ensure adherence to best practices in data governance, performance, and security.
- Integrate Denodo with diverse data sources and optimize system performance.
- Mentor and train team members on Denodo platform capabilities.
- Lead tool evaluations and recommend suitable data integration technologies.
- Stay updated with emerging trends in data virtualization and integration.
Required Qualifications :
- Bachelor’s degree in Computer Science, IT, or a related field.
- 10+ Years of experience in data architecture and integration.
- Proven expertise in Denodo and data virtualization frameworks.
- Strong proficiency in SQL and data modeling.
- Hands-on experience with ETL processes and data integration tools.
- Excellent communication, presentation, and stakeholder management skills.
- Ability to lead technical discussions and influence architectural decisions.
- Denodo or data architecture certifications are a strong plus.

Primary skill set: QA Automation, Python, BDD, SQL
As Senior Data Quality Engineer you will:
- Evaluate product functionality and create test strategies and test cases to assess product quality.
- Work closely with the on-shore and the offshore team.
- Work on validating multiple reports against the databases by running medium to complex SQL queries (see the pytest-style sketch after this list).
- Build a strong understanding of automation objects and integrations across various platforms and applications.
- Act as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
- Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
- Comfortable working on Linux/Windows environment(s) and Hybrid infrastructure models hosted on Cloud platforms.
- Establish processes and tools set to maintain automation scripts and generate regular test reports.
- Perform peer reviews to provide feedback and make sure the test scripts are flawless.
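The pytest-style sketch below illustrates one report-versus-warehouse validation of the kind described above; the SQL, table name, expected count, and connection fixture are hypothetical placeholders.

```python
# Hedged pytest sketch: compare a report's row count against the warehouse.
import pytest

ROW_COUNT_SQL = "SELECT COUNT(*) FROM sales.daily_orders WHERE order_date = %(day)s"
EXPECTED_REPORT_ROWS = 1250  # value read from the report under test (placeholder)

@pytest.fixture
def warehouse_conn():
    # Placeholder: return a DB-API connection here (e.g. psycopg2, which
    # supports the %(day)s parameter style used above).
    pytest.skip("warehouse connection not configured in this sketch")

def test_report_row_count_matches_warehouse(warehouse_conn):
    cur = warehouse_conn.cursor()
    cur.execute(ROW_COUNT_SQL, {"day": "2024-01-01"})
    (db_count,) = cur.fetchone()
    assert db_count == EXPECTED_REPORT_ROWS, (
        f"report shows {EXPECTED_REPORT_ROWS} rows, warehouse has {db_count}"
    )
```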
Core/Must have skills:
- Excellent understanding of and hands-on experience in ETL/DWH testing, preferably on Databricks, paired with Python experience.
- Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
- Clear & crisp communication and commitment towards deliverables
- Experience in Big Data testing will be an added advantage.
- Knowledge of Spark, Scala, Hive/Impala, and Python will be an added advantage.
Good to have skills:
- Test automation using BDD/Cucumber or TestNG, combined with strong hands-on experience in Java with Selenium; working experience with WebdriverIO is especially valuable.
- Ability to effectively articulate technical challenges and solutions
- Work experience in qTest, Jira, and WebdriverIO.
Responsibilities
· Design and architect data virtualization solutions using Denodo.
· Collaborate with business analysts and data engineers to understand data requirements and translate them into technical specifications.
· Implement best practices for data governance and security within Denodo environments.
· Lead the integration of Denodo with various data sources, ensuring performance optimization.
· Conduct training sessions and provide guidance to technical teams on Denodo capabilities.
· Participate in the evaluation and selection of data technologies and tools.
· Stay current with industry trends in data integration and virtualization.
Requirements
· Bachelor's degree in Computer Science, Information Technology, or a related field.
· 10+ years of experience in data architecture, with a focus on Denodo solutions.
· Strong knowledge of data virtualization principles and practices.
· Experience with SQL and data modeling techniques.
· Familiarity with ETL processes and data integration tools.
· Excellent communication and presentation skills.
· Ability to lead technical discussions and provide strategic insights.
· Certifications related to Denodo or data architecture are a plus

Role: Data Engineer (14+ years of experience)
Location: Whitefield, Bangalore
Mode of Work: Hybrid (3 days from office)
Notice period: Immediate / serving notice with 30 days or less remaining
Note: Candidates should be based in Bangalore, as one interview round will be conducted face-to-face.
Job Summary:
Role and Responsibilities
● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.
● Design data models to support data analysis and reporting.
● Automate data engineering tasks using scripting languages and tools.
● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.
● Act as a bridge between the engineering and the business team in all areas related to Data.
● Automate monitoring and alerting mechanisms for data pipelines, products, and dashboards, and troubleshoot any issues; on-call participation is required.
● SQL creation and optimization - including modularization and tuning, which may require creating views and tables in the source systems.
● Defining best practices for data validation and automating as much as possible, aligning with enterprise standards.
● QA environment data management - e.g., test data management.
Qualifications
● 14+ years of experience as a Data engineer or related role.
● Experience with Agile engineering practices.
● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.
● Experience with SQL and NoSQL databases.
● Ability to work independently or as part of a team.
● Experience with cloud platforms, preferably AWS.
● Strong experience with data warehousing and data lake technologies (Snowflake)
● Expertise in data modelling
● Experience with ETL/ELT tools and methodologies.
● 5+ years of experience in application development including Python, SQL, Scala, or Java
● Experience working on real-time data streaming and data streaming platforms.
NOTE: IT IS MANDATORY TO GIVE ONE TECHNICAL ROUND FACE TO FACE.
Role: Automation Tester – Data Engineering
Experience: 6+ years
Work Mode: Hybrid (2–3 days onsite/week)
Locations: Gurgaon
Notice Period: Immediate Joiners Preferred
Mandatory Skills:
- Hands-on automation testing experience in Data Engineering or Data Warehousing
- Proficiency in Docker
- Experience working on any Cloud platform (AWS, Azure, or GCP)
- Experience in ETL Testing is a must
- Automation testing using Pytest or Scalatest
- Strong SQL skills and data validation techniques
- Familiarity with data processing tools such as ETL, Hadoop, Spark, Hive
- Sound knowledge of SDLC and Agile methodologies
- Ability to write efficient, clean, and maintainable test scripts
- Strong problem-solving, debugging, and communication skills
Good to Have:
- Exposure to additional test frameworks like Selenium, TestNG, or JUnit
Key Responsibilities:
- Develop, execute, and maintain automation scripts for data pipelines
- Perform comprehensive data validation and quality assurance
- Collaborate with data engineers, developers, and stakeholders
- Troubleshoot issues and improve test reliability
- Ensure consistent testing standards across development cycles
Job Title: SAP BODS Developer
- Experience: 7–10 Years
- Location: Remote (India-based candidates only)
- Employment Type: Permanent (Full-Time)
- Salary Range: ₹20 – ₹25 LPA (Fixed CTC)
Required Skills & Experience:
- 7–10 years of hands-on experience as a SAP BODS Developer.
- Strong experience in S/4HANA implementation or upgrade projects with large-scale data migration.
- Proficient in ETL development, job optimization, and performance tuning using SAP BODS.
- Solid understanding of SAP data structures (FI, MM, SD, etc.) from a technical perspective.
- Skilled in SQL scripting, error resolution, and job monitoring.
- Comfortable working independently in a remote, spec-driven development environment.

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀
Job description
🔍 Job Title: Data Engineer
📍 Location: Ahmedabad
🚀 Work Mode: On-Site Opportunity
📅 Experience: 4+ Years
🕒 Employment Type: Full-Time
⏱️ Availability : Immediate Joiner Preferred
Join Our Team as a Data Engineer
We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.
As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.
Your Key Responsibilities
Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.
Implement data validation, transformation, and quality monitoring processes.
Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
Proactively identify bottlenecks and optimize existing workflows and processes.
Provide guidance and mentorship to junior engineers in the team.
Skills & Expertise We’re Looking For
3+ years of hands-on experience in Data Engineering or related roles.
Strong expertise in Python and data pipeline design.
Experience working with Big Data tools like Hadoop, Spark, and Hive (an indicative Spark snippet follows this list).
Proficiency with SQL, NoSQL databases, and data warehousing solutions.
Solid experience in cloud platforms - Azure
Familiar with distributed computing, data modeling, and performance tuning.
Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.
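As an indicative (not prescriptive) example of the Spark skills above, the PySpark snippet below reads raw events, aggregates them by day, and writes a partitioned output; paths and column names are placeholders.

```python
# Indicative PySpark rollup: read raw events, aggregate, write partitioned output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

events = spark.read.parquet("/data/raw/events/")  # placeholder path
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/daily_events/")
```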
Qualifications
Bachelor’s degree in Computer Science, Data Science, or a related field.

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (an indicative sketch follows this list)
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
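The posting names AWS "big data" technologies without specifics, so purely as an assumed example the sketch below submits a SQL query to Amazon Athena with boto3 and polls for completion; the database, query, and S3 output location are placeholders.

```python
# Assumed example only: run a SQL query on Athena and wait for the result state.
import time
import boto3

def run_athena_query(sql: str, database: str, output_s3: str) -> str:
    athena = boto3.client("athena")
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)  # poll until the query reaches a terminal state
```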
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 7+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
Position Summary:
As a CRM ETL Developer, you will be responsible for the analysis, transformation, and integration of data from legacy and external systems into the CRM application. This includes developing ETL/ELT workflows, ensuring data quality through cleansing and survivorship rules, and supporting daily production loads. You will work in an Agile environment and play a vital role in building scalable, high-quality data integration solutions.
Key Responsibilities:
- Analyze data from legacy and external systems; develop ETL/ELT pipelines to ingest and process data.
- Cleanse, transform, and apply survivorship rules before loading into the CRM platform (a simple illustration follows this list).
- Monitor, support, and troubleshoot production data loads (Tier 1 & Tier 2 support).
- Contribute to solution design, development, integration, and scaling of new/existing systems.
- Promote and implement best practices in data integration, performance tuning, and Agile development.
- Lead or support design reviews, technical documentation, and mentoring of junior developers.
- Collaborate with business analysts, QA, and cross-functional teams to resolve defects and clarify requirements.
- Deliver working solutions via quick POCs or prototypes for business scenarios.
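As a simple, hedged illustration of a survivorship rule like the one mentioned above, the pandas sketch below keeps the most recently updated record when the same customer arrives from multiple legacy sources; column names are placeholders, and real Siebel EIM survivorship logic would be richer.

```python
# Hedged survivorship sketch: keep the most recently updated row per customer.
import pandas as pd

def apply_survivorship(records: pd.DataFrame) -> pd.DataFrame:
    # Rank duplicate customer rows by recency and keep the "surviving" one.
    ordered = records.sort_values(["customer_id", "last_updated"], ascending=[True, False])
    survivors = ordered.drop_duplicates(subset="customer_id", keep="first")
    return survivors.reset_index(drop=True)
```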
Technical Skills:
- ETL/ELT Tools: 5+ years of hands-on experience in ETL processes using Siebel EIM.
- Programming & Databases: Strong SQL & PL/SQL development; experience with Oracle and/or SQL Server.
- Data Integration: Proven experience in integrating disparate data systems.
- Data Modelling: Good understanding of relational, dimensional modelling, and data warehousing concepts.
- Performance Tuning: Skilled in application and SQL query performance optimization.
- CRM Systems: Familiarity with Siebel CRM, Siebel Data Model, and Oracle SOA Suite is a plus.
- DevOps & Agile: Strong knowledge of DevOps pipelines and Agile methodologies.
- Documentation: Ability to write clear technical design documents and test cases.
Soft Skills & Attributes:
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal abilities.
- Experience working with cross-functional, globally distributed teams.
- Proactive mindset and eagerness to learn new technologies.
- Detail-oriented with a focus on reliability and accuracy.
Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Experience in Tier 1 & Tier 2 application support roles.
- Exposure to real-time data integration systems is an advantage.
Job Title: Tableau BI Developer
Years of Experience: 4–8 years
$12 per hour, FTE engagement
8-hour workday
Required Skills & Experience:
✅ 4–8 years of experience in BI development and data engineering
✅ Expertise in BigQuery and/or Snowflake for large-scale data processing
✅ Strong SQL skills with experience writing complex analytical queries
✅ Experience in creating dashboards in tools like Power BI, Looker, or similar
✅ Hands-on experience with ETL/ELT tools and data pipeline orchestration
✅ Familiarity with cloud platforms (GCP, AWS, or Azure)
✅ Strong understanding of data modeling, data warehousing, and analytics best practices
✅ Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders
Company name: PulseData labs Pvt Ltd (captive Unit for URUS, USA)
About URUS
We are the URUS family (US), a global leader in products and services for Agritech.
SENIOR DATA ENGINEER
This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks and strong skills in SQL Server, SSIS and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes.
- Troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.
Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark, and SQL (a rough upsert pattern follows this list).
- Migrate legacy SSIS packages to Databricks pipelines.
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.
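A rough sketch of a common Databricks/Delta upsert pattern for the pipeline work described above, using the delta-spark API; the table and column names are placeholders, not this team's actual datasets.

```python
# Rough Delta Lake upsert (merge) pattern for incremental loads.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

updates = spark.read.parquet("/mnt/raw/customers_incremental/")  # placeholder path

target = DeltaTable.forName(spark, "analytics.customers")  # placeholder table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```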
Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
- Collaborate with business users to understand data requirements for department-driven reporting needs.
- Maintain existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
- Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.
Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.
This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.
Requirements
- Bachelor's degree in computer science, Information Systems, or a related field.
- 7+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
Job Title : Cognos BI Developer
Experience : 6+ Years
Location : Bangalore / Hyderabad (Hybrid)
Notice Period : Immediate Joiners Preferred (Candidates serving notice with 10–15 days left can be considered)
Interview Mode : Virtual
Job Description :
We are seeking an experienced Cognos BI Developer with strong data modeling, dashboarding, and reporting expertise to join our growing team. The ideal candidate should have a solid background in business intelligence, data visualization, and performance analysis, and be comfortable working in a hybrid setup from Bangalore or Hyderabad.
Mandatory Skills :
Cognos BI, Framework Manager, Cognos Dashboarding, SQL, Data Modeling, Report Development (charts, lists, cross tabs, maps), ETL Concepts, KPIs, Drill-through, Macros, Prompts, Filters, Calculations.
Key Responsibilities :
- Understand business requirements in the BI context and design data models using Framework Manager to transform raw data into meaningful insights.
- Develop interactive dashboards and reports using Cognos Dashboard.
- Identify and define KPIs and create reports to monitor them effectively.
- Analyze data and present actionable insights to support business decision-making.
- Translate business requirements into technical specifications and determine timelines for execution.
- Design and develop models in Framework Manager, publish packages, manage security, and create reports based on these packages.
- Develop various types of reports, including charts, lists, cross tabs, and maps, and design dashboards combining multiple reports.
- Implement reports using macros, prompts, filters, and calculations.
- Perform data warehouse development activities and ensure seamless data flow.
- Write and optimize SQL queries to investigate data and resolve performance issues.
- Utilize Cognos features such as master-detail reports, drill-throughs, bookmarks, and page sets.
- Analyze and improve ETL processes to enhance data integration.
- Apply technical enhancements to existing BI systems to improve their performance and usability.
- Possess solid understanding of database fundamentals, including relational and multidimensional database design.
- Hands-on experience with Cognos Data Modules (data modeling) and dashboarding.

A leader in telecom, fintech, and AI-led marketing automation.

We are looking for a talented MERN Developer with expertise in MongoDB/MySQL, Kubernetes, Python, ETL, Hadoop, and Spark. The ideal candidate will design, develop, and optimize scalable applications while ensuring efficient source code management and implementing Non-Functional Requirements (NFRs).
Key Responsibilities:
- Develop and maintain robust applications using MERN Stack (MongoDB, Express.js, React.js, Node.js).
- Design efficient database architectures (MongoDB/MySQL) for scalable data handling.
- Implement and manage Kubernetes-based deployment strategies for containerized applications.
- Ensure compliance with Non-Functional Requirements (NFRs), including source code management, development tools, and security best practices.
- Develop and integrate Python-based functionalities for data processing and automation.
- Work with ETL pipelines for smooth data transformations.
- Leverage Hadoop and Spark for processing and optimizing large-scale data operations.
- Collaborate with solution architects, DevOps teams, and data engineers to enhance system performance.
- Conduct code reviews, troubleshooting, and performance optimization to ensure seamless application functionality.
Required Skills & Qualifications:
- Proficiency in MERN Stack (MongoDB, Express.js, React.js, Node.js).
- Strong understanding of database technologies (MongoDB/MySQL).
- Experience working with Kubernetes for container orchestration.
- Hands-on knowledge of Non-Functional Requirements (NFRs) in application development.
- Expertise in Python, ETL pipelines, and big data technologies (Hadoop, Spark).
- Strong problem-solving and debugging skills.
- Knowledge of microservices architecture and cloud computing frameworks.
Preferred Qualifications:
- Certifications in cloud computing, Kubernetes, or database management.
- Experience in DevOps, CI/CD automation, and infrastructure management.
- Understanding of security best practices in application development.

What We’re Looking For:
- Strong experience in Python (3+ years).
- Hands-on experience with any database (SQL or NoSQL).
- Experience with frameworks like Flask, FastAPI, or Django (a minimal FastAPI sketch follows this list).
- Knowledge of ORMs, API development, and unit testing.
- Familiarity with Git and Agile methodologies.
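As a minimal, illustrative FastAPI sketch touching the items above (framework usage, API development, and a Pydantic model in place of a full ORM layer); the endpoint and fields are invented for the example.

```python
# Minimal FastAPI app with an in-memory store (illustrative only).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Job(BaseModel):
    title: str
    location: str

JOBS: list[Job] = []  # in-memory store standing in for a database/ORM

@app.post("/jobs", status_code=201)
def create_job(job: Job) -> Job:
    JOBS.append(job)
    return job

@app.get("/jobs")
def list_jobs() -> list[Job]:
    return JOBS
```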

- Strong Snowflake cloud database development experience (a brief Python connector sketch follows this list).
- Knowledge of Spark and Databricks is desirable.
- Strong technical background in data modelling, database design and optimization for data warehouses, specifically on column oriented MPP architecture
- Familiar with technologies relevant to data lakes such as Snowflake
- Candidate should have strong ETL & database design/modelling skills.
- Experience creating data pipelines
- Strong SQL skills, debugging knowledge, and performance tuning experience.
- Experience with Databricks/Azure is an add-on / good to have.
- Experience working with global teams and global application environments
- Strong understanding of SDLC methodologies, with a track record of high-quality deliverables and data quality, including detailed technical design documentation.
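A brief, non-authoritative sketch of querying Snowflake from Python with the snowflake-connector-python package; the account, credentials, warehouse, and query are placeholders.

```python
# Hedged Snowflake connector sketch: open a connection and run one query.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account locator
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date LIMIT 10")
    for order_date, total in cur.fetchall():
        print(order_date, total)
finally:
    conn.close()
```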
The role reports to the Head of Customer Support, and the position holder is part of the Product Team.
Main objectives of the role
· Focus on customer satisfaction with the product and provide the first-line support.
Specialisation
· Customer Support
· SaaS
· FMCG/CPG
Key processes in the role
· Build extensive knowledge of our SaaS product platform and support our customers in using it.
· Supporting end customers with complex questions.
· Providing extended and elaborated answers on business & “how to” questions from customers.
· Participating in ongoing education for Customer Support Managers.
· Collaborate and communicate with the Development teams, Product Support and Customers
Requirements
· Bachelor’s degree in business, IT, Engineering or Economics.
· 4-8 years of experience in a similar role in the IT Industry.
· Solid knowledge of SaaS (Software as a Service).
· Multitasking is second nature to you, and you have a proactive, customer-first mindset.
· 3+ years of experience providing support for ERP systems, preferably SAP.
· Familiarity with ERP/SAP integration processes and data migration.
· Understanding of ERP/SAP functionalities, modules and data structures.
· Understanding of technicalities like Integrations (API’s, ETL, ELT), analysing logs, identifying errors in logs, etc.
· Experience digging into code, changing configuration, and determining whether an issue is a development bug or a product bug.
· Profound understanding of the support processes.
· Know where to route tickets and how to manage customer escalations.
· Outstanding customer service skills.
· Knowledge of Fast-Moving Consumer Goods (FMCG)/ Consumer Packaged Goods (CPG) industry/domain is preferable.
· Excellent verbal and written communication skills in English.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team of data engineers.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
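To give a flavour of the pipeline work described in this listing, here is a minimal, hypothetical Python ETL sketch: it pulls records from a REST API, applies a simple transformation with pandas, and loads the result into a warehouse staging table via SQLAlchemy. The endpoint, connection string, table, and column names are placeholders; a real pipeline would add incremental loading, retries, and monitoring.

```python
# Minimal illustrative ETL sketch (hypothetical endpoints/tables), not a production pipeline.
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/orders"                  # placeholder source API
WAREHOUSE_URI = "postgresql://user:pass@host:5432/analytics"   # placeholder DSN


def extract() -> pd.DataFrame:
    """Pull raw order records from the source API."""
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Standardize column names and types, derive a simple metric, drop duplicates."""
    df = df.rename(columns=str.lower)
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["net_amount"] = df["gross_amount"] - df["discount"]
    return df.drop_duplicates(subset=["order_id"])


def load(df: pd.DataFrame) -> None:
    """Append the cleaned batch into a warehouse staging table."""
    engine = create_engine(WAREHOUSE_URI)
    df.to_sql("stg_orders", engine, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract()))
```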
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.
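As a rough illustration of the Streams/Tasks style of ELT named in the mandatory skills, the sketch below uses the snowflake-connector-python package to create a stream on a raw table and a scheduled task that merges captured changes into a curated table. Account, warehouse, and object names are placeholders; an actual design would also cover error handling, role grants, and cost controls.

```python
# Hypothetical Snowflake ELT setup: a stream captures changes on a raw table,
# and a scheduled task merges them into a curated table. Names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder credentials
    user="etl_user",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

statements = [
    # Change-data stream over the raw landing table
    "CREATE OR REPLACE STREAM RAW.ORDERS_STREAM ON TABLE RAW.ORDERS",
    # Task that runs every 5 minutes and applies the captured changes
    """
    CREATE OR REPLACE TASK CURATED.MERGE_ORDERS
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      MERGE INTO CURATED.ORDERS AS tgt
      USING RAW.ORDERS_STREAM AS src
        ON tgt.ORDER_ID = src.ORDER_ID
      WHEN MATCHED THEN UPDATE SET tgt.AMOUNT = src.AMOUNT, tgt.STATUS = src.STATUS
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT, STATUS)
        VALUES (src.ORDER_ID, src.AMOUNT, src.STATUS)
    """,
    # Tasks are created suspended; resume to start the schedule
    "ALTER TASK CURATED.MERGE_ORDERS RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```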

Company Overview
We are a dynamic startup dedicated to empowering small businesses through innovative technology solutions. Our mission is to level the playing field for small businesses by providing them with powerful tools to compete effectively in the digital marketplace. Join us as we revolutionize the way small businesses operate online, bringing innovation and growth to local communities.
Job Description
We are seeking a skilled and experienced Data Engineer to join our team. In this role, you will develop systems on cloud platforms capable of processing millions of interactions daily, leveraging the latest cloud computing and machine learning technologies while creating custom in-house data solutions. The ideal candidate should have hands-on experience with SQL, PL/SQL, and any standard ETL tools. You must be able to thrive in a fast-paced environment and possess a strong passion for coding and problem-solving.
Required Skills and Experience
- Minimum 5 years of experience in software development.
- 3+ years of experience in data management and SQL expertise – PL/SQL, Teradata, and Snowflake experience strongly preferred.
- Expertise in big data technologies such as Hadoop, HiveQL, and Spark (Scala/Python).
- Expertise in cloud technologies – AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR).
- Experience with queuing systems (e.g., SQS, Kafka) and caching systems (e.g., Ehcache, Memcached).
- Experience with container management tools (e.g., Docker Swarm, Kubernetes).
- Familiarity with data stores, including at least one of the following: Postgres, MongoDB, Cassandra, or Redis.
- Ability to create advanced visualizations and dashboards to communicate complex findings (e.g., Looker Studio, Power BI, Tableau).
- Strong skills in manipulating and transforming complex datasets for in-depth analysis.
- Technical proficiency in writing code in Python and advanced SQL queries.
- Knowledge of AI/ML infrastructure, best practices, and tools is a plus.
- Experience in analyzing and resolving code issues.
- Hands-on experience with software architecture concepts such as Separation of Concerns (SoC) and micro frontends with theme packages.
- Proficiency with the Git version control system.
- Experience with Agile development methodologies.
- Strong problem-solving skills and the ability to learn quickly.
- Exposure to Docker and Kubernetes.
- Familiarity with AWS or other cloud platforms.
Responsibilities
- Develop and maintain our in-house search and reporting platform.
- Create data solutions to complement core products to improve performance and data quality
- Collaborate with the development team to design, develop, and maintain our suite of products.
- Write clean, efficient, and maintainable code, adhering to coding standards and best practices.
- Participate in code reviews and testing to ensure high-quality code.
- Troubleshoot and debug application issues as needed.
- Stay up-to-date with emerging trends and technologies in the development community.
How to apply?
- If you are passionate about designing user-centric products and want to be part of a forward-thinking company, we would love to hear from you. Please send your resume, a brief cover letter outlining your experience and your current CTC (Cost to Company) as a part of the application.
Join us in shaping the future of e-commerce!

- A bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience working as a hands-on developer with Sybase, DB2, and ETL technologies.
- Worked extensively on data integration, designing and developing reusable interfaces. Advanced experience in Python, DB2, Sybase, shell scripting, Unix, Perl scripting, database platforms, and database design and modeling.
- Expert-level understanding of data warehouse and core database concepts and relational database design.
- Experience in writing stored procedures, optimization, and performance tuning. Strong technology acumen and a deep strategic mindset.
- Proven track record of delivering results
- Proven analytical skills and experience making decisions based on hard and soft data
- A desire and openness to learning and continuous improvement, both of yourself and your team members.
- Hands-on experience on development of APIs is a plus
- Good to have: experience with Business Intelligence tools, Source-to-Pay applications such as SAP Ariba, and Accounts Payable systems.
Skills Required:
- Familiarity with Postgres and Python is a plus
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Role - ETL Developer
Work Mode - Hybrid
Experience- 4+ years
Location - Pune, Gurgaon, Bengaluru, Mumbai
Required Skills - AWS, AWS Glue, PySpark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience with AWS migration
- Experience with automated scripting and tracking KPIs/metrics for database performance
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus
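For context on the Glue/PySpark skill set called out in the two listings above, here is a compact, hypothetical sketch of a Glue-style PySpark job: it reads raw CSV data from S3, filters and reshapes it, and writes partitioned Parquet back to S3. Bucket names, paths, and columns are invented, and a real Glue job would typically use the Glue Data Catalog, job bookmarks, and proper argument parsing.

```python
# Hypothetical PySpark job in the style of an AWS Glue ETL script.
# Bucket, paths, and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV files landed in S3
raw = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/orders/")          # placeholder path
)

# Transform: type casting, filtering, and a derived partition column
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .filter(F.col("status") != "CANCELLED")
    .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
)

# Load: partitioned Parquet for downstream querying (e.g., Athena)
(
    orders.write.mode("overwrite")
    .partitionBy("order_month")
    .parquet("s3://example-curated-bucket/orders/")  # placeholder path
)

spark.stop()
```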

TL;DR
Founding Software Engineer (Next.js / React / TypeScript) — ₹17,000–₹24,000 net ₹/mo — 100% remote (India) — ~40 h/wk — green-field stack, total autonomy, ship every week. If you can own the full lifecycle and prove impact every Friday, apply.
🏢 Mega Style Apartments
We rent beautifully furnished 1- to 4-bedroom flats that feel like home but run like a hotel—so travellers can land, unlock the door, and live like locals from hour one. Tech is now the growth engine, and you’ll be employee #1 in engineering, laying the cornerstone for a tech platform that will redefine the premium furnished apartment experience.
✨ Why This Role Rocks
💡 Green-field Everything
Choose the stack, CI, even the linter.
🎯 Visible Impact & Ambition
Every deploy reaches real guests this week. Lay rails for ML that can boost revenue 20%.
⏱️ Radical Autonomy
Plan sprints, own deploys; no committees.
- Direct line to decision-makers → zero red tape
- Modern DX: Next.js & React (latest stable), Tailwind, Prisma/Drizzle, Vercel, optional AI copilots – building mostly server-rendered, edge-ready flows.
- Async-first, with structured weekly 1-on-1s to ensure you’re supported, not micromanaged.
- Unmatched Career Acceleration: Build an entire tech foundation from zero, making decisions that will define your trajectory and our company's success.
🗓️ Your Daily Rhythm
- Morning: Check metrics, pick highest-impact task
- Day: Build → ship → measure
- Evening: 10-line WhatsApp update (done, next, blockers)
- Friday: Live demo of working software (no mock-ups)
📈 Success Milestones
- Week 1: First feature in production
- Month 1: Automation that saves ≥10 h/week for ops
- Month 3: Core platform stable; conversion up, load times down (aiming for <1s LCP); ready for future ML pricing (stretch goal: +20% revenue within 12 months).
🔑 What You’ll Own
- Ship guest-facing features with Next.js (App Router / RSC / Server Actions).
- Automate ops—dashboards & LLM helpers that delete busy-work.
- Full lifecycle: idea → spec → code → deploy → measure → iterate.
- Set up CI/CD & observability on Vercel; a dedicated half-day refactor slot each sprint keeps tech-debt low.
- Optimise for outcomes—conversion, CWV, security, reliability; laying the groundwork for future capabilities in dynamic pricing and guest personalization.
Prototype > promise. Results > hours-in-chair.
💻 Must-Have Skills
Frontend Focus:
- Next.js (App Router/RSC/Server Actions)
- React (latest stable), TypeScript
- Tailwind CSS + shadcn/ui
- State mgmt (TanStack Query / Zustand / Jotai)
Backend & DevOps Focus:
- Node.js APIs, Prisma/Drizzle ORM
- Solid SQL schema design (e.g., PostgreSQL)
- Auth.js / Better-Auth, web security best practices
- GitHub Flow, automated tests, CI, Vercel deploys
- Excellent English; explain trade-offs to non-tech peers
- Self-starter: comfortable being the sole engineer (for now)
🌱 Nice-to-Haves (Learn Here or Teach Us)
A/B testing & CRO, Python/basic ML, ETL pipelines, Advanced SEO & CWV, Payment APIs (Stripe, Merchant Warrior), n8n automation
🎁 Perks & Benefits
- 100% remote anywhere in 🇮🇳
- Flexible hours (~40 h/wk)
- 12 paid days off (holiday + sick)
- ₹1,700/mo health insurance reimbursement (post-probation)
- Performance bonuses for measurable wins
- 6-month paid probation → permanent role & full benefits (this is a full-time employment role)
- Blank-canvas stack—your decisions live on
- Equity is not offered at this time; we compensate via performance bonuses and a clear path for growth, with future leadership opportunities as the company and engineering team scales.
⏩ Hiring Process (7–10 Days, Fast & Fair)
All stages are async & remote.
- Apply: 5-min form + short quiz (approx. 15 min total)
- Test 1: TypeScript & logic (1 h)
- Test 2: Next.js / React / Node / SQL deep-dive (1 h)
- Final: AI Video interview (1 h)
🚫 Who Shouldn’t Apply
- Need daily hand-holding
- Prefer consensus to decisions
- Chase perfect code over shipped value
- “Move fast & learn” culture feels scary
🚀 Ready to Own the Stack?
If you read this and thought “Finally—no bureaucracy,” and you're ready to set the technical standard for a growing company, show us something you’ve built and apply here →


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
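As a minimal sketch of the Airflow orchestration work described above (retries, task dependencies, and failure notifications), the following hypothetical DAG chains ingest, transform, and load steps, assuming Airflow 2.x. The task logic and alert address are placeholders.

```python
# Minimal, hypothetical Airflow 2.x DAG illustrating retries, dependencies,
# and failure notifications. Callables and the alert email are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest(**context):
    print("pull files from source into staging")    # placeholder step


def transform(**context):
    print("clean and reshape staged data")          # placeholder step


def load(**context):
    print("load curated data into the warehouse")   # placeholder step


default_args = {
    "owner": "data-eng",
    "retries": 2,                                   # automatic retry on failure
    "retry_delay": timedelta(minutes=5),
    "email": ["data-alerts@example.com"],           # placeholder alert address
    "email_on_failure": True,
}

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_ingest >> t_transform >> t_load
```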
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
- 8-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience in Azure and Snowflake testing is a plus
- Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus
- Strong data warehousing concepts; experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to financial domain knowledge is considered a plus.
- Testing data readiness (data quality) and addressing code or data issues
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify root cause of code/data issues and come up with a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
We’re looking for an experienced SQL Developer with 3+ years of hands-on experience to join our growing team. In this role, you’ll be responsible for designing, developing, and maintaining SQL queries, procedures, and data systems that support our business operations and decision-making processes. You should be passionate about data, highly analytical, and capable of working both independently and collaboratively with cross-functional teams.
Key Responsibilities:
Design, develop, and maintain complex SQL queries, stored procedures, functions, and views.
Optimize existing queries for performance and efficiency.
Collaborate with data analysts, developers, and stakeholders to understand requirements and translate them into robust SQL solutions.
Design and implement ETL processes to move and transform data between systems.
Perform data validation, troubleshooting, and quality checks.
Maintain and improve existing databases, ensuring data integrity, security, and accessibility.
Document code, processes, and data models to support scalability and maintainability.
Monitor database performance and provide recommendations for improvement.
Work with BI tools and support dashboard/report development as needed.
Requirements:
3+ years of proven experience as an SQL Developer or in a similar role.
Strong knowledge of SQL and relational database systems (e.g., MS SQL Server, PostgreSQL, MySQL, Oracle).
Experience with performance tuning and optimization.
Proficiency in writing complex queries and working with large datasets.
Experience with ETL tools and data pipeline creation.
Familiarity with data warehousing concepts and BI reporting.
Solid understanding of database security, backup, and recovery.
Excellent problem-solving skills and attention to detail.
Good communication skills and ability to work in a team environment.
Nice to Have:
Experience with cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
Knowledge of Python, Power BI, or other scripting/analytics tools.
Experience working in Agile or Scrum environments.

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python.
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
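To make the expected SQL-plus-Python skill set above more concrete, here is a small, hypothetical pandas transformation script of the kind a junior data engineer might write: it ingests a CSV extract, applies basic cleansing, aggregates, and writes a Parquet output for a downstream load step. File paths and column names are illustrative only.

```python
# Small, hypothetical pandas transformation: ingest a CSV extract, clean it,
# aggregate, and write Parquet. Paths and columns are illustrative.
import pandas as pd

# Ingest a daily extract (placeholder path)
events = pd.read_csv("data/raw/events_2024-01-01.csv")

# Basic cleansing: drop exact duplicates and rows missing key fields
events = events.drop_duplicates()
events = events.dropna(subset=["user_id", "event_type"])

# Normalize types and values
events["event_time"] = pd.to_datetime(events["event_time"])
events["event_type"] = events["event_type"].str.lower()

# Aggregate: daily event counts per user and event type
daily_counts = (
    events.assign(event_date=events["event_time"].dt.date)
    .groupby(["event_date", "user_id", "event_type"], as_index=False)
    .size()
    .rename(columns={"size": "event_count"})
)

# Write a columnar output for the warehouse load step (requires pyarrow)
daily_counts.to_parquet("data/curated/daily_event_counts.parquet", index=False)
```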

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You possess good communication skills, both written and verbal, enabling you to convey complex technical concepts clearly and effectively. You are a team player and a customer-focused, self-motivated, responsible individual who can work under pressure with a positive attitude. You must have experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids. You possess a strong work ethic, a positive attitude, and enthusiasm to embrace new challenges. You can multi-task and prioritize (good time management skills) and are willing to learn. You should be able to work independently with little or no supervision. You should be process-oriented, have a methodical approach, and demonstrate a quality-first approach.
The ability to convert clients’ business challenges and priorities into winning proposals/bids through excellence in technical solutioning will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new-generation analytics platforms such as MS Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate Data services (e.g. Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI etc) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Ability to understand Conceptual/Logical/Physical Data Modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
What you will Bring
- 10+ years of working in data analytics and AI technologies from consulting, implementation and design perspectives
- Certifications in data engineering, analytics, cloud, or AI will be a distinct advantage
- Bachelor’s in engineering/ technology or an MCA from a reputed college is a must
- Prior experience of working as a solution architect during presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Job Summary:
As a Data Engineering Lead, your role will involve designing, developing, and implementing interactive dashboards and reports using data engineering tools. You will work closely with stakeholders to gather requirements and translate them into effective data visualizations that provide valuable insights. Additionally, you will be responsible for extracting, transforming, and loading data from multiple sources into Power BI, ensuring its accuracy and integrity. Your expertise in Power BI and data analytics will contribute to informed decision-making and support the organization in driving data-centric strategies and initiatives.
1) Required Experience : 6+ years
2) Lead Experience : 2+ years
3) Mandatory Skills : Power BI , SQL , Azure Data Factory
4) Budget Range : 28 - 32 LPA
5) Locations : Hyderabad , Indore and Ahmedabad
6) Immediate joiners preferable
7) A total of 4 rounds will be conducted; the candidate should attend one round face-to-face at the Hyderabad, Indore, or Ahmedabad location
8) The candidate should be available to work from the office all 5 days
We are looking for you!
---> As an ideal candidate for the Data Engineering Lead position, you embody the qualities of a team player with a relentless get-it-done attitude. Your intellectual curiosity and customer focus drive you to continuously seek new ways to add value to your job accomplishments. You thrive under pressure, maintaining a positive attitude and understanding that your career is a journey. You are willing to make the right choices to support your growth. In addition to your excellent communication skills, both written and verbal, you have a proven ability to create visually compelling designs using tools like Power BI and Tableau that effectively communicate our core values.
---> You build high-performing, scalable, enterprise-grade applications and teams. Your creativity and proactive nature enable you to think differently, find innovative solutions, deliver high-quality outputs, and ensure customers remain referenceable. With over eight years of experience in data engineering, you possess a strong sense of self-motivation and take ownership of your responsibilities. You prefer to work independently with little to no supervision.
---> You are process-oriented, adopt a methodical approach, and demonstrate a quality-first mindset. You have led mid to large-size teams and accounts, consistently using constructive feedback mechanisms to improve productivity, accountability, and performance within the team. Your track record showcases your results-driven approach, as you have consistently delivered successful projects with customer case studies published on public platforms. Overall, you possess a unique combination of skills, qualities, and experiences that make you an ideal fit to lead our data engineering team(s). You value inclusivity and want to join a culture that empowers you to show up as your authentic self.
---> You know that success hinges on commitment, our differences make us stronger, and the finish line is always sweeter when the whole team crosses together. In your role, you should be driving the team using data, data, and more data. You will manage multiple teams, oversee agile stories and their statuses, handle escalations and mitigations, plan ahead, identify hiring needs, collaborate with recruitment teams for hiring, enable sales with pre-sales teams, and work closely with development managers/leads for solutioning and delivery statuses, as well as architects for technology research and solutions.
What You Will Do:
- Analyze Business Requirements.
- Analyze the Data Model and do GAP analysis with Business Requirements and Power BI.
- Design and Model Power BI schema.
- Transformation of Data in Power BI/SQL/ETL Tool.
- Create DAX formulas, reports, and dashboards.
- Write SQL queries and stored procedures.
- Design effective Power BI solutions based on business requirements.
- Manage a team of Power BI developers and guide their work.
- Integrate data from various sources into Power BI for analysis.
- Optimize performance of reports and dashboards for smooth usage.
- Collaborate with stakeholders to align Power BI projects with goals.
- Knowledge of Data Warehousing (must); Data Engineering is a plus
What we need?
• B.Tech in Computer Science or equivalent
• Minimum 6+ years of relevant experience
Founded in 2002, Zafin offers a SaaS product and pricing platform that simplifies core modernization for top banks worldwide. Our platform enables business users to work collaboratively to design and manage pricing, products, and packages, while technologists streamline core banking systems.
With Zafin, banks accelerate time to market for new products and offers while lowering the cost of change and achieving tangible business and risk outcomes. The Zafin platform increases business agility while enabling personalized pricing and dynamic responses to evolving customer and market needs.
Zafin is headquartered in Vancouver, Canada, with offices and customers around the globe including ING, CIBC, HSBC, Wells Fargo, PNC, and ANZ. Zafin is proud to be recognized as a top employer and certified Great Place to Work® in Canada, India and the UK.
Job Summary:
We are looking for a highly skilled and detail-oriented Data & Visualisation Specialist to join our team. The ideal candidate will have a strong background in Business Intelligence (BI), data analysis, and visualisation, with advanced technical expertise in Azure Data Factory (ADF), SQL, Azure Analysis Services, and Power BI. In this role, you will be responsible for performing ETL operations, designing interactive dashboards, and delivering actionable insights to support strategic decision-making.
Key Responsibilities:
· Azure Data Factory: Design, build, and manage ETL pipelines in Azure Data Factory to facilitate seamless data integration across systems.
· SQL & Data Management: Develop and optimize SQL queries for extracting, transforming, and loading data while ensuring data quality and accuracy.
· Data Transformation & Modelling: Build and maintain data models using Azure Analysis Services (AAS), optimizing for performance and usability.
· Power BI Development: Create, maintain, and enhance complex Power BI reports and dashboards tailored to business requirements.
· DAX Expertise: Write and optimize advanced DAX queries and calculations to deliver dynamic and insightful reports.
· Collaboration: Work closely with stakeholders to gather requirements, deliver insights, and help drive data-informed decision-making across the organization.
· Attention to Detail: Ensure data consistency and accuracy through rigorous validation and testing processes.
· Presentation & Reporting: Effectively communicate insights and updates to stakeholders, delivering clear and concise documentation.
Skills and Qualifications:
Technical Expertise:
· Proficient in Azure Data Factory for building ETL pipelines and managing data flows.
· Strong experience with SQL, including query optimization and data transformation.
· Knowledge of Azure Analysis Services for data modelling
· Advanced Power BI skills, including DAX, report development, and data modelling.
· Familiarity with Microsoft Fabric and Azure Analytics (a plus)
· Analytical Thinking: Ability to work with complex datasets, identify trends, and tackle ambiguous challenges effectively
Communication Skills:
· Excellent verbal and written communication skills, with the ability to convey complex technical information to non-technical stakeholders.
· Educational Qualification: Minimum of a Bachelor's degree, preferably in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or a related discipline
What’s in it for you
Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement.

Experience: 5-8 Years
Work Mode: Remote
Job Type: Fulltime
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elastic Search, and AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture
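To illustrate the Elastic Search and AWS S3 pieces of the pipeline work described above, here is a small, hypothetical sketch that reads newline-delimited JSON records from an S3 object with boto3 and bulk-indexes them into Elasticsearch using the official Python client. The bucket, key, index name, and cluster endpoint are placeholders, and a production pipeline would add batching, retries, and mapping/schema management.

```python
# Hypothetical S3 -> Elasticsearch loader: stream NDJSON records from an S3
# object and bulk-index them. Bucket, key, index, and host are placeholders.
import json

import boto3
from elasticsearch import Elasticsearch, helpers

BUCKET = "example-data-lake"           # placeholder bucket
KEY = "events/2024/01/01/events.json"  # placeholder key (newline-delimited JSON)
INDEX = "events-2024.01.01"            # placeholder index name

s3 = boto3.client("s3")
es = Elasticsearch("http://localhost:9200")  # placeholder cluster endpoint


def generate_actions():
    """Yield one bulk-index action per NDJSON line in the S3 object."""
    obj = s3.get_object(Bucket=BUCKET, Key=KEY)
    for line in obj["Body"].iter_lines():
        if not line:
            continue
        doc = json.loads(line)
        yield {"_index": INDEX, "_id": doc.get("event_id"), "_source": doc}


if __name__ == "__main__":
    success, errors = helpers.bulk(es, generate_actions(), raise_on_error=False)
    print(f"indexed={success} errors={len(errors)}")
```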
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- Minimum of 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elastic Search and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Job Description :
As a Data & Analytics Architect, you will lead key data initiatives, including cloud transformation, data governance, and AI projects. You'll define cloud architectures, guide data science teams in model development, and ensure alignment with data architecture principles across complex solutions. Additionally, you will create and govern architectural blueprints, ensuring standards are met and promoting best practices for data integration and consumption.
Responsibilities :
- Play a key role in driving a number of data and analytics initiatives including cloud data transformation, data governance, data quality, data standards, CRM, MDM, Generative AI and data science.
- Define cloud reference architectures to establish reusable patterns and promote best practices for data integration and consumption.
- Guide the data science team in implementing data models and analytics models.
- Serve as a data science architect delivering technology and architecture services to the data science community.
- In addition, you will guide application development teams in the data design of complex solutions within a large data ecosystem, ensuring that teams are aligned with data architecture principles, standards, strategies, and target states.
- Create, maintain, and govern architectural views and blueprints depicting the Business and IT landscape in its current, transitional, and future state.
- Define and maintain standards for artifacts containing architectural content within the operating model.
Requirements :
- Strong cloud data architecture knowledge (preference for Microsoft Azure)
- 8-10+ years of experience in data architecture, with proven experience in cloud data transformation, MDM, data governance, and data science capabilities.
- Design reusable data architecture and best practices to support batch/streaming ingestion; efficient batch, real-time, and near-real-time integration/ETL; integration of quality rules; and structuring of data for analytic consumption by end users.
- Ability to lead software evaluations including RFP development, capabilities assessment, formal scoring models, and delivery of executive presentations supporting a final recommendation.
- Well versed in the Data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Standards, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, non-traditional data and multi-media, ETL, ESB).
- Experience with cloud data technologies such as Azure Data Factory, Azure Data Fabric, Azure Storage, Azure Data Lake Storage, Azure Databricks, Azure AD, Azure ML, etc.
- Experience with big data technologies such as Cloudera, Spark, Sqoop, Hive, HDFS, Flume, Storm, and Kafka.
Role & Responsibilities
About the Role:
We are seeking a highly skilled Senior Data Engineer with 5-7 years of experience to join our dynamic team. The ideal candidate will have a strong background in data engineering, with expertise in data warehouse architecture, data modeling, ETL processes, and building both batch and streaming pipelines. The candidate should also possess advanced proficiency in Spark, Databricks, Kafka, Python, SQL, and Change Data Capture (CDC) methodologies.
Key responsibilities:
Design, develop, and maintain robust data warehouse solutions to support the organization's analytical and reporting needs.
Implement efficient data modeling techniques to optimize performance and scalability of data systems.
Build and manage data lakehouse infrastructure, ensuring reliability, availability, and security of data assets.
Develop and maintain ETL pipelines to ingest, transform, and load data from various sources into the data warehouse and data lakehouse.
Utilize Spark and Databricks to process large-scale datasets efficiently and in real-time.
Implement Kafka for building real-time streaming pipelines and ensure data consistency and reliability.
Design and develop batch pipelines for scheduled data processing tasks.
Collaborate with cross-functional teams to gather requirements, understand data needs, and deliver effective data solutions.
Perform data analysis and troubleshooting to identify and resolve data quality issues and performance bottlenecks.
Stay updated with the latest technologies and industry trends in data engineering and contribute to continuous improvement initiatives.
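As a brief, hypothetical sketch of the streaming side of this role (Spark plus Kafka), the snippet below uses Spark Structured Streaming to consume JSON change events from a Kafka topic and append the parsed records as Parquet, assuming the Spark–Kafka connector package is on the classpath. Broker addresses, topic, schema, and paths are placeholders; a real CDC pipeline would also handle updates/deletes (for example with Delta Lake MERGE) and exactly-once delivery concerns.

```python
# Hypothetical Spark Structured Streaming job: consume JSON change events from
# Kafka and append them as Parquet. Brokers, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders_cdc_stream").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Read the raw Kafka stream (the value column holds a JSON-encoded change event)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder brokers
    .option("subscribe", "orders_cdc")                   # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Parse the JSON payload into typed columns
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Parquet sink with checkpointing for fault tolerance
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-curated/orders_cdc/")        # placeholder path
    .option("checkpointLocation", "s3://example-chk/orders/")  # placeholder path
    .outputMode("append")
    .start()
)

query.awaitTermination()
```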
Profile: Product Support Engineer
🔴 Experience: 1 year as Product Support Engineer.
🔴 Location: Mumbai (Andheri).
🔴 5 days of working from office.
Skills Required:
🔷 Experience in providing support for ETL or data warehousing is preferred.
🔷 Good understanding of Unix and database concepts.
🔷 Experience working with SQL and NoSQL databases and writing simple queries to get data for debugging issues.
🔷 Being able to creatively come up with solutions for various problems and implement them.
🔷 Experience working with REST APIs and debugging requests and responses using tools like Postman.
🔷 Quick troubleshooting and diagnosing skills.
🔷 Knowledge of customer success processes.
🔷 Experience in document creation.
🔷 High availability for fast response to customers.
🔷 Language knowledge required in one of Node.js, Python, or Java.
🔷 Background in AWS, Docker, Kubernetes, Networking - an advantage.
🔷 Experience in SAAS B2B software companies - an advantage.
🔷 Ability to join the dots around multiple events occurring concurrently and spot patterns.
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of Data Structure
● Good at SQL query/optimization
● Strong fundamentals of OOPs programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
Job Description for QA Engineer:
- 6-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience with Azure and Snowflake testing is a plus
- Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus
- Strong data warehousing concepts; experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to financial domain knowledge is considered a plus
- Testing data readiness (data quality) and addressing code or data issues
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify root cause of code/data issues and come up with a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
Key Attributes include:
- Team player with professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
Senior ETL Developer (SAS)
We are seeking a skilled and experienced ETL Developer with strong SAS expertise to join our growing Data Management team in Kolkata. The ideal candidate will be responsible for designing, developing, implementing, and maintaining ETL processes to extract, transform, and load data from various source systems into the banking data warehouse and other BFSI data repositories. This role requires a strong understanding of banking data warehousing concepts, ETL methodologies, and proficiency in SAS programming for data manipulation and analysis.
Responsibilities:
• Design, develop, and implement ETL solutions using industry best practices and tools, with a strong focus on SAS.
• Develop and maintain SAS programs for data extraction, transformation, and loading.
• Work with source system owners and data analysts to understand data requirements and translate them into ETL specifications.
• Build and maintain data pipelines for the banking database to ensure data quality, integrity, and consistency.
• Perform data profiling, data cleansing, and data validation to ensure accuracy and reliability of data.
• Troubleshoot and resolve the bank’s ETL-related issues, including data quality problems and performance bottlenecks.
• Optimize ETL processes for performance and scalability.
• Document ETL processes, data flows, and technical specifications.
• Collaborate with other team members, including data architects, data analysts, and business users.
• Stay up-to-date with the latest SAS-related ETL technologies and best practices, particularly within the banking and financial services domain.
• Ensure compliance with data governance policies and security standards.
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience as an ETL Developer, preferably within the banking or financial services industry.
• Strong proficiency in SAS programming for data manipulation and ETL processes.
• Experience with other ETL tools (e.g., Informatica PowerCenter, DataStage, Talend) is a plus.
• Solid understanding of data warehousing concepts, including dimensional modeling (star schema, snowflake schema).
• Experience working with relational databases (e.g., Oracle, SQL Server) and SQL.
• Familiarity with data quality principles and practices.
• Excellent analytical and problem-solving skills.
• Strong communication and interpersonal skills.
• Ability to work independently and as part of a team.
• Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
• Understanding of regulatory requirements in the banking sector (e.g., RBI guidelines) is an advantage.
Preferred Skills:
• Experience with cloud-based data warehousing solutions (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Knowledge of big data technologies (e.g., Hadoop, Spark).
• Experience with agile development methodologies.
• Relevant certifications (e.g., SAS Certified Professional).
What We Offer:
• Competitive salary and benefits package.
• Opportunity to work with cutting-edge technologies in a dynamic environment.
• Exposure to the banking and financial services domain.
• Professional development and growth opportunities.
• A collaborative and supportive work culture.