4+ ELT Jobs in Bangalore (Bengaluru) on CutShort.io
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or dbt to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
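The transformation step of an ELT workflow like the ones above can be sketched in a few lines. This is a minimal, hypothetical example; the field names, types, and quality rules are invented for illustration and are not tied to any specific warehouse or tool.

```python
# Minimal sketch of one ELT transformation step with a quality gate:
# cast types, derive clean values, and drop rows that fail basic checks.
from datetime import date

def transform_orders(raw_rows):
    """Return cleaned rows; reject rows with a missing id or negative amount."""
    clean = []
    for row in raw_rows:
        amount = float(row.get("amount") or -1)
        # Quality gate: require an order_id and a non-negative amount
        if not row.get("order_id") or amount < 0:
            continue
        clean.append({
            "order_id": str(row["order_id"]),
            "amount": round(amount, 2),
            "order_date": date.fromisoformat(row["order_date"]),
        })
    return clean

raw = [
    {"order_id": 1, "amount": "19.994", "order_date": "2026-01-05"},
    {"order_id": None, "amount": "5.00", "order_date": "2026-01-06"},  # rejected
]
loaded = transform_orders(raw)
```

In a Dataform or dbt project the same logic would normally live in SQL models rather than Python, but the pattern (cast, validate, publish) is the same.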
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)
Experience: 4–8 Years
Location: Bengaluru (On-site / Hybrid)
Company: Publicly Listed, Global Product Platform
🧠 About the Mission
We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.
This is not incremental improvement.
This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.
We are re-architecting:
- Legacy systems → AI-native architectures
- Static pipelines → autonomous, self-healing systems
- Data platforms → intelligent, learning systems
- Software workflows → agentic execution layers
This is the kind of shift you would expect from companies like Google or Microsoft, except here you will build it from day zero and scale it globally.
🧠 The Opportunity: This role sits at the intersection of three high-impact domains:
1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI
2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines
3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure
We are building systems where:
- Data platforms optimize themselves using ML/LLMs
- Pipelines are autonomous, self-healing, and adaptive
- Queries are generated, optimized, and executed intelligently
- Infrastructure learns from usage and evolves continuously
This is: AI as the control plane for data infrastructure
🧩 What You’ll Work On
You will design and build AI-native systems deeply embedded inside data infrastructure.
1. AI-Native Data Platforms
- Build LLM-powered interfaces:
- Natural language → SQL / pipelines / transformations
- Design semantic data layers:
- Embeddings, vector search, knowledge graphs
- Develop AI copilots:
- For data engineers, analysts, and platform users
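A semantic data layer of the kind described above rests on vector similarity search. The toy below ranks stored snippets by cosine similarity to a query vector; the 3-dimensional vectors and document names are made up purely for illustration (real systems use learned embeddings and a vector database).

```python
# Toy semantic retrieval: rank corpus entries by cosine similarity
# to a query embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical documents with hand-made 3-d "embeddings"
corpus = {
    "orders table schema":  [0.9, 0.1, 0.0],
    "clickstream events":   [0.1, 0.8, 0.3],
    "billing pipeline doc": [0.2, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]), reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.15, 0.05])  # a query "near" the orders schema
```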
2. Autonomous Data Pipelines
- Build self-healing ETL/ELT systems using AI agents
- Create pipelines that:
- Detect anomalies in real time
- Automatically debug failures
- Dynamically optimize transformations
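One building block of a self-healing pipeline is a statistical anomaly gate on run metrics. The sketch below flags a run whose row count deviates more than three standard deviations from recent history and routes it to a stubbed remediation; the threshold and metric are illustrative assumptions.

```python
# Sketch of a self-healing check: z-score a run's row count against
# recent history and trigger a (stubbed) remediation on anomaly.
import statistics

def detect_anomaly(history, latest, z_threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (latest - mean) / stdev if stdev else 0.0
    return abs(z) > z_threshold

def run_with_healing(history, latest):
    if detect_anomaly(history, latest):
        return "quarantine_and_replay"  # stand-in for the real remediation
    return "publish"

history = [1000, 1020, 980, 1010, 995]   # row counts of recent runs
action_ok = run_with_healing(history, 1005)
action_bad = run_with_healing(history, 120)
```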
3. Intelligent Query & Compute Optimization
- Apply ML/LLMs to:
- Query planning and execution
- Cost-based optimization using learned models
- Workload prediction and scheduling
- Build systems that:
- Learn from query patterns
- Continuously improve performance and cost efficiency
4. Distributed Data + AI Infrastructure
- Architect systems operating at:
- Billions of events per day
- Petabyte-scale data
- Work with:
- Distributed compute engines (Spark / Flink / Ray class systems)
- Streaming systems (Kafka-class infra)
- Vector databases and hybrid retrieval systems
5. Learning Systems & Feedback Loops
- Build closed-loop AI systems:
- Execution → feedback → model updates
- Develop:
- Continual learning pipelines
- Online learning systems for infra optimization
- Experimentation frameworks (A/B, bandits, eval pipelines)
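The bandit half of an experimentation framework can be sketched with epsilon-greedy arm selection. Everything here is illustrative: the "arms" are two hypothetical query plans, and the payoffs are fixed constants so the example stays deterministic.

```python
# Minimal epsilon-greedy bandit loop: mostly exploit the arm with the
# best observed mean reward, occasionally explore at random.
import random

def epsilon_greedy(history, epsilon, rng):
    if rng.random() < epsilon:
        return rng.choice(sorted(history))  # explore
    def mean(arm):
        pulls = history[arm]
        return sum(pulls) / len(pulls) if pulls else 0.0
    return max(sorted(history), key=mean)   # exploit

rng = random.Random(0)
history = {"plan_a": [], "plan_b": []}
payoff = {"plan_a": 0.2, "plan_b": 0.8}  # fixed rewards, for determinism

for _ in range(200):
    arm = epsilon_greedy(history, epsilon=0.1, rng=rng)
    history[arm].append(payoff[arm])

total_pulls = len(history["plan_a"]) + len(history["plan_b"])
```

In practice the reward would be a noisy metric such as latency or cost, and one would reach for Thompson sampling or UCB rather than a fixed epsilon.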
6. LLM & Agentic Systems (Infra-Aware)
- Build agents that understand data systems
- Enable:
- Autonomous pipeline debugging
- Root cause analysis for infra failures
- Intelligent orchestration of data workflows
🧠 What We’re Looking For
Core Foundations
- Strong grounding in:
- Machine Learning, Deep Learning, NLP
- Statistics, optimization, probabilistic systems
- Distributed systems fundamentals
- Deep understanding of:
- Transformer architectures
- Modern LLM ecosystems
Hands-On Expertise
- Experience building:
- LLM / GenAI systems (RAG, fine-tuning, embeddings)
- Data platforms (warehouse, lake, lakehouse architectures)
- Distributed pipelines and compute systems
- Strong programming skills:
- Python (ML/AI stack)
- SQL (deep understanding — query planning, optimization mindset)
Systems Thinking (Critical)
You think in systems, not components.
- Built or worked on:
- Large-scale data pipelines
- High-throughput distributed systems
- Low-latency, high-concurrency architectures
- Understand:
- Query optimization and execution
- Data partitioning, indexing, caching
- Trade-offs in distributed systems
🔥 What Sets You Apart (Top 1%)
- Built AI-powered data platforms or infra systems in production
- Designed or contributed to:
- Query engines / optimizers
- Data observability / lineage systems
- AI-driven infra or AIOps platforms
- Experience with:
- Multi-modal AI (logs, metrics, traces, text)
- Agentic AI systems
- Autonomous infrastructure
- Worked on systems at scale comparable to:
- Google (BigQuery-like systems)
- Meta (real-time analytics infra)
- Snowflake / Databricks (lakehouse architectures)
🧬 Ideal Background (Not Mandatory)
We often see strong candidates from:
- Data infrastructure or platform engineering teams
- AI-first startups or research-driven environments
- High-scale product companies
Experience building:
- Internal platforms used by thousands of engineers
- Systems serving millions of users / high throughput workloads
- Multi-region, distributed cloud systems
🧠 The Kind of Problems You’ll Solve
- Can LLMs replace traditional query optimizers?
- How do we build self-healing data pipelines at scale?
- Can data systems learn from every query and improve automatically?
- How do we embed reasoning and planning into infrastructure layers?
- What does a fully autonomous data platform look like?
Backgrounds We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions with strong research or product backgrounds, including:
- Leading engineering schools in India and globally
- Top product companies, AI startups, or research-driven environments
That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for professionals with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
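The data-quality responsibility above usually boils down to a set of mechanical checks run before a dataset is published: nulls, uniqueness, and range rules. The column names and thresholds in this sketch are hypothetical.

```python
# Illustrative pre-publication data-quality checks: null ids,
# duplicate ids, and a simple value-range rule.
def quality_report(rows):
    issues = []
    ids = [r.get("id") for r in rows]
    if any(i is None for i in ids):
        issues.append("null id")
    if len(ids) != len(set(ids)):
        issues.append("duplicate id")
    if any(not (0 <= r.get("score", 0) <= 100) for r in rows):
        issues.append("score out of range")
    return issues

good = [{"id": 1, "score": 90}, {"id": 2, "score": 75}]
bad  = [{"id": 1, "score": 90}, {"id": 1, "score": 140}]
ok_issues = quality_report(good)
bad_issues = quality_report(bad)
```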
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.
In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who
- Has healthcare experience and is passionate about helping heal people,
- Loves working with data,
- Has an obsessive focus on data quality,
- Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
- Has strong data interrogation and analysis skills,
- Defaults to written communication and delivers clean documentation, and,
- Enjoys working with customers and problem solving for them.
A day in the life at Innovaccer:
- Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
- Measure and communicate impact to our customers.
- Enable customers to activate data themselves using SQL, BI tools, or APIs so they can answer their questions at speed.
What You Need:
- 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
- Intermediate to advanced level SQL programming skills.
- Data analytics and visualization experience (using tools like Power BI)
- The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
- Ability to work in a fast-paced and agile environment.
- Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
What we offer:
- Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
- Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
- Health benefits: We cover health insurance for you and your loved ones.
- Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
- Pet-friendly office and open floor plan: No boring cubicles.

