Data engineering Jobs in Bangalore (Bengaluru)


Apply to 50+ Data engineering Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Data engineering Job opportunities across top companies like Google, Amazon & Adobe.

Optimo Capital

B V Murari
Posted by B V Murari
Bengaluru (Bangalore)
0 - 2 yrs
₹10000 - ₹25000 / mo
Product development
Artificial Intelligence (AI)
Machine Learning (ML)
Software Development
Data engineering
+2 more

Job Title: AI & Product Engineering Intern

Company: Optimo Capital (Nipun Projects and Finance Pvt. Ltd.)
Location: Bengaluru, Karnataka (On-site / Hybrid)
Duration: 4–6 months | Immediate / Rolling start
Stipend: Based on profile


About Optimo Capital

Optimo Capital is an RBI-registered NBFC and India's first phygital Loan Against Property lender. We serve MSME business owners across 5 states, offering loans of ₹10L–₹2Cr with 4-day disbursal. Our tech team is actively building AI systems that redefine how lending operations work — from smart underwriting to autonomous calling agents to document intelligence and real-time property valuation.


What You'll Work On


You will be embedded in an early-stage fintech team working hands-on to make AI systems actually work in the real world — not in theory, not in demos, but in production. This means getting deep into existing projects — an autonomous AI calling system, an intelligent document extraction pipeline, and agentic AI frameworks — running experiments, figuring out what works and what doesn't, and iterating fast until it does.
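
To make that concrete: a large part of the work looks like the sketch below, where an LLM is asked to pull structured fields out of raw document text. This is a minimal illustration only; the OpenAI-style client, model name, and field list are assumptions, not Optimo's actual stack.

```python
# Minimal document-extraction sketch. The OpenAI-style client and model
# name are assumptions for illustration, not the company's actual stack.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_fields(document_text: str) -> dict:
    """Ask the model to return loan-document fields as strict JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract borrower_name, property_address, and "
                        "loan_amount from the document. Reply with JSON only."},
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_fields("Borrower: A. Kumar, 12 MG Road, Bengaluru. Loan: Rs. 25,00,000"))
```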

Expect a lot of trial and error. Expect to think from first principles. Expect to question assumptions and find creative solutions to problems that don't have a textbook answer. The job is as much about curiosity and persistence as it is about technical skill.


Beyond the engineering, you will also be expected to think like a product person — understanding the business impact of what you build, owning parts of the product, and connecting the dots between what the AI does and why it matters to the business.


What We're Looking For

  • First-principles thinker with strong problem-solving fundamentals
  • Currently pursuing or recently completed a degree in Computer Science, Mathematics, Statistics, or Engineering
  • Solid programming fundamentals (Python, Node.js, etc.) and the ability to pick up a new language when required
  • Has shipped something: an internship, a project, a GitHub repo, a hackathon submission, or a research prototype that demonstrates end-to-end thinking
  • Understands software systems beyond just writing code — APIs, data flows, system design at a basic level
  • Can articulate the product reason behind technical decisions — not just "how" but "why"
  • Comfortable working in ambiguity and taking ownership of open-ended problems


What Makes a Great Fit

We're looking for someone who reads about AI systems on weekends not because they have to, but because they're genuinely curious. You should be the kind of person who, when given a problem, immediately starts thinking about the product experience and the architecture simultaneously — and can hold both in your head at once. If you've built an LLM-powered tool, integrated a voice API, parsed messy documents, or simply gone deep on understanding how modern AI systems work under the hood — we'd love to talk.

Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 10 yrs
Up to ₹40L / yr (varies)
Windows Azure
databricks
Data Structures
Data engineering

We are hiring an Associate Technical Architect with strong expertise in Azure-based data platforms to design scalable data lakes, data warehouses, and enterprise data pipelines, while working with global teams.


Key Responsibilities

  • Design and implement scalable data lake, data warehouse, and lakehouse architectures on Azure
  • Build resilient data pipelines using Azure services
  • Architect and optimize cloud-based data platforms
  • Improve large-scale data processing and query performance
  • Collaborate with engineering teams, QA, product managers, and stakeholders
  • Communicate technical roadmap, risks, and mitigation strategies


Must-Have Skills:


  • 6+ years of experience in Azure Data Engineering / Data Architecture

Azure Data Platform

  • Experience with Azure Data Factory
  • Hands-on with Azure Databricks and PySpark
  • Experience with Azure Data Lake Storage
  • Knowledge of Azure Synapse or Azure SQL for data warehousing

Programming & Data Skills

  • Strong programming skills in Python and PySpark
  • Advanced SQL with query optimization and performance tuning
  • Experience building ETL / ELT data pipelines (see the sketch below)
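
As a minimal illustration of the Python/PySpark and ETL/ELT skills listed above, here is a sketch of a Databricks-style batch transform. The ADLS paths and column names are hypothetical; Delta Lake support is assumed to be available on the cluster.

```python
# Minimal PySpark batch-transform sketch (hypothetical ADLS paths and
# columns; Delta Lake support is built in on Databricks clusters).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily").getOrCreate()

raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

daily = (
    raw.filter(F.col("status") == "COMPLETE")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("customer_id").alias("customers"))
)

(daily.write.format("delta")      # write a partitioned Delta table
      .mode("overwrite")
      .partitionBy("order_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders_daily/"))
```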

Data Architecture Knowledge

  • Understanding of MPP databases
  • Knowledge of partitioning, indexing, and performance optimization
  • Experience with data modeling (dimensional, normalized, lakehouse)

Cloud Fundamentals

  • Azure security, networking, scalability, and disaster recovery
  • Experience with on-premise to Azure migrations

Certification (Preferred)

  • Azure Data Engineer or Azure Solutions Architect certification

Good-to-Have Skills

  • Domain experience in FSI, Retail, or CPG
  • Exposure to data governance tools
  • Experience with BI tools such as Power BI or Tableau
  • Familiarity with Terraform, CI/CD pipelines, or Azure DevOps
  • Experience with NoSQL databases such as Cosmos DB or MongoDB

Soft Skills

  • Strong problem-solving and analytical thinking
  • Good communication and stakeholder management
  • Ability to translate technical concepts into business outcomes
  • Experience working with global or distributed teams
Searce Inc

Tejashree Kokare
Posted by Tejashree Kokare
Bengaluru (Bangalore), Pune, Mumbai
6 - 15 yrs
Best in industry
Google Cloud Platform (GCP)
Data engineering
Data warehouse architecture
Data architecture
Data modeling
+6 more

Solutions Architect - Data Engineering


As a Searce lead FDS (‘forward deployed solver’), you will deliver modern tech solutions advisory and ‘futurify’ consulting, architecting scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.

If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.

You're not a sales lead. You're the transformation driver.


What are we looking for

a real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.

  • Improver. Solver. Futurist.
  • Great sense of humor.
  • ‘Possible. It is.’ Mindset.
  • Compassionate collaborator. Bold experimenter. Tireless iterator.
  • Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
  • Thinks in systems. Solves at scale.


This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way, and then identifies three better ways to do it, we’d love to chat with you.


Your Responsibilities

what you will wake up to solve.


You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customer's entire data intelligence ecosystem.

As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack including Snowflake and Databricks alongside core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions: you'll build them, break them, and perfect them.


  • Solution Design & Pre-sales Excellence:Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
  • Design Core Data Engineering: Master data modeling, architect high-performance data ingestion pipelines, and ensure data quality and governance throughout the data lifecycle.
  • Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
  • Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
  • Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
  • Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.


Functional Skills:


  • Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
  • Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
  • Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
  • Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
  • Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
  • Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
  • Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.


Tech Superpowers:


  • Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
  • Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
  • AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
  • Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.


Experience & Relevance:


  • Architectural Leadership (8+ Years): You bring extensive experience (8+ years) specifically in a Solutions Architect role, focused on data engineering and platform building.
  • Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
  • Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
  • Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data warehouse and GCP BigQuery for analytical workloads.
  • Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
  • AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.
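
As a minimal sketch of the Airflow-style orchestration referenced above (the DAG id, schedule, and task bodies are placeholders, not a Searce artifact):

```python
# Minimal Airflow DAG sketch (Airflow 2.x; DAG id, schedule, and task
# bodies are placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_events():
    print("pull yesterday's events from the streaming layer")  # placeholder

def load_to_warehouse():
    print("load curated files into the analytics warehouse")  # placeholder

with DAG(
    dag_id="events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_events",
                            python_callable=ingest_events)
    load = PythonOperator(task_id="load_to_warehouse",
                          python_callable=load_to_warehouse)
    ingest >> load  # load runs only after ingestion succeeds
```

In a production pipeline the placeholder callables would carry real ingestion and load logic, plus retries, alerting, and SLAs.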


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native, engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the Possible’.


Don’t Just Send a Resume. Send a Statement.


So, if you are passionate about tech, the future & what you read above (we really are!), apply here to experience the ‘Art of the Possible’.

JK Technosoft Ltd
Akanksh Gupta
Posted by Akanksh Gupta
Bengaluru (Bangalore), Noida
11 - 16 yrs
₹45L - ₹55L / yr
Data architecture
Azure cloud
databricks
snowflake
Data modeling
+1 more

About the Role: 

 

We are looking for a Data Architect with a strong background in data engineering & cloud data platforms. The ideal candidate will design and implement scalable data architectures that power enterprise analytics, AI/ML, and GenAI solutions — ensuring data availability, quality, and governance across the organization. 

 

Key Responsibilities: 

 

Data Architecture & Strategy 

  • Design & Architecture: Design and implement robust, scalable, and optimized data engineering solutions on the Databricks platform. Architect data pipelines that scale efficiently and reliably. 
  • Data Pipeline Development: Develop ETL/ELT pipelines leveraging Databricks notebooks, Delta Lake, the Snowflake stack, Azure Data Factory, and similar services.
  • Cloud Integration: Work closely with cloud platforms like Azure, AWS, or GCP to integrate Databricks or Snowflake with data storage (e.g., ADLS, S3, etc.), databases, and other services. 
  • Performance Optimization: Optimize the performance of data workflows by tuning Databricks clusters, improving query performance, and identifying bottlenecks in data processing. 
  • Collaboration: Collaborate with data scientists, analysts, and business stakeholders to understand business requirements and translate them into scalable data solutions. 
  • Data Governance & Security: Ensure best practices for data security, governance, and compliance when working with sensitive or large datasets. 
  • Automation & Monitoring: Automate data pipeline deployments and create monitoring dashboards for ongoing performance checks. 
  • Continuous Improvement: Stay up to date with the latest Databricks features and Snowflake ecosystem best practices to continuously improve existing systems and processes.

 

Required Skills & Experience: 

 

  • 12+ years of experience in Data Architecture / Data Engineering roles.  
  • Proven expertise in data modeling, ETL/ELT design, and cloud-based data solutions (AWS Redshift, Snowflake, BigQuery, or Synapse). 
  • Hands-on experience with data pipeline orchestration tools (Airflow, DBT, Azure Data Factory, etc.). 
  • Proficiency in Python, SQL, and Spark for data processing and integration. 
  • Experience with API integrations and data APIs for AI systems. 
  • Excellent communication and stakeholder management skills. 


AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT
+17 more

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or DBT to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure (see the sketch after this list)
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution
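
To ground the BigQuery work sketched in the responsibilities above, here is a minimal example using the google-cloud-bigquery client; the project, dataset, and table names are placeholders.

```python
# Minimal BigQuery sketch (placeholder project, dataset, and table names).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Materialize a partitioned daily aggregate; partition pruning keeps later
# date-filtered queries cheap.
client.query("""
    CREATE OR REPLACE TABLE analytics.orders_daily
    PARTITION BY order_date AS
    SELECT DATE(created_at) AS order_date,
           region,
           SUM(amount) AS total_amount
    FROM raw.orders
    WHERE status = 'COMPLETE'
    GROUP BY order_date, region
""").result()  # .result() blocks until the job finishes

for row in client.query(
    "SELECT order_date, total_amount FROM analytics.orders_daily LIMIT 5"
).result():
    print(row["order_date"], row["total_amount"])
```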


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
TalentXO
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
6 - 10 yrs
₹32L - ₹40L / yr
ETL
Data engineering
Dataform
BigQuery
dbt
+5 more

Note: “Urgently Hiring – Immediate Joiners Preferred”

Data Engineering

Role & Responsibilities

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP/BigQuery expertise.

Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or dbt to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution

Ideal Candidate

  • Strong Data Engineer Profile
  • Mandatory (Experience 1) – Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Mandatory (Experience 2) – Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Mandatory (Experience 3) – Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Mandatory (Experience 4) – Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Mandatory (Core Skill 1) – Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Mandatory (Core Skill 2) – Must have programming experience in Python and/or SQL for data processing.
  • Mandatory (Core Skill 3) – Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Preferred (Experience 1) – Exposure to data migration projects and/or data mesh architecture concepts.
  • Preferred (Skill 1) – Experience with Spark/PySpark or large-scale data processing frameworks.
  • Preferred (Company) – Experience working in product-based companies or data-driven environments.
  • Preferred (Education) – Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


Talent Pro
Bengaluru (Bangalore), Mumbai, Hyderabad, Chennai
6 - 10 yrs
₹30L - ₹50L / yr
Data engineering

Strong Data Engineer Profile

Mandatory (Experience 1) – Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.

Mandatory (Experience 2) – Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.

Mandatory (Experience 3) – Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.

Mandatory (Experience 4) – Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).

Mandatory (Core Skill 1) – Must have strong SQL skills with experience in writing complex queries and optimizing performance.

Mandatory (Core Skill 2) – Must have programming experience in Python and/or SQL for data processing.

Mandatory (Core Skill 3) – Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.

Ampera Technologies
Faisal Ashraf Nomani
Posted by Faisal Ashraf Nomani
Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
AWS
Windows Azure
Google Cloud Platform (GCP)
Large Language Models (LLM)
AI Agents
+2 more

Job Description:

 

We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems. 

This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production. 

You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.

 

Key Responsibilities

AI & Agentic Infrastructure 

  • Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows. 
  • Build scalable runtime environments for LLM orchestration frameworks. 
  • Enable deployment of AI copilots, assistants, and autonomous decision systems. 

Common frameworks may include: 

  • LangChain 
  • LlamaIndex 
  • AutoGPT 

 

LLMOps & AI Model Lifecycle 

Design and manage LLMOps pipelines for the full lifecycle of large language models: 

  • Model deployment 
  • Prompt management 
  • Versioning 
  • Evaluation and testing 
  • Model monitoring 

Integrate with AI platforms such as: 

  • Azure Machine Learning 
  • Amazon SageMaker 
  • Vertex AI 

 

Retrieval-Augmented Generation (RAG) Infrastructure 

Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs (a minimal sketch follows the list below).

Responsibilities include: 

  • Document ingestion pipelines 
  • Embedding generation workflows 
  • Knowledge indexing 
  • Query orchestration 
  • Retrieval optimization 
  • Support scalable semantic search architectures. 
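
Tying the steps above together, here is a toy retrieve-then-generate sketch. The bag-of-words embed() is a stand-in for a real embedding model, and the final prompt would be passed to an LLM in an actual pipeline.

```python
# Toy RAG sketch: embed a small corpus, retrieve the nearest chunks, and
# build a grounded prompt. embed() is a bag-of-words stand-in for a real
# embedding model.
import numpy as np

corpus = ["Refunds are processed within 5 business days.",
          "Loan approval requires a registered property deed.",
          "Support hours are 9am to 6pm IST."]

vocab = sorted({w for doc in corpus for w in doc.lower().split()})

def embed(texts):
    """Substring-count vectors, unit-normalized; stands in for a model."""
    vecs = np.array([[t.lower().count(w) for w in vocab] for t in texts], float)
    return vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-9)

corpus_vecs = embed(corpus)

def retrieve(question, k=2):
    scores = corpus_vecs @ embed([question])[0]     # cosine on unit vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

question = "What does a loan application require?"
context = "\n".join(retrieve(question))
print(f"Answer using only this context:\n{context}\n\nQ: {question}")
# In a real pipeline this prompt would be sent to the LLM.
```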

 

Vector Database & Knowledge Infrastructure 

Deploy and manage vector databases used for AI applications and semantic retrieval (a minimal sketch follows at the end of this subsection).

Common technologies include: 

  • Pinecone 
  • Weaviate 
  • Milvus 
  • FAISS 

Responsibilities include: 

  • Index optimization 
  • Query latency tuning 
  • Scalable embedding storage 
  • Hybrid search architecture 
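
As one concrete example of the index work above, a minimal FAISS sketch (FAISS is listed among the common technologies; the vectors here are synthetic, while real pipelines would store model-generated embeddings):

```python
# Minimal FAISS sketch: build an inner-product index over unit vectors so
# scores behave like cosine similarity, then query it.
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)
vectors = rng.normal(size=(10_000, dim)).astype("float32")
faiss.normalize_L2(vectors)             # unit-normalize in place

index = faiss.IndexFlatIP(dim)          # exact inner-product search
index.add(vectors)

query = rng.normal(size=(1, dim)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)    # top-5 nearest embeddings
print(ids[0], scores[0])
```

At production scale, an exact flat index would typically give way to an approximate one (e.g., IVF or HNSW) to trade a little recall for much lower query latency.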

 

Multi-Cloud AI Infrastructure 

Design and maintain AI-ready infrastructure across: 

  • Amazon Web Services 
  • Microsoft Azure 
  • Google Cloud Platform 

Key responsibilities include: 

  • GPU infrastructure management 
  • Distributed training environments 
  • Hybrid cloud integrations with on-prem data centers 
  • Infrastructure scaling for AI workloads 

 

Data Platforms & Integration 

  • Support deployment and optimization of data lakes, data warehouses, and streaming platforms. 
  • Work with data engineering teams to ensure secure and scalable data infrastructure. 

 

Cloud Architecture & Infrastructure 

  • Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud. 
  • Build hybrid cloud architectures integrating on-premise environments with cloud platforms. 
  • Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads. 

 

DevOps, Platform Engineering & Automation 

Build automated cloud infrastructure using modern DevOps practices. 

Tools may include: 

  • Terraform 
  • Docker 
  • Kubernetes 
  • GitHub Actions 

Responsibilities include: 

  • Infrastructure as Code (IaC) 
  • Automated deployments 
  • CI/CD pipelines for AI models and services 
  • Platform reliability and scalability 

 

AI Observability & Monitoring 

Implement observability frameworks to monitor AI systems in production (a minimal sketch follows the tool list below).

This includes: 

  • Model performance monitoring 
  • Prompt evaluation 
  • Hallucination detection 
  • Latency and throughput analysis 
  • Cost monitoring for LLM usage 

Tools may include: 

  • Arize AI 
  • WhyLabs 
  • Weights & Biases 

 

Security, Governance & Responsible AI 

Ensure AI systems follow strong governance and security practices. 

Responsibilities include: 

  • Data privacy and compliance 
  • Model governance frameworks 
  • Secure model deployment 
  • Monitoring model bias and drift 
  • AI risk management 

Support enterprise frameworks for Responsible AI and AI compliance. 

 

Data & Security 

  • Experience with data lake architectures, distributed storage, and ETL pipelines 
  • Knowledge of data security, encryption, IAM, and compliance frameworks 
  • Familiarity with AI governance and responsible AI practices 

 

 

Required Skills 

Cloud & Infrastructure 

  • Strong experience in Azure (must have), AWS, or GCP
  • Hybrid and multi-cloud architecture 
  • GPU infrastructure management 

DevOps & Automation 

  • Kubernetes 
  • Docker 
  • Terraform 
  • CI/CD pipelines 

AI / ML Platforms 

  • MLOps pipelines 
  • Model deployment 
  • Model monitoring 

AI Application Infrastructure 

  • Vector databases 
  • RAG pipelines 
  • LLM orchestration frameworks 

Programming 

Experience in one or more languages: 

  • Python 
  • Go 
  • Java 
  • TypeScript 

 

 

 

Preferred Qualifications 

  • Experience building AI copilots or autonomous agents 
  • Knowledge of distributed model training and GPU infrastructure
  • Familiarity with AI evaluation frameworks, model monitoring, drift detection, and AI observability
  • Experience building enterprise AI platforms 

 

Education & Experience 

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field 
  • 4–8+ years of experience in cloud infrastructure, DevOps, or platform engineering
  • Experience working in data-driven or AI-focused environments 

 

 

What Success Looks Like 

  • Reliable ML model deployment pipelines and infrastructure for LLMs and AI agents
  • Scalable RAG knowledge platforms
  • Efficient multi-cloud infrastructure management and fast deployment cycles for AI products
  • Secure and scalable AI-ready cloud platforms 
  • Strong automation and governance across cloud and AI systems 


ManpowerGroup
Shirisha Jangi
Posted by Shirisha Jangi
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
₹20L - ₹27L / yr
Data engineering
Java
Python
SQL
Scala
+3 more

Immediate hiring for Senior Data Engineer

📍 Location: Hyderabad/Bangalore

💼 Experience: 7+ Years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0–1 month (serving notice only)

 

   We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.

 

🔎 Key Responsibilities:

  • Data Pipeline Development
  • Data Modeling and Architecture
  • Data Integration and API Development
  • Data Infrastructure Management
  • Collaboration and Documentation

 

🎯 Required Skills:

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • 7+ years of proven experience in data engineering, software development, or related technical roles.
  • 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
  • 7+ years of experience with database systems, data modeling, and advanced SQL.
  • 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
  • Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
  • Strong analytical, problem-solving, and debugging skills with high attention to detail.
  • Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
  • Ability to adapt to rapidly evolving technologies and business requirements.

 

 

Wissen Technology

Robin Silverster
Posted by Robin Silverster
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹33L / yr
Data engineering
databricks
Python
Data Warehouse (DWH)
SQL
+1 more

Data Engineer (Snowflake/Databricks)

Required Skills:

  • 6 to 8 years as a practitioner in data engineering or a related field
  • Hands-on experience with Snowflake or Databricks
  • Experience with data processing frameworks like Apache Spark or Hadoop
  • Familiarity with cloud platforms (AWS, Azure) and their data services
  • Experience with data warehousing concepts and technologies
  • Experience with message queues and streaming platforms (e.g., Kafka)
  • Excellent communication and collaboration skills
  • Ability to work independently and as part of a geographically distributed team

Chennai, Bengaluru (Bangalore)
10 - 15 yrs
₹40L - ₹70L / yr
Salesforce
Data engineering

Hi,


We are seeking a senior data leader with deep functional expertise in Salesforce Sales and Service domains to own the enterprise data model, metrics, and analytical outcomes supporting Sales, Service, and Customer Operations.

This role is business‑first and data‑centric. The successful candidate understands how Salesforce Sales Cloud and Service Cloud data is generated, evolves over time, and is consumed by business teams, and ensures analytics accurately reflect operational reality.

Snowflake serves as the enterprise analytics platform, but Salesforce domain mastery and functional data expertise are the primary requirements for success in this role.



Core Responsibilities

Salesforce Sales & Service Data Ownership

  • Act as the data owner and architect for Salesforce Sales and Service domains.

  • Own Sales data including leads, accounts, opportunities, pipeline, bookings, revenue, forecasting, and CPQ (if applicable).
  • Own Service data including cases, case lifecycle, SLAs, backlog, escalations, and service performance metrics.
  • Define and govern enterprise‑wide KPI and metric definitions across Sales and Service.
  • Ensure alignment between Salesforce operational definitions and analytics/reporting outputs.
  • Own cross‑functional metrics spanning Sales, Service, and the customer lifecycle (e.g., customer health, renewals, churn).

Business‑Driven Data Modeling

  • Design Salesforce‑centric analytical data models that accurately reflect Sales and Service processes.

  • Model sales stage progression, pipeline history, and forecast changes over time.
  • Model service case lifecycle, SLA compliance, backlog aging, and resolution metrics.
  • Handle Salesforce‑specific complexities such as slowly changing dimensions (ownership, territory, account hierarchies); a minimal sketch follows this list.
  • Ensure data models support operational dashboards, executive reporting, and advanced analytics.
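
As a minimal sketch of the slowly-changing-dimension handling mentioned above: a two-step SCD Type 2 pattern in warehouse SQL. Table and column names are illustrative, and exact MERGE syntax varies by platform.

```python
# Two-step SCD Type 2 sketch for account ownership changes (illustrative
# table and column names; exact MERGE syntax varies by warehouse).

# Step 1: close the current version of any account whose owner changed.
CLOSE_CHANGED = """
MERGE INTO dim_account d
USING stg_account s
  ON d.account_id = s.account_id AND d.is_current = TRUE
WHEN MATCHED AND d.owner_id <> s.owner_id THEN
  UPDATE SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP;
"""

# Step 2: insert a fresh current row for new and changed accounts (both
# now lack a current version).
OPEN_NEW = """
INSERT INTO dim_account (account_id, owner_id, valid_from, valid_to, is_current)
SELECT s.account_id, s.owner_id, CURRENT_TIMESTAMP, NULL, TRUE
FROM stg_account s
LEFT JOIN dim_account d
  ON d.account_id = s.account_id AND d.is_current = TRUE
WHERE d.account_id IS NULL;
"""

for statement in (CLOSE_CHANGED, OPEN_NEW):
    print(statement)  # execute via the warehouse client in a real pipeline
```

This preserves full ownership history, which is what lets pipeline and forecast metrics be reconstructed "as of" any past date.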

Analytics Enablement & Business Partnership

  • Partner closely with Sales Operations, Service Operations, Revenue Operations, Finance, and Analytics teams.

  • Translate business questions into trusted, reusable analytical datasets.
  • Identify data quality issues or Salesforce process gaps impacting reporting and drive remediation.
  • Enable self‑service analytics through well‑documented, certified data products.


Technical Responsibilities (Enabling Focus)

  • Architect and govern Salesforce data ingestion and modeling on Snowflake.

  • Guide ELT/ETL strategies for Salesforce objects such as Opportunities, Accounts, Activities, Cases, and Entitlements.
  • Ensure reconciliation and auditability between Salesforce, Finance, and analytics layers.
  • Define data access, security, and governance aligned with Salesforce usage patterns.
  • Partner with data engineering teams on scalability, performance, and cost efficiency.


Required Experience & Skills

Salesforce Sales & Service Domain Expertise (Must‑Have)

  • Extensive hands‑on experience working with Salesforce Sales Cloud and Service Cloud data.

  • Strong understanding of sales pipeline management, forecasting, and revenue reporting.
  • Strong understanding of service case workflows, SLAs, backlog management, and service performance measurement.
  • Experience working directly with Sales Operations and Service Operations teams.
  • Ability to identify when Salesforce configuration or process issues cause reporting inconsistencies.

Data & Analytics Expertise

  • 10+ years working with business‑critical analytical data.

  • Proven experience defining KPIs, metrics, and semantic models for Sales and Service domains.
  • Strong SQL and analytical skills to validate business logic and data outcomes.
  • Experience supporting BI and analytics platforms such as Tableau, Power BI, or MicroStrategy.

Platform Experience 

  • Experience using Snowflake as an enterprise analytics platform.

  • Understanding of modern ELT/ETL and cloud data architecture concepts.
  • Familiarity with data governance, lineage, and access control best practices.


Leadership & Collaboration

  • Acts as a bridge between business stakeholders and technical teams.

  • Comfortable challenging requirements using business and data context.
  • Mentors engineers and analysts on Salesforce data nuances and business meaning.
  • Strong communicator able to explain complex Salesforce data behavior to non‑technical leaders.


Thanks,

Ampera Talent Team

TalentXO
tabbasum shaikh
Posted by tabbasum shaikh
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹34L - ₹45L / yr
Data engineering
Dremio
cloud object storage
data engineering pipelines
Lakehouse

Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.

Ideal Candidate

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.

Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.


AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 17 yrs
₹34L - ₹45L / yr
Dremio
Data engineering
Business Intelligence (BI)
Tableau
PowerBI
+51 more

Review Criteria:

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Role & Responsibilities:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


Ideal Candidate:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Bengaluru (Bangalore)
10 - 15 yrs
₹70L - ₹95L / yr
Data engineering

10+ years of experience in data engineering or backend engineering


Should have product-based company experience


2+ years in a technical leadership or team-lead role


Expert-level experience with Kafka for high-throughput streaming systems (see the sketch at the end of this list)


Strong hands-on expertise with PySpark for distributed data processing


Advanced experience with AWS Glue for ETL orchestration and metadata management


Proven experience building and upgrading real-time data lakes at scale


Hands-on knowledge of data warehouses such as Redshift or Snowflake


Experience with AWS services including S3, Kinesis, Lambda, and RDS
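
A minimal sketch combining the Kafka and PySpark requirements above; the broker address, topic, and S3 paths are placeholders, and the spark-sql-kafka package is assumed to be on the cluster.

```python
# Minimal Kafka -> Delta streaming sketch (placeholder broker, topic, and
# S3 paths; assumes the spark-sql-kafka package is on the cluster).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(events.writeStream.format("delta")
       .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
       .start("s3://example-bucket/lake/events/"))  # checkpointing enables recovery
```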

Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹50L / yr
NodeJS (Node.js)
React.js
Python
Java
Data engineering
+10 more

Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)

Experience : 5 to 10 Years

Location : Bengaluru, India

Employment Type : Full-Time | Onsite


Role Overview :

We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.

In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.


Mandatory Skills :

Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).


Key Responsibilities :

  • Architect, design, and develop scalable full-stack applications for data and AI-driven products.
  • Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
  • Deploy, integrate, and scale ML/AI models in production environments.
  • Drive system design, architecture discussions, and API/interface standards.
  • Ensure engineering best practices across code quality, testing, performance, and security.
  • Mentor and guide junior developers through reviews and technical decision-making.
  • Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
  • Monitor, diagnose, and optimize performance issues across the application stack.
  • Maintain comprehensive technical documentation for scalability and knowledge-sharing.

Required Skills & Experience :

  • Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
  • Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
  • Full Stack Proficiency :
  • Front-end : React / Angular / Vue.js
  • Back-end : Node.js / Python / Java
  • Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
  • AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
  • Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
  • Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
  • Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).

Soft Skills :

  • Excellent communication and cross-functional collaboration skills.
  • Strong analytical mindset with structured problem-solving ability.
  • Self-driven with ownership mentality and adaptability in fast-paced environments.

Preferred Qualifications (Bonus) :

  • Experience deploying distributed, large-scale ML or data-driven platforms.
  • Understanding of data governance, privacy, and security compliance.
  • Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
  • Experience working in Agile environments (Scrum/Kanban).
  • Active open-source contributions or a strong GitHub technical portfolio.
A global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
12 - 14 yrs
₹28L - ₹32L / yr
Data engineering
Machine Learning (ML)
Generative AI
Architecture
Python
+1 more

Skills: Gen AI, machine learning models, AWS/Azure, Redshift, Python, Apache Airflow, DevOps; minimum 4–5 years of experience as an Architect; should come from a data engineering background.

• 8+ years of experience in data engineering, data science, or architecture roles.

• Experience designing enterprise-grade AI platforms.

• Certification in major cloud platforms (AWS/Azure/GCP).

• Experience with governance tooling (Collibra, Alation) and lineage systems

• Strong hands-on background in data engineering, analytics, or data science.

• Expertise in building data platforms using:

o Cloud: AWS (Glue, S3, Redshift), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow).

o Compute: Spark, Databricks, Flink.

o Data modelling: dimensional, relational, NoSQL, graph.

• Proficiency with Python, SQL, and data pipeline orchestration tools.

• Understanding of ML frameworks and tools: TensorFlow, PyTorch, Scikit-learn, MLflow, etc.

• Experience implementing MLOps, model deployment, monitoring, logging, and versioning.


Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Gurugram, Mumbai, Hyderabad, Bengaluru (Bangalore)
5 - 17 yrs
₹40L - ₹57L / yr
Data engineering
Dremio

Criteria

Mandatory

Strong Dremio / Lakehouse Data Architect profile

Mandatory (Experience 1) – 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio

Mandatory (Experience 2) – Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems

Mandatory (Technical Skills 1) – Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts

Mandatory (Technical Skills 2) – Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)

Mandatory (Architecture) – Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics

Mandatory (Governance) – Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices

Mandatory (Stakeholder Management) – Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline

Mandatory (Company) – Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Gurugram, Bengaluru (Bangalore), Hyderabad, Mumbai
5 - 10 yrs
₹40L - ₹55L / yr
Data engineering
Dremio

Criteria

Mandatory

Strong Dremio / Lakehouse Data Architect profile

Mandatory (Experience 1) – 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio

Mandatory (Experience 2) – Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems

Mandatory (Technical Skills 1) – Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts

Mandatory (Technical Skills 2) – Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)

Mandatory (Architecture) – Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics

Mandatory (Governance) – Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices

Mandatory (Stakeholder Management) – Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline

Mandatory (Company) – Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Mumbai, Hyderabad, Bengaluru (Bangalore), Gurugram
5 - 10 yrs
₹35L - ₹50L / yr
Data engineering
Dremio

Criteria

Mandatory

Strong Dremio / Lakehouse Data Architect profile

Mandatory (Experience 1) – 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio

Mandatory (Experience 2) – Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems

Mandatory (Technical Skills 1) – Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts

Mandatory (Technical Skills 2) – Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)

Mandatory (Architecture) – Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics

Mandatory (Governance) – Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices

Mandatory (Stakeholder Management) – Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline

Mandatory (Company) – Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies

Talent Pro
Bengaluru (Bangalore)
9 - 11 yrs
₹80L - ₹120L / yr
Data engineering
Tier 1

10+ years of experience in data engineering or backend engineering


2+ years in a technical leadership or team-lead role


Expert-level experience with Kafka for high-throughput streaming systems


Strong hands-on expertise with PySpark for distributed data processing


Advanced experience with AWS Glue for ETL orchestration and metadata management


Proven experience building and upgrading real-time data lakes at scale


Hands-on knowledge of data warehouses such as Redshift or Snowflake


Experience with AWS services including S3, Kinesis, Lambda, and RDS

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights (see the sketch after this list).
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.
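A minimal sketch of the exploratory data analysis mentioned above, using pandas; the file and column names are hypothetical.

```python
# A minimal EDA sketch, assuming a flat extract; file and column names are invented.
import pandas as pd

df = pd.read_csv("orders.csv")

print(df.describe(include="all"))                              # distributions at a glance
print(df.isna().mean().sort_values(ascending=False).head())    # worst null rates first

monthly = (df.assign(month=pd.to_datetime(df["order_date"]).dt.to_period("M"))
             .groupby("month")["revenue"].sum())
print(monthly.pct_change().tail())                             # month-over-month trend
```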

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in Computer Science / IT or Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

Read more
AI company


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data architecture
Data engineering
SQL
Data modeling
GCS
+21 more

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred

  • Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV Attachment is mandatory
  • How many years of experience do you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); see the sketch after this list.
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
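As one illustration of the API-based integration called out above, a rough sketch of querying Dremio from Python over Arrow Flight; the endpoint, credentials, and dataset names are assumptions.

```python
# A rough sketch, assuming a self-managed Dremio coordinator exposing Arrow Flight
# (port 32010 is a common default); host, credentials, and dataset are invented.
from pyarrow import flight

client = flight.FlightClient("grpc+tcp://dremio-coordinator:32010")
token = client.authenticate_basic_token("analyst", "secret")   # hypothetical login
options = flight.FlightCallOptions(headers=[token])

query = 'SELECT region, SUM(amount) AS revenue FROM "lake"."curated"."orders" GROUP BY region'
info = client.get_flight_info(flight.FlightDescriptor.for_command(query), options)
reader = client.do_get(info.endpoints[0].ticket, options)
print(reader.read_pandas().head())   # results arrive as Arrow and convert to pandas
```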


Ideal Candidate

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Annie Varghese
Posted by Annie Varghese
Bengaluru (Bangalore), Pune
5 - 10 yrs
Best in industry
microsoft fabric
data lake
Data Warehouse (DWH)
data pipeline
Data engineering
+2 more

Role: Azure Fabric Data Engineer

Experience: 5–10 Years

Location: Pune/Bangalore

Employment Type: Full-Time

About the Role

We are looking for an experienced Azure Data Engineer with strong expertise in Microsoft Fabric and Power BI to build scalable data pipelines, Lakehouse architectures, and enterprise analytics solutions on the Azure cloud.

Key Responsibilities

  • Design & build data pipelines using Microsoft Fabric (Pipelines, Dataflows Gen2, Notebooks).
  • Develop and optimize Lakehouse / Data Lake / Delta Lake architectures (see the sketch after this list).
  • Build ETL/ELT workflows using Fabric, Azure Data Factory, or Synapse.
  • Create and optimize Power BI datasets, data models, and DAX calculations.
  • Implement semantic models, incremental refresh, and Direct Lake/DirectQuery.
  • Work with Azure services: ADLS Gen2, Azure SQL, Synapse, Event Hub, Functions, Databricks.
  • Build dimensional models (Star/Snowflake) and support BI teams.
  • Ensure data governance & security using Purview, RBAC, and AAD.
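A minimal bronze-to-silver sketch of the Lakehouse work above, as it might look in a Fabric Spark notebook (where a `spark` session is ambient); the table names are hypothetical.

```python
# Runs inside a Microsoft Fabric notebook, where `spark` is pre-defined.
from pyspark.sql import functions as F

bronze = spark.read.table("bronze_orders")            # raw ingested data (hypothetical)

silver = (bronze
          .dropDuplicates(["order_id"])               # de-duplicate on the business key
          .filter(F.col("order_status").isNotNull())  # drop malformed rows
          .withColumn("ingested_at", F.current_timestamp()))

# Delta is the default table format in a Fabric Lakehouse.
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")
```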

Required Skills

  • Strong hands-on experience with Microsoft Fabric (Lakehouse, Pipelines, Dataflows, Notebooks).
  • Expertise in Power BI (DAX, modeling, Dataflows, optimized datasets).
  • Deep knowledge of Azure Data Engineering stack (ADF, ADLS, Synapse, SQL).
  • Strong SQL, Python/PySpark skills.
  • Experience in Delta Lake, Medallion architecture, and data quality frameworks.

Nice to Have

  • Azure Certifications (DP-203, PL-300, Fabric Analytics Engineer).
  • Experience with CI/CD (Azure DevOps/GitHub).
  • Databricks experience (preferred).


Note: One technical round must be taken face-to-face at either the Pune or Bangalore office.

Read more
CGI Inc

at CGI Inc

3 recruiters
Shruthi BT
Posted by Shruthi BT
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai
8 - 15 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
Data engineering
Big query

Google Data Engineer - SSE


Position Description

Google Cloud Data Engineer

Notice Period: Immediate to 30 days (candidates serving notice accepted)

Job Description:

We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.

Key Responsibilities:


• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.

• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and BigTable (see the sketch after this list).

• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.

• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.

• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.

• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.

• Optimize query performance and data storage across structured and unstructured datasets.

• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
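To ground the BigQuery work flagged above, a small sketch using the google-cloud-bigquery client with a parameterized query; the project, dataset, and columns are invented.

```python
# A small sketch using the google-cloud-bigquery client; names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()   # picks up application-default credentials

sql = """
    SELECT device_type, COUNT(*) AS events
    FROM `my-project.telemetry.events`
    WHERE event_date = @day
    GROUP BY device_type
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.device_type, row.events)
```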


Required Skills & Qualifications:


• 8-15 years of experience

• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.

• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.

• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).

• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.

• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.

• Familiarity with data formats such as Avro, ORC, Parquet.

• Experience handling large-scale data migrations and implementing data lake architectures.

• Expertise in data modeling, data warehousing, and distributed data processing frameworks.


• Google Cloud Professional Data Engineer certification or equivalent.


Good to Have:


• Experience in BigQuery, Presto, or equivalent.

• Exposure to Hadoop, Spark, Oozie, HBase.

• Understanding of cloud database migration strategies.

• Knowledge of GCP data governance and security best practices.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Robin Silverster
Posted by Robin Silverster
Bengaluru (Bangalore)
5 - 11 yrs
₹10L - ₹35L / yr
skill iconPython
Spark
Apache Kafka
Snow flake schema
databricks
+1 more

Required Skills:

· 8+ years of experience as a practitioner in data engineering or a related field.

· Strong programming proficiency in Python

· Experience with data processing frameworks like Apache Spark or Hadoop.

· Experience working on Databricks.

· Familiarity with cloud platforms (AWS, Azure) and their data services.

· Experience with data warehousing concepts and technologies.

· Experience with message queues and streaming platforms (e.g., Kafka).

· Excellent communication and collaboration skills.

· Ability to work independently and as part of a geographically distributed team.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Gagandeep Kaur
Posted by Gagandeep Kaur
Bengaluru (Bangalore), Mumbai, Pune
4 - 7 yrs
Best in industry
skill iconPython
PySpark
pandas
Airflow
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.

Experience: 4-7 years

Notice Period: Immediate- 15 days

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python and Pandas.
  • Implement and manage workflows using Airflow (see the sketch after this list).
  • Utilize Azure Cloud Services for data storage and processing.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Optimize and scale data infrastructure to meet business needs.
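A minimal sketch of the Airflow-plus-pandas pattern these responsibilities describe; the paths, schedule, and task logic are illustrative only.

```python
# A minimal Airflow + pandas sketch; paths, schedule, and logic are illustrative.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def clean_orders():
    df = pd.read_csv("/data/raw/orders.csv")          # hypothetical landing file
    df["amount"] = df["amount"].fillna(0.0)           # basic quality fix
    df.to_parquet("/data/curated/orders.parquet", index=False)

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",                       # Airflow 2.4+ also accepts `schedule=`
    catchup=False,
) as dag:
    PythonOperator(task_id="clean_orders", python_callable=clean_orders)
```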

Qualifications and Required Skills:

  • Proficiency in Python (Must Have).
  • Strong experience with Pandas (Must Have).
  • Expertise in Airflow (Must Have).
  • Experience with Azure Cloud Services.
  • Good communication skills.

Good to Have Skills:

  • Experience with Pyspark.
  • Knowledge of Kubernetes.



Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bipasha Rath
Posted by Bipasha Rath
Pune, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹40L / yr
skill iconPython
pandas
Data engineering

Experience: 7+ years


Must-Have:

  • Python (Pandas, PySpark)
  • Data engineering & workflow optimization
  • Delta Tables, Parquet

Good-to-Have:

  • Databricks
  • Apache Spark, DBT, Airflow
  • Advanced Pandas optimizations
  • PyTest/DBT testing frameworks


Interested candidates can reply with the details below:

Total Experience -

Relevant experience in Python, Pandas, data engineering, workflow optimization, Delta Tables -

Current CTC -

Expected CTC -

Notice Period / LWD -

Current location -

Desired location -



Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Pune, Mumbai
7 - 12 yrs
Best in industry
skill iconPython
pandas
PySpark
SQL
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables, Parquet, and be proficient in Pandas and PySpark.

Experience: 7+ years

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python (Pandas, PySpark).
  • Optimize data workflows and ensure efficient data processing.
  • Work with Delta Tables and Parquet for data storage and management (see the sketch after this list).
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Implement best practices for data engineering and workflow optimization.
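In practice the Delta Tables work flagged above often takes the form of an idempotent upsert; a rough sketch with delta-spark, where the paths and keys are assumptions.

```python
# An idempotent-upsert sketch with delta-spark; `spark` is an active SparkSession
# configured for Delta, and the paths/keys are assumptions.
from delta.tables import DeltaTable

updates_df = spark.read.parquet("/lake/incoming/customers/")   # new batch (hypothetical)
target = DeltaTable.forPath(spark, "/lake/silver/customers")   # existing Delta table

(target.alias("t")
       .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()       # refresh changed rows
       .whenNotMatchedInsertAll()    # insert brand-new keys
       .execute())
```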

Qualifications and Required Skills:

  • Proficiency in Python, specifically with Pandas and PySpark.
  • Strong experience in data engineering and workflow optimization.
  • Knowledge of Delta Tables and Parquet.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a team environment.
  • Strong communication skills.

Good to Have Skills:

  • Experience with Databricks.
  • Knowledge of Apache Spark, DBT, and Airflow.
  • Advanced Pandas optimizations.
  • Familiarity with PyTest/DBT testing frameworks.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Rithika SharonM
Posted by Rithika SharonM
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
skill iconPython
SQL
Data engineering
PowerBI
Tableau
+1 more

Data Engineer

Experience: 4–6 years

Key Responsibilities

  • Design, build, and maintain scalable data pipelines and workflows.
  • Manage and optimize cloud-native data platforms on Azure with Databricks and Apache Spark (1–2 years).
  • Implement CI/CD workflows and monitor data pipelines for performance, reliability, and accuracy.
  • Work with relational databases (Sybase, DB2, Snowflake, PostgreSQL, SQL Server) and ensure efficient SQL query performance.
  • Apply data warehousing concepts including dimensional modelling, star schema, data vault modelling, Kimball and Inmon methodologies, and data lake design (see the sketch after this list).
  • Develop and maintain ETL/ELT pipelines using open-source frameworks such as Apache Spark and Apache Airflow.
  • Integrate and process data streams from message queues and streaming platforms (Kafka, RabbitMQ).
  • Collaborate with cross-functional teams in a geographically distributed setup.
  • Leverage Jupyter notebooks for data exploration, analysis, and visualization.
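To make the dimensional-modelling item above concrete, a small sketch joining a fact table to conformed dimensions in star-schema style; every table and column name is invented.

```python
# A star-schema mart sketch in PySpark; `spark` is an active session and every
# table and column name here is invented.
fact_sales = spark.read.table("silver.sales")         # grain: one row per sale
dim_product = spark.read.table("gold.dim_product")    # conformed dimension
dim_date = spark.read.table("gold.dim_date")

mart = (fact_sales
        .join(dim_product, "product_key")             # surrogate-key joins
        .join(dim_date, "date_key")
        .groupBy("category", "fiscal_quarter")
        .sum("net_amount"))

mart.write.mode("overwrite").saveAsTable("gold.sales_by_category_quarter")
```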

Required Skills

  • 4+ years of experience in data engineering or a related field.
  • Strong programming skills in Python with experience in Pandas, NumPy, Flask.
  • Hands-on experience with pipeline monitoring and CI/CD workflows.
  • Proficiency in SQL and relational databases.
  • Familiarity with Git for version control.
  • Strong communication and collaboration skills with ability to work independently.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Annie Varghese
Posted by Annie Varghese
Pune, Bengaluru (Bangalore), Mumbai
7 - 13 yrs
Best in industry
azure
synapse
data lake
Data engineering
skill iconLeadership

Overview

We are seeking an Azure Solutions Lead who will be responsible for managing and maintaining the overall architecture, design, application management and migrations, and security of the growing cloud infrastructure that supports the company’s core business and infrastructure systems and services. In this role, you will protect our critical information, systems, and assets, build new solutions, implement and configure new applications and hardware, provide training, and optimize and monitor cloud systems. You must be passionate about applying technical skills that create operational efficiencies and offer solutions to support business operations and the strategy roadmap.

Responsibilities:

  • Works in tandem with our Architecture, Applications and Security teams
  • Identify and implement the most optimal and secure Azure cloud-based solutions for the company.
  • Design and implement end-to-end Azure data solutions, including data ingestion, storage, processing, and visualization.
  • Architect data platforms using Azure services such as Microsoft Fabric, Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, and OneLake.
  • Develop and maintain data pipelines for efficient data movement and transformation (see the sketch after this list).
  • Design data models and schemas to support business requirements and analytical insights.
  • Collaborate with stakeholders to understand business needs and translate them into technical solutions.
  • Provide technical leadership and guidance to the data engineering team.
  • Stay updated on emerging Azure technologies and best practices in data architecture.
  • Stay current with industry trends, making recommendations as appropriate to help keep the environment operating at its optimum while minimizing waste and maximizing investment.
  • Create and update the documentation to facilitate cross-training and troubleshooting
  • Work with the Security and Architecture teams to refine and deploy security best practices to identify, detect, protect, respond, and recover from threats to assets and information.
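For the pipeline work flagged above, a hedged sketch of triggering an ADF pipeline run from Python with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, and pipeline names are placeholders.

```python
# A hedged sketch of starting an ADF pipeline run programmatically;
# all identifiers below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()   # env vars, managed identity, or CLI login
adf = DataFactoryManagementClient(credential, "<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="rg-data-platform",    # hypothetical
    factory_name="adf-enterprise",             # hypothetical
    pipeline_name="pl_ingest_sales",           # hypothetical
    parameters={"load_date": "2024-01-01"},
)
print("Started pipeline run:", run.run_id)
```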

Qualifications:

  • Overall 7+ years of IT experience, with a minimum of 2 years as an Azure Data Lead
  • Strong expertise in all aspects of Azure services with a focus on data engineering & BI reporting.
  • Proficiency in Azure Data Factory (ADF), Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI, and other Azure data tools.
  • Experience in data modelling, data warehousing, and business intelligence concepts.
  • Proven track record of designing and implementing scalable and robust data solutions.
  • Excellent communication skills with strong teamwork, analytical and troubleshooting skills, and attentiveness to detail.
  • Self-starter, ability to work independently and within a team. 

NOTE: It is mandatory to attend one technical round face-to-face.

Read more
Remote, Bengaluru (Bangalore), Pune, Chennai, Nagpur
5 - 15 yrs
₹20L - ₹30L / yr
databricks
PySpark
Apache Spark
CI/CD
Data engineering


Technical Architect (Databricks)

  • 10+ Years Data Engineering Experience with expertise in Databricks
  • 3+ years of consulting experience
  • Completed Data Engineering Professional certification & required classes
  • Minimum 2-3 projects delivered with hands-on experience in Databricks
  • Completed Apache Spark Programming with Databricks, Data Engineering with Databricks, Optimizing Apache Spark™ on Databricks
  • Experience in Spark and/or Hadoop, Flink, Presto, other popular big data engines
  • Familiarity with Databricks multi-hop pipeline architecture
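A minimal sketch of the first hop in the multi-hop (medallion) pattern this item names, using Databricks Auto Loader; the paths are placeholders and `spark` is the notebook session.

```python
# Auto Loader ("cloudFiles") is a Databricks-specific streaming source;
# paths below are placeholders.
bronze = (spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/lake/_schemas/events")
          .load("/lake/landing/events/"))

(bronze.writeStream
       .format("delta")
       .option("checkpointLocation", "/lake/_chk/bronze_events")
       .trigger(availableNow=True)    # drain what's there, then stop
       .toTable("bronze.events"))
```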

 

 

Sr. Data Engineer (Databricks)

 

  • 5+ Years Data Engineering Experience with expertise in Databricks
  • Completed Data Engineering Associate certification & required classes
  • Minimum 1 project delivered with hands-on experience in development on Databricks
  • Completed Apache Spark Programming with Databricks, Data Engineering with Databricks, Optimizing Apache Spark™ on Databricks
  • SQL delivery experience, and familiarity with BigQuery, Synapse, or Redshift
  • Proficient in Python, with knowledge of additional Databricks programming languages (Scala)


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Akansha Sharma
Posted by Akansha Sharma
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
Generative AI
MLOps
Data engineering
Big Data


Notice Period: 0-15 days max

Only candidates currently based in Karnataka should apply

Interview: 4 rounds, face-to-face


Job Title: AI Specialist

Company Overview: We are the Technology Center of Excellence for Long Arc Capital, which provides growth capital to businesses with a sustainable competitive advantage and a strong management team with whom we can partner to build a category leader. We focus on North American and European companies where technology is transforming traditional business models in the Financial Services, Business Services, Technology, Media and Telecommunications sectors.

As part of our mission to leverage AI for business innovation, we are establishing an AI COE to develop Generative AI (GenAI) and Agentic AI solutions that enhance decision-making, automation, and user experiences.

Job Overview: We are seeking dynamic and talented individuals to join our AI COE. This team will focus on developing advanced AI models, integrating them into our cloud-based platform, and delivering impactful solutions that drive efficiency, innovation, and customer value.

Key Responsibilities:

• As a Full Stack AI Engineer, research, design, and develop AI solutions for text, image, audio, and video generation.
• Build and deploy Agentic AI systems for autonomous decision-making across business outcomes and enhancing associate productivity.
• Work with domain experts to design and fine-tune AI solutions tailored to portfolio-specific challenges.
• Partner with data engineers across portfolio companies to:
  o Preprocess large datasets and ensure high-quality input for training AI models.
  o Develop scalable and efficient AI pipelines using frameworks like TensorFlow, PyTorch, and Hugging Face.
• Implement MLOps best practices for AI model deployment, versioning, and monitoring using tools like MLflow and Kubernetes (see the sketch after this list).
• Ensure AI solutions adhere to ethical standards, comply with regulations (e.g., GDPR, CCPA), and mitigate biases.
• Design intuitive and user-friendly interfaces for AI-driven applications, collaborating with UX designers and frontend developers.
• Stay up to date with the latest AI research and tools and evaluate their applicability to our business needs.
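For the MLflow-based MLOps practice referenced above, a minimal tracking sketch; the experiment name, parameters, and metric are illustrative.

```python
# A minimal MLflow tracking sketch; names and values below are illustrative only.
import mlflow

mlflow.set_experiment("genai-prototypes")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "distilbert")          # whatever was actually trained
    mlflow.log_param("max_len", 512)
    mlflow.log_metric("val_f1", 0.87)                # placeholder score
    # mlflow.log_artifact("confusion_matrix.png")    # attach any local file worth keeping
```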

Key Qualifications:

Technical Expertise:

• Proficiency in full stack application development (specifically using Angular, React).
• Expertise in backend technologies (Django, Flask) and cloud platforms (AWS SageMaker / Azure AI Studio).
• Proficiency in deep learning frameworks (TensorFlow, PyTorch, JAX).
• Proficiency with Large Language Models (LLMs) and generative AI tools (e.g., OpenAI APIs, LangChain, Stable Diffusion).
• Solid understanding of data engineering workflows, including ETL processes and distributed computing tools (Apache Spark, Kafka).
• Experience with data pipelines, big data processing, and database management (SQL, NoSQL).
• Knowledge of containerization (Docker) and orchestration (Kubernetes) for scalable AI deployment.
• Familiarity with CI/CD pipelines and automation tools (Terraform, Jenkins).
• Good understanding of AI ethics, bias mitigation, and compliance standards.
• Excellent problem-solving abilities and innovative thinking.
• Strong collaboration and communication skills, with the ability to work in cross-functional teams.
• Proven ability to work in a fast-paced and dynamic environment.

Preferred Qualifications:

• Advanced studies in Artificial Intelligence or a related field.
• Experience with reinforcement learning, multi-agent systems, or autonomous decision-making.

Read more
Digitide
Bengaluru (Bangalore)
6 - 9 yrs
₹5L - ₹15L / yr
Windows Azure
Data engineering
databricks
Data Factory

1. Design, develop, and maintain data pipelines using Azure Data Factory.

2. Create and manage data models in PostgreSQL, ensuring efficient data storage and retrieval.

3. Optimize query performance and database performance in PostgreSQL, including indexing, query tuning, and performance monitoring (see the sketch after this list).

4. Strong knowledge of data modeling and of mapping data from various sources to the data model.

5. Develop and maintain logging mechanisms in Azure Data Factory to monitor and troubleshoot data pipelines.

6. Strong knowledge of Key Vault, Azure Data Lake, and PostgreSQL.

7. Manage file handling within Azure Data Factory, including reading, writing, and transforming data from various file formats.

8. Strong SQL query skills, with the ability to handle multiple scenarios and optimize query performance.

9. Excellent problem-solving skills and ability to handle complex data scenarios.

10. Collaborate with business stakeholders, data architects, and product owners to understand and meet their data requirements.

11. Ensure data quality and integrity through validation and quality checks.

12. Power BI knowledge, including creating and configuring semantic models and reports, is preferred.
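To illustrate the query-tuning responsibility (item 3 above), a small psycopg2 sketch that inspects a plan and adds the index it suggests; the connection details and names are placeholders.

```python
# A query-tuning sketch with psycopg2: inspect the plan, then add the index
# the plan suggests is missing; connection details and names are placeholders.
import psycopg2

conn = psycopg2.connect(host="pg-host", dbname="dwh", user="etl", password="***")
cur = conn.cursor()

cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
for (line,) in cur.fetchall():
    print(line)                     # a Seq Scan here usually means a missing index

cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")
conn.commit()
```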


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Annie Varghese
Posted by Annie Varghese
Bengaluru (Bangalore), Mumbai, Pune
7 - 16 yrs
Best in industry
fabric
Data engineering
skill iconPython
SQL

Key Responsibilities

● Design & Development

○ Architect and implement data ingestion pipelines using Microsoft Fabric Data Factory (Dataflows) and OneLake sources

○ Build and optimize Lakehouse and Warehouse solutions leveraging Delta Lake, Spark Notebooks, and SQL Endpoints

○ Define and enforce Medallion (Bronze–Silver–Gold) architecture patterns for raw, enriched, and curated datasets

● Data Modeling & Transformation

○ Develop scalable transformation logic in Spark (PySpark/Scala) and Fabric SQL to support reporting and analytics

○ Implement slowly changing dimensions (SCD Type 2), change-data-capture (CDC) feeds, and time-windowed aggregations
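A hedged sketch of the SCD Type 2 pattern just mentioned, done as a two-step Delta merge; all table, key, and column names are invented.

```python
# A two-step SCD Type 2 sketch with Delta merge; assumes `spark` is the notebook
# session and `updates` holds only changed records keyed by customer_id, with a
# row_hash change-detection column (all names invented).
from delta.tables import DeltaTable
from pyspark.sql import functions as F

updates = spark.read.parquet("/lake/staging/customer_changes/")  # hypothetical input
dim = DeltaTable.forName(spark, "gold.dim_customer")

# Step 1: expire the current version of every customer that changed.
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(condition="t.row_hash <> s.row_hash",
                       set={"is_current": F.lit(False),
                            "end_date": F.current_date()})
    .execute())

# Step 2: append the new versions as open, current rows.
(updates.withColumn("is_current", F.lit(True))
        .withColumn("start_date", F.current_date())
        .withColumn("end_date", F.lit(None).cast("date"))
        .write.format("delta").mode("append").saveAsTable("gold.dim_customer"))
```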

● Performance Tuning & Optimization

○ Monitor and optimize data pipelines for throughput, cost efficiency, and reliability

○ Apply partitioning, indexing, caching, and parallelism best practices in Fabric Lakehouses and Warehouse compute

● Data Quality & Governance

○ Integrate Microsoft Purview for metadata cataloging, lineage tracking, and data discovery

○ Develop automated quality checks, anomaly detection rules, and alerts for data reliability

● CI/CD & Automation

○ Implement infrastructure-as-code (ARM templates or Terraform) for Fabric workspaces, pipelines, and artifacts

○ Set up Git-based version control, CI/CD pipelines (e.g. Azure DevOps) for seamless deployment across environments

● Collaboration & Support

○ Partner with data scientists, BI developers, and business analysts to understand requirements and deliver data solutions

○ Provide production support, troubleshoot pipeline failures, and drive root-cause analysis

Required Qualifications

● 5+ years of professional experience in data engineering roles, with at least 1 year working hands-on in Microsoft Fabric

● Strong proficiency in:

○ Languages: SQL (T-SQL), Python, and/or Scala

○ Fabric Components: Data Factory Dataflows, OneLake, Spark Notebooks, Lakehouse, Warehouse

○ Data Storage: Delta Lake, Parquet, CSV, JSON formats

● Deep understanding of data modeling principles (star schemas, snowflake schemas, normalized vs. denormalized)

● Experience with CI/CD and infrastructure-as-code for data platforms (ARM templates, Terraform, Git)

● Familiarity with data governance tools, especially Microsoft Purview

● Excellent problem-solving skills and ability to communicate complex technical concepts clearly


Note: Candidates should be willing to take one technical round face-to-face at any of the branch locations (Pune / Mumbai / Bangalore).

Read more
Codnatives
Bengaluru (Bangalore), Pune
5 - 9 yrs
₹5L - ₹14L / yr
Data engineering
skill iconAmazon Web Services (AWS)
Amazon Redshift

• 5+ years of experience in SQL and NoSQL database development and optimization.

• Strong hands-on experience with Amazon Redshift, MySQL, MongoDB, and Flyway.

• In-depth understanding of data warehousing principles and performance-tuning techniques.

• Strong hands-on experience building complex aggregation pipelines in NoSQL databases such as MongoDB (see the sketch below).

• Proficient in Python or Scala for data processing and automation.

• 3+ years of experience working with AWS-managed database services.

• 3+ years of experience with Power BI or similar BI/reporting platforms.
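A sketch of the MongoDB aggregation-pipeline skill flagged above, using pymongo; the URI, collection, and fields are hypothetical.

```python
# An aggregation-pipeline sketch with pymongo; all names are hypothetical.
from pymongo import MongoClient

orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

pipeline = [
    {"$match": {"status": "COMPLETE"}},              # filter as early as possible
    {"$group": {"_id": "$customer_id",
                "total": {"$sum": "$amount"},
                "orders": {"$sum": 1}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},                                  # top ten spenders
]
for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["total"], doc["orders"])
```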

Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹22L / yr
Data engineering
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Google Dataform
BigQuery
+6 more

Job Title : Data Engineer – GCP + Spark + DBT

Location : Bengaluru (On-site at Client Location | 3 Days WFO)

Experience : 8 to 12 Years

Level : Associate Architect

Type : Full-time


Job Overview :

We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.


Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.


Key Responsibilities :

  • Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark (see the sketch after this list).
  • Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
  • Implement and maintain CI/CD for data engineering projects with Git-based version control.
  • Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
  • Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
  • Participate in Agile sprints, backlog grooming, and Jira-based project tracking.
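One hedged way to wire DBT into such a pipeline (see the first responsibility above) is dbt's programmatic entry point, available from dbt-core 1.5 onward; the model selector here is a placeholder.

```python
# A hedged sketch of invoking dbt from Python (dbt-core 1.5+); the model
# selector is a placeholder and this assumes it runs in the dbt project directory.
from dbt.cli.main import dbtRunner

dbt = dbtRunner()
# Equivalent to running `dbt run --select staging.orders` on the CLI.
result = dbt.invoke(["run", "--select", "staging.orders"])
if not result.success:
    raise RuntimeError(f"dbt run failed: {result.exception}")
```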

Must-Have Skills :

  • Strong experience with DBT, Google Dataform, and BigQuery
  • Hands-on expertise with PySpark/Spark SQL
  • Proficient in GCP for data engineering workflows
  • Solid knowledge of SQL optimization, Git, and CI/CD pipelines
  • Agile team experience and strong problem-solving abilities

Nice-to-Have Skills :

  • Familiarity with Databricks, Delta Lake, or Kafka
  • Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
  • Knowledge of MDM patterns, Terraform, or IaC is a plus
Read more
Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹10L - ₹25L / yr
Microsoft Windows Azure
Data engineering
skill iconPython
Apache Kafka

Role Overview:

We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.

The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.

Key Responsibilities:

  • Design and develop scalable real-time data streaming solutions using Apache Kafka and Python (see the sketch after this list).
  • Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
  • Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
  • Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
  • Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
  • Mentor junior engineers, perform code reviews, and promote engineering best practices.
  • Stay current with evolving technologies in cloud, big data, and healthcare data standards.
  • Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
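A minimal sketch of the Kafka-plus-Python streaming piece flagged above, using the kafka-python client; the broker, topic, and payload shape are assumptions, and real healthcare payloads would be de-identified in line with HIPAA.

```python
# A minimal kafka-python round trip; broker, topic, and payload are assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "broker:9092"      # placeholder
TOPIC = "adt-events"        # hypothetical admit/discharge/transfer topic

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"event": "A01", "patient_ref": "anon-123"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)    # downstream: validate, enrich, land in Databricks
    break                   # demo: stop after the first record
```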

Required Skills & Qualifications:

  • 4+ years of hands-on experience in data engineering roles.
  • Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
  • Proficient in Python for data processing and automation.
  • Experience with Azure Databricks (or readiness to ramp up quickly).
  • Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
  • Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
  • Familiarity with containerization tools like Docker and orchestration using Kubernetes.
  • Exposure to CI/CD pipelines for data applications.
  • Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
  • Excellent problem-solving abilities and a proactive mindset.
  • Strong communication and interpersonal skills to work in cross-functional teams.


Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Bengaluru (Bangalore)
5 - 8 yrs
₹4L - ₹25L / yr
Data engineering
skill iconPython
Spark

🛠️ Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Python and Apache Spark (PySpark or Scala APIs)
  • Develop and optimize ETL processes for batch and real-time data ingestion
  • Collaborate with data scientists, analysts, and DevOps teams to support data-driven solutions
  • Ensure data quality, integrity, and governance across all stages of the data lifecycle
  • Implement data validation, monitoring, and alerting mechanisms for production pipelines (see the sketch after this list)
  • Work with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Kafka, and Delta Lake
  • Participate in code reviews, performance tuning, and documentation
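A small sketch of the validation-and-alerting responsibility flagged above: hard checks on keys before a batch is published; the path and column names are invented.

```python
# A hard-gate validation sketch in PySpark; `spark` is an active session and
# the path and column names are invented.
from pyspark.sql import functions as F

df = spark.read.parquet("/lake/silver/transactions/")

total = df.count()
null_keys = df.filter(F.col("transaction_id").isNull()).count()
duplicates = total - df.dropDuplicates(["transaction_id"]).count()

# Fail the pipeline loudly rather than publish bad data downstream.
assert null_keys == 0, f"{null_keys} rows have a null transaction_id"
assert duplicates == 0, f"{duplicates} duplicate transaction_ids found"
```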


🎓 Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 3–6 years of experience in data engineering with a focus on Python and Spark
  • Experience with distributed computing and handling large-scale datasets (10TB+)
  • Familiarity with data security, PII handling, and compliance standards is a plus


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Hyderabad
4 - 8 yrs
₹10L - ₹24L / yr
skill iconPython
Data engineering
skill iconAmazon Web Services (AWS)
RESTful APIs
Microservices
+9 more

Job Title : Python Data Engineer

Experience : 4+ Years

Location : Bangalore / Hyderabad (On-site)


Job Summary :

We are seeking a skilled Python Data Engineer to work on cloud-native data platforms and backend services.

The role involves building scalable APIs, working with diverse data systems, and deploying containerized services using modern cloud infrastructure.


Mandatory Skills : Python, AWS, RESTful APIs, Microservices, SQL/PostgreSQL/NoSQL, Docker, Kubernetes, CI/CD (Jenkins/GitLab CI/AWS CodePipeline)


Key Responsibilities :

  • Design, develop, and maintain backend systems using Python.
  • Build and manage RESTful APIs and microservices architectures (see the sketch after this list).
  • Work extensively with AWS cloud services for deployment and data storage.
  • Implement and manage SQL, PostgreSQL, and NoSQL databases.
  • Containerize applications using Docker and orchestrate with Kubernetes.
  • Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
  • Collaborate with teams to ensure scalable and reliable software delivery.
  • Troubleshoot and optimize application performance.
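The posting does not name a framework, so as one hedged illustration of the REST and microservice work flagged above, a minimal FastAPI service with an in-memory stand-in for a real data store.

```python
# One possible framework choice (an assumption, not from the posting): FastAPI,
# with an in-memory dict standing in for a real data store.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: int
    status: str

FAKE_DB = {1: Order(order_id=1, status="SHIPPED")}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: int) -> Order:
    order = FAKE_DB.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order

# Run locally with: uvicorn main:app --reload
```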


Must-Have Skills :

  • 4+ years of hands-on experience in Python backend development.
  • Strong experience with AWS cloud infrastructure.
  • Proficiency in building microservices and APIs.
  • Good knowledge of relational and NoSQL databases.
  • Experience with Docker and Kubernetes.
  • Familiarity with CI/CD tools and DevOps processes.
  • Strong problem-solving and collaboration skills.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Hanisha Pralayakaveri
Posted by Hanisha Pralayakaveri
Bengaluru (Bangalore), Mumbai
5 - 9 yrs
Best in industry
skill iconPython
skill iconAmazon Web Services (AWS)
PySpark
Data engineering

Job Description: Data Engineer 

Position Overview:

We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

 

Key Responsibilities

· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis (see the sketch after this list).

· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).

· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.

· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.

· Ensure data quality and consistency by implementing validation and governance practices.

· Work on data security best practices in compliance with organizational policies and regulations.

· Automate repetitive data engineering tasks using Python scripts and frameworks.

· Leverage CI/CD pipelines for deployment of data workflows on AWS.
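A skeletal AWS Glue PySpark job matching the Glue and S3 responsibilities described above; the bucket paths are placeholders and JOB_NAME is supplied by Glue at runtime.

```python
# A skeletal AWS Glue PySpark job; bucket paths are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical ELT step: land raw CSV as partitioned Parquet.
df = spark.read.option("header", "true").csv("s3://raw-bucket/sales/")
df.write.mode("overwrite").partitionBy("sale_date").parquet("s3://lake-bucket/bronze/sales/")

job.commit()
```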

Read more
ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
skill iconPython
SQL
Data Warehouse (DWH)
skill iconAmazon Web Services (AWS)
+3 more

Work Mode: Hybrid


Need B.Tech, BE, M.Tech, ME candidates - Mandatory



Must-Have Skills:

● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note: the salary bracket will vary according to the candidate's experience -

- Experience from 4 yrs to 6 yrs - Salary upto 22 LPA

- Experience from 5 yrs to 8 yrs - Salary upto 30 LPA

- Experience more than 8 yrs - Salary upto 40 LPA

Read more
top MNC


Agency job
via Vy Systems by thirega thanasekaran
Bengaluru (Bangalore), Chennai, Hyderabad, Coimbatore, Kochi (Cochin), Thrissur, Thiruvananthapuram, Kozhikode (Calicut), Kasaragod
5 - 12 yrs
₹5L - ₹9L / yr
Data engineering
databricks
Apache Synapse
Apache Spark

Job Summary:


Seeking an experienced Senior Data Engineer to lead data ingestion, transformation, and optimization initiatives using the modern Apache and Azure data stack. The role involves working on scalable pipelines, large-scale distributed systems, and data lake management.

Core Responsibilities:

· Build and manage high-volume data pipelines using Spark/Databricks.

· Implement ELT frameworks using Azure Data Factory/Synapse Pipelines.

· Optimize large-scale datasets in Delta/Iceberg formats (see the sketch after this list).

· Implement robust data quality, monitoring, and governance layers.

· Collaborate with Data Scientists, Analysts, and Business stakeholders.
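For the Delta/Iceberg optimization item flagged above, a hedged sketch that writes an Iceberg table and compacts small files with Iceberg's Spark procedure; the catalog, paths, and table names are assumptions.

```python
# A hedged Iceberg sketch: write a table, then compact small files. Assumes
# `spark` is a SparkSession configured with an Iceberg catalog named `lake`
# (iceberg-spark-runtime on the classpath); every name here is invented.
df = spark.read.parquet("abfss://landing@account.dfs.core.windows.net/orders/")

df.writeTo("lake.curated.orders").using("iceberg").createOrReplace()

# Routine maintenance on high-volume ingest tables: rewrite small files.
spark.sql("CALL lake.system.rewrite_data_files(table => 'curated.orders')")
```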

Technical Stack:

· Big Data: Apache Spark, Kafka, Hive, Airflow, Hudi/Iceberg

· Cloud: Azure (Synapse, ADF, ADLS Gen2), Databricks, AWS (Glue/S3)

· Languages: Python, Scala, SQL

· Storage Formats: Delta Lake, Iceberg, Parquet, ORC

· CI/CD: Azure DevOps, Terraform (infra as code), Git

Senior Data Engineer (Apache Stack + Databricks/Synapse)


Share CV to Thirega@vysystems.com (WhatsApp: 91Five0033Five2Three)

Read more
Gipfel & Schnell Consultings Pvt Ltd
TanmayaKumar Pattanaik
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
Best in industry
Data engineering
ADF
data factory
SQL Azure
databricks
+4 more

Data Engineer

 

Brief Posting Description:

This person will work independently or with a team of data engineers on cloud technology products, projects, and initiatives. Work with all customers, both internal and external, to make sure all data related features are implemented in each solution. Will collaborate with business partners and other technical teams across the organization as required to deliver proposed solutions.

 

Detailed Description:

· Works with Scrum masters, product owners, and others to identify new features for digital products.

· Works with IT leadership and business partners to design features for the cloud data platform.

· Troubleshoots production issues of all levels and severities, and tracks progress from identification through resolution.

· Maintains culture of open communication, collaboration, mutual respect and productive behaviors; participates in the hiring, training, and retention of top tier talent and mentors team members to new and fulfilling career experiences.

· Identifies risks, barriers, efficiencies and opportunities when thinking through development approach; presents possible platform-wide architectural solutions based on facts, data, and best practices.

· Explores all technical options when considering solution, including homegrown coding, third-party sub-systems, enterprise platforms, and existing technology components.

· Actively participates in collaborative effort through all phases of software development life cycle (SDLC), including requirements analysis, technical design, coding, testing, release, and customer technical support.

· Develops technical documentation, such as system context diagrams, design documents, release procedures, and other pertinent artifacts.

· Understands lifecycle of various technology sub-systems that comprise the enterprise data platform (i.e., version, release, roadmap), including current capabilities, compatibilities, limitations, and dependencies; understands and advises of optimal upgrade paths.

· Establishes relationships with key IT, QA, and other corporate partners, and regularly communicates and collaborates accordingly while working on cross-functional projects or production issues.

 

 

 

 

Job Requirements:

 

EXPERIENCE:

2 years required; 3 - 5 years preferred experience in a data engineering role.

2 years required, 3 - 5 years preferred experience in Azure data services (Data Factory, Databricks, ADLS, Synapse, SQL DB, etc.)

 

EDUCATION:

Bachelor’s degree information technology, computer science, or data related field preferred

 

SKILLS/REQUIREMENTS:

Expertise working with databases and SQL.

Strong working knowledge of Azure Data Factory and Databricks

Strong working knowledge of code management and continuous integrations systems (Azure DevOps or Github preferred)

Strong working knowledge of cloud relational databases (Azure Synapse and Azure SQL preferred)

Familiarity with Agile delivery methodologies

Familiarity with NoSQL databases (such as CosmosDB) preferred.

Any experience with Python, DAX, Azure Logic Apps, Azure Functions, IoT technologies, PowerBI, Power Apps, SSIS, Informatica, Teradata, Oracle DB, and Snowflake preferred but not required.

Ability to multi-task and reprioritize in a dynamic environment.

Outstanding written and verbal communication skills

 

Working Environment:

General Office – Work is generally performed within an office environment, with standard office equipment. Lighting and temperature are adequate and there are no hazardous or unpleasant conditions caused by noise, dust, etc. 

 

Physical Requirements:

Work is generally sedentary in nature but may require standing and walking for up to 10% of the time. 

 

Mental Requirements:

Employee required to organize and coordinate schedules.

Employee required to analyze and interpret complex data.

Employee required to problem-solve. 

Employee required to communicate with the public.

Read more
hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
skill iconPython
Amazon Redshift
skill iconAmazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

 Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipelines features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work in a fast-paced startup mindset. Should be able to manage all aspects of data extraction transfer and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with Devops and senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.

➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years); see the sketch after this list.

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

 ➢ Strong Python and SQL coding skills.

➢ Strong experience in distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques, such as CDC or time/batch based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
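A hedged sketch of the Redshift loading pattern referenced above, using the redshift_connector driver and a COPY from S3; the host, IAM role ARN, and paths are placeholders.

```python
# A bulk-load sketch with redshift_connector and COPY from S3; all identifiers
# below are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.xxxx.ap-south-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="***",
)
cur = conn.cursor()

# COPY is the idiomatic bulk-load path into Redshift from S3.
cur.execute("""
    COPY staging.orders
    FROM 's3://my-bucket/exports/orders/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-copy'
    FORMAT AS PARQUET
""")
conn.commit()
```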


Note: Experience at product-based or e-commerce companies is an added advantage.

Read more
AI Domain US Based Product Based Company


Agency job
via New Era India by Asha P
Bengaluru (Bangalore)
3 - 10 yrs
₹30L - ₹50L / yr
Data engineering
Data modeling
skill iconPython

Requirements:

  • 2+ years of experience (4+ for Senior Data Engineer) with system/data integration, development, or implementation of enterprise and/or cloud software; Engineering degree in Computer Science, Engineering, or a related field.
  • Extensive hands-on experience with data integration/EAI technologies (File, API, Queues, Streams), ETL tools, and building custom data pipelines.
  • Demonstrated proficiency with Python, JavaScript, and/or Java.
  • Familiarity with version control/SCM is a must (experience with git is a plus).
  • Experience with relational and NoSQL databases (any vendor); solid understanding of cloud computing concepts.
  • Strong organisational and troubleshooting skills with attention to detail.
  • Strong analytical ability, judgment, and problem-solving techniques; interpersonal and communication skills with the ability to work effectively in a cross-functional team.


Read more
SimpliFin
Bengaluru (Bangalore)
6 - 14 yrs
₹20L - ₹50L / yr
SaaS
Engineering Management
Artificial Intelligence (AI)
Data engineering
Financial services

We are looking for a passionate technologist with experience in building SaaS products for a once-in-a-lifetime opportunity to lead Engineering for an AI-powered Financial Operations platform that lets teams seamlessly monitor, optimize, reconcile, and forecast cashflow.


Background


An incredibly rare opportunity for a VP Engineering to join a top-tier VC-incubated SaaS startup and an outstanding management team. The product is currently in the build stage with a solid design-partner pipeline of ~$250K, and the company will soon raise a pre-seed/seed round with marquee investors.


Responsibilities


  • Develop and implement the company's technical strategy and roadmap, ensuring that it aligns with the overall business objectives and is scalable, reliable, and secure.


  • Manage and optimize the company's technical resources, including staffing, software, hardware, and infrastructure, to ensure that they are being used effectively and efficiently.


  • Work with the founding team and other executives to identify opportunities for innovation and new technology solutions, and evaluate the feasibility and impact of these solutions on the business.


  • Lead the engineering function in developing and deploying high-quality software products and solutions, ensuring that they meet or exceed customer requirements and industry standards.


  • Analyze and evaluate technical data and metrics, identifying areas for improvement and implementing changes to drive efficiency and effectiveness.


  • Ensure that the company is in compliance with all legal and regulatory requirements, including data privacy and security regulations.


Eligibility criteria:


  • 6+ years of experience in developing scalable SaaS products.


  • Strong technical background with 6+ years of experience with a strong focus on SaaS, AI, and finance software.


  • Prior experience in leadership roles.


  • Entrepreneurial mindset, with a strong desire to innovate and grow a startup from the ground up.


Perks:


  • Vested Equity.


  • Ownership in the company.


  • Build alongside passionate and smart individuals.


Read more
codersbrain

at codersbrain

1 recruiter
Tanuj Uppal
Posted by Tanuj Uppal
Bengaluru (Bangalore)
10 - 18 yrs
Best in industry
flink
apache flink
skill iconJava
Data engineering

1. Flink Sr. Developer


Location: Bangalore (WFO)


Mandatory Skills & Experience (10+ years): Must have hands-on experience with Flink, Kubernetes, Docker, microservices, any one of Kafka/Pulsar, CI/CD, and Java.


Job Responsibilities:


As the Data Engineer lead, you are expected to engineer, develop, support, and deliver real-time


streaming applications that model real-world network entities, and have a good understanding of the


Telecom Network KPIs to improve the customer experience through automation of operational network


data. Real-time application development will include building stateful in-memory backends, real-time


streaming APIs , leveraging real-time databases such as Apache Druid.


  • Architecting and creating the streaming data pipelines that will enrich the data and support the use cases for telecom networks.
  • Collaborating closely with multiple stakeholders, gathering requirements and seeking iterative feedback on recently delivered application features.
  • Participating in peer review sessions to provide teammates with code review as well as architectural and design feedback.
  • Composing detailed low-level design documentation, call flows, and architecture diagrams for the solutions you build.
  • Stepping in whenever the Operations team needs help with a crisis.
  • Performing duties with minimum supervision and participating in cross-functional projects as scheduled.


Skills:


  • A senior Flink developer who has implemented Flink pipelines and dealt with failure scenarios when processing data through Flink.
  • Experience with Java, K8s, Argo CD/Workflow, Prometheus, and Aether.
  • Familiarity with object-oriented design patterns.
  • Experience with application-development DevOps tools.
  • Experience with distributed cloud-native application design deployed on Kubernetes platforms.
  • Experience with Postgres, Druid, and Oracle databases.
  • Experience with a streaming message bus, either Kafka or Pulsar.
  • Experience with AI/ML tooling: Kubeflow, JupyterHub.
  • Experience building real-time applications that leverage streaming data.
  • Experience with Apache Spark applications and Hadoop platforms.
  • Strong problem-solving skills.
  • Strong written and oral communication skills.

Read more
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Data engineering
Nifi
DevOps
ETL

Job description

Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore

Please note: This position is focused on development rather than migration. Experience in Nifi or Tibco is mandatory.

Mandatory Skills: ETL, DevOps platform, Nifi or Tibco

We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.

 

Responsibilities:
  • Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques (a minimal sketch follows this list).
  • Develop and maintain data-oriented scripting using languages such as Python.
  • Create and manage data structures to ensure efficient and accurate data storage and retrieval.
  • Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
  • Utilize ETL tools such as Nifi and Tibco to extract, transform, and load data into various systems.
  • Apply hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
  • Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
  • Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
  • Stay up to date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.
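To make the ETL bullet above concrete, here is a minimal, hypothetical sketch of a single extract-transform-load step, written in Java over plain JDBC against the MSSQL and Vertica databases this posting mentions. The connection strings, credentials, table and column names, and the toy currency conversion are illustrative assumptions only; in practice, pipelines like this would typically run as managed flows in Nifi or Tibco rather than as standalone code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class OrdersEtl {
    public static void main(String[] args) throws Exception {
        // Extract from MSSQL and load into Vertica; both JDBC drivers are
        // assumed to be on the classpath. All names here are placeholders.
        try (Connection src = DriverManager.getConnection(
                 "jdbc:sqlserver://mssql-host;databaseName=staging", "etl_user", "secret");
             Connection dst = DriverManager.getConnection(
                 "jdbc:vertica://vertica-host:5433/warehouse", "etl_user", "secret");
             Statement extract = src.createStatement();
             ResultSet rows = extract.executeQuery(
                 "SELECT order_id, amount, currency FROM raw_orders");
             PreparedStatement load = dst.prepareStatement(
                 "INSERT INTO fact_orders (order_id, amount_usd) VALUES (?, ?)")) {

            dst.setAutoCommit(false);            // batch the load as one transaction
            while (rows.next()) {
                // Transform: normalize amounts to USD (toy fixed rate for the sketch).
                double amount = rows.getDouble("amount");
                double usd = "EUR".equals(rows.getString("currency")) ? amount * 1.1 : amount;

                load.setLong(1, rows.getLong("order_id"));
                load.setDouble(2, usd);
                load.addBatch();                 // Load: accumulate rows for one batched insert
            }
            load.executeBatch();
            dst.commit();                        // single commit keeps the target consistent on failure
        }
    }
}

Batching the inserts and committing once means a mid-run failure leaves the target table untouched; a production job would also flush the batch periodically for large extracts.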

 

Requirements:
  • A minimum of 6 years of relevant experience as a Data Engineer.
  • Proficiency in ETL, SQL, and other advanced data engineering techniques.
  • Strong programming skills in scripting languages such as Python.
  • Experience in creating and maintaining data structures for efficient data storage and retrieval.
  • Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
  • Hands-on experience with ETL tools, particularly Nifi and Tibco.
  • In-depth knowledge of database structures, including MSSQL and Vertica.
  • Proven experience in managing and operating data platforms.
  • Strong problem-solving and analytical skills with the ability to handle complex data challenges.
  • Excellent communication and collaboration skills to work effectively in a team environment.
  • Self-motivated with a strong drive for learning and keeping up to date with the latest industry trends.

Read more
Persistent

Agency job
via Bohiyaanam Talent Solutions LLP by TrishaDutt Tekgminus
Pune, Mumbai, Bengaluru (Bangalore), Indore, Kolkata
6 - 7 yrs
₹12L - ₹18L / yr
MuleSoft
ETL QA
Automation
Data engineering

I am looking for a MuleSoft Developer for a reputed MNC.

 

Experience: 6+ Years

Relevant experience: 4 Years

Location: Pune, Mumbai, Bangalore, Indore, Kolkata

 

Skills:

MuleSoft


Read more