
50+ Big data Jobs in India

Apply to 50+ Big data Jobs on CutShort.io. Find your next job, effortlessly. Browse Big data Jobs and apply today!

Searce Inc
Posted by Jatin Gereja
Bengaluru (Bangalore), Mumbai, Pune
10 - 18 yrs
Best in industry
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Enterprise Data Warehouse (EDW)
Data modeling
Big Data
+9 more

Director - Data engineering


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

what you will wake up to solve.

1. Delivery & Tactical Rigor

  • Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
  • Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
  • Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
  • Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.
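The automated quality checks mentioned above (observability dimensions such as completeness and freshness) can be sketched in a few lines of framework-free Python; the field names and thresholds here are hypothetical, not a prescribed implementation:

```python
from datetime import datetime, timedelta, timezone

def completeness(rows, required_fields):
    """Fraction of rows in which every required field is present and non-null."""
    if not rows:
        return 0.0
    ok = sum(1 for r in rows if all(r.get(f) is not None for f in required_fields))
    return ok / len(rows)

def is_fresh(last_loaded_at, max_age_hours=24):
    """True if the most recent load falls within the allowed staleness window."""
    return datetime.now(timezone.utc) - last_loaded_at <= timedelta(hours=max_age_hours)

rows = [
    {"order_id": 1, "amount": 9.5},
    {"order_id": 2, "amount": None},  # fails completeness on 'amount'
    {"order_id": 3, "amount": 4.0},
]
score = completeness(rows, ["order_id", "amount"])
print(round(score, 2))  # 0.67
print(is_fresh(datetime.now(timezone.utc) - timedelta(hours=2)))  # True
```

In practice such checks run as assertions inside the pipeline itself, failing the run before bad data reaches consumers.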

2. Strategic Growth & Practice Scaling

  • Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
  • Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
  • Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.

3. Leadership & Unit Management

  • Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
  • Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
  • Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.


Welcome to Searce

The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.

We don’t do traditional.

As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.


Functional Skills

1. Delivery Management & Operational Excellence

  • Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
  • Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
  • SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
  • Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.

2. Architectural Implementation & Technical Oversight

  • Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
  • Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
  • Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
  • DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
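The orchestration frameworks named above (Airflow, Dagster) are, at their core, DAG schedulers: tasks run only after their dependencies succeed. As a toy illustration of that idea (not Airflow or Dagster themselves, and with hypothetical task names), Python's stdlib can order a pipeline's steps:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each task maps to the set of tasks it depends on.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "notify": {"load"},
}

# Dependencies always come before dependents in the resulting order.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'validate', 'transform', 'load', 'notify']
```

Real orchestrators add retries, scheduling, and backfills on top of exactly this dependency model.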

3. Unit Management & Commercial Execution

  • Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
  • Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
  • Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
  • Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.

Tech Superpowers

  • Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
  • End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
  • Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
  • Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
  • AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
  • Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
  • Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
  • Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. A business-first, data-second, outcome-focused technology leader.
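The RAG workflows mentioned above reduce, at retrieval time, to similarity search over embeddings. A minimal cosine-similarity sketch over a hypothetical in-memory "vector store" (the documents and embedding values are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical pre-computed embeddings for three documents.
store = {
    "refund_policy": [0.9, 0.1, 0.0],
    "shipping_faq": [0.1, 0.9, 0.1],
    "api_reference": [0.0, 0.2, 0.9],
}

query = [0.85, 0.15, 0.05]  # embedding of the user's question
best = max(store, key=lambda doc: cosine(query, store[doc]))
print(best)  # refund_policy — the closest document feeds the RAG prompt
```

Production vector databases replace the linear scan with approximate nearest-neighbor indexes, but the scoring idea is the same.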

Experience & Relevance

  • Executive Experience: 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
  • Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
  • Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
  • Operational Leadership: Proven expertise in managing and scaling large professional services organizations, demonstrated ability to optimize utilization, resource allocation, and operational expense.
  • Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
  • Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.

Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.

Searce Inc
Posted by Tejashree Kokare
Bengaluru (Bangalore), Pune, Mumbai
6 - 15 yrs
Best in industry
Google Cloud Platform (GCP)
Data engineering
Data warehouse architecture
Data architecture
Data modeling
+6 more

Solutions Architect - Data Engineering


Modern tech solutions advisory & 'futurify' consulting: as a Searce lead FDS ('forward-deployed solver'), you will architect scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.

If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.

You're not a sales lead. You're the transformation driver.


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.

  • Improver. Solver. Futurist.
  • Great sense of humor.
  • ‘Possible. It is.’ Mindset.
  • Compassionate collaborator. Bold experimenter. Tireless iterator.
  • Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
  • Thinks in systems. Solves at scale.


This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way, and then identifies 3 better ways to do it, we’d love to chat with you.


Your Responsibilities

what you will wake up to solve.


You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customer's entire data intelligence ecosystem.

As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack including Snowflake and Databricks, plus core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions; you'll build them, break them, and perfect them.


  • Solution Design & Pre-sales Excellence: Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
  • Design Core Data Engineering: Master data modeling, architect high-performance data ingestion pipelines, and ensure data quality and governance throughout the data lifecycle.
  • Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
  • Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
  • Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
  • Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.
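One recurring pattern behind the high-performance ingestion pipelines described above is the idempotent upsert: merging each new batch into existing records by key, so replaying a batch never duplicates data. A minimal sketch with hypothetical record keys:

```python
def upsert(existing, batch, key="id"):
    """Merge batch into existing by key; later records win. Replaying is a no-op."""
    merged = {r[key]: r for r in existing}
    for r in batch:
        merged[r[key]] = r  # insert new keys, overwrite changed ones
    return sorted(merged.values(), key=lambda r: r[key])

existing = [{"id": 1, "status": "new"}, {"id": 2, "status": "new"}]
batch = [{"id": 2, "status": "shipped"}, {"id": 3, "status": "new"}]

once = upsert(existing, batch)
twice = upsert(once, batch)  # idempotent: same result on replay
print(len(once), once == twice)  # 3 True
```

In a warehouse this same logic is typically a `MERGE` statement keyed on the business identifier.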


Functional Skills:


  • Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
  • Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
  • Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
  • Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
  • Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
  • Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
  • Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.


Tech Superpowers:


  • Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
  • Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
  • AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
  • Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.


Experience & Relevance:


  • Architectural Leadership (8+ Years): You bring extensive experience (8+ years) specifically in a Solutions Architect role, focused on data engineering and platform building.
  • Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
  • Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
  • Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data platform and GCP BigQuery for analytical workloads.
  • Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
  • AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’.


Don’t Just Send a Resume. Send a Statement.


So, if you are passionate about tech, the future & what you read above (we really are!), apply here to experience the ‘Art of the Possible’.

TalentXO
Posted by Tabbasum Shaikh
Bengaluru (Bangalore), Karnataka
10 - 12 yrs
₹20L - ₹25L / yr
PostgreSQL
Docker
Kubernetes
GoLang
Kafka
+9 more

Must Have Skills:

  • Overall 10+ years of experience in application development using Golang.
  • Experience in designing and developing REST based services / Microservice development.
  • Ability to design scalable, robust, and error-tolerant systems.
  • Understanding of software architecture and distributed systems.
  • Proficient in writing efficient and optimized algorithms under time constraints.
  • Skilled in developing solutions that balance performance, readability, and maintainability.
  • Ability to effectively communicate coding decisions and rationale during problem-solving discussions.
  • Hands-on experience with queuing mechanisms such as Kafka or RabbitMQ.
  • Candidates should be adaptable and eager to quickly learn and integrate into the existing tech stack if they lack direct experience.
  • Candidate should have good communication skills (written and verbal).
  • Experience with delivering projects in an agile environment using SCRUM methodologies.

Good to have:

  • Experience with AWS, CI/CD, and DevOps.
  • Experience using container management tools such as Kubernetes, Docker and Rancher.
  • Experience with any one of these data stores: Cassandra, Postgres, Couchbase, or other NoSQL servers.


AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT
+17 more

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or DBT to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
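The ELT pattern called for above (land raw data first, then transform with SQL inside the warehouse) can be sketched with stdlib sqlite3 standing in for a cloud warehouse such as BigQuery; the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Load step: raw data lands untransformed in a staging table.
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 10.0, "IN"), (2, 5.5, "IN"), (3, 7.0, "US")],
)

# Transform step: build a modeled table from the raw layer, entirely in SQL --
# this is the kind of logic a dbt or Dataform model would own.
conn.execute("""
    CREATE TABLE revenue_by_country AS
    SELECT country, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY country
""")

rows = conn.execute(
    "SELECT country, revenue FROM revenue_by_country ORDER BY country"
).fetchall()
print(rows)  # [('IN', 15.5), ('US', 7.0)]
```

Tools like dbt essentially manage hundreds of such SQL transform steps, plus their dependency order and tests.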


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates will be expected to be available on these interview dates. Only immediate joiners are considered.
Hyderabad
3 - 7 yrs
₹10L - ₹30L / yr
Scala
Spark
Play Framework
Apache Spark
NOSQL Databases
+4 more

Company Description


TECHSOPHY specializes in productizing solutions based on new technology, focusing on emerging platforms of BPM & ECM, Low Code, AI (ML/RPA/NLP). Founded in 2009, TECHSOPHY operates with headquarters in California, USA, and regional offices in Dubai, UAE, and an offshore innovation center in Hyderabad, India.


Qualifications

  • Solid Fundamentals and exceptional problem-solving skills
  • Solid and fluent understanding of algorithms and data structures
  • Proficiency in Scala + Spark
  • Proficiency in Scala + Play framework
  • Experience Range: 4 to 7 Years


Requirement:

Some or all of them – because we believe intelligent people can pick up whatever they need in a short period of time. You just need to prove that you can:

  • Excellent programming skills and knowledge of Java / Scala
  • Excellent software design, problem-solving, and debugging skills
  • Experience with modern Big Data technologies such as Spark, NoSQL, Cassandra, Kafka, MapReduce, and the Hadoop ecosystem is a must-have
  • Experience with data analytics and the ability to mine data to obtain insights are much appreciated
ManpowerGroup
Posted by Shirisha Jangi
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
₹20L - ₹27L / yr
Data engineering
Java
Python
SQL
Scala
+3 more

Immediate hiring for Senior Data Engineer

📍 Location: Hyderabad/Bangalore

💼 Experience: 7+ years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0–1 month (candidates serving notice only)

 

We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.

 

🔎 Key Responsibilities:

  • Data Pipeline Development
  • Data Modeling and Architecture
  • Data Integration and API Development
  • Data Infrastructure Management
  • Collaboration and Documentation

 

🎯 Required Skills:

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • 7+ years of proven experience in data engineering, software development, or related technical roles.
  • 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
  • 7+ years of experience with database systems, data modeling, and advanced SQL.
  • 7+ years of experience with ETL and data platform tools such as SSIS, Snowflake, Databricks, Azure Data Factory, and stored procedures.
  • Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
  • Strong analytical, problem-solving, and debugging skills with high attention to detail.
  • Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
  • Ability to adapt to rapidly evolving technologies and business requirements.

 

 

Business Intelligence & Digital Consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹14L - ₹16L / yr
Python
Data Science
Machine Learning (ML)
SQL
Data Analytics
+6 more

Description

JOB DESCRIPTION – SENIOR ANALYST – DATA SCIENTIST


Key Responsibilities

  • Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
  • Advanced skills in statistical programming with Python and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
  • Solid understanding of time-series forecasting techniques
  • Good hands-on skills in both feature engineering and hyperparameter optimization
  • Able to write clean and tested code that can be maintained by other software engineers
  • Able to clearly summarize and communicate data analysis assumptions and results
  • Able to craft effective data pipelines to transform your analyses from offline to production systems
  • Self-motivated and a proactive problem solver who can work independently and in teams
  • Connects both externally and internally to understand industry trends, technology advances, and outstanding processes or solutions
  • Is collaborative and engages (strategic & tactical); able to influence without authority, handle complex issues, and implement positive change
  • Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
  • Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
  • Provide guidance and leadership to more junior data scientists, managing processes and flow of work, vetting designs, and mentoring team members to realize their full potential
  • Lead discussions at peer review and use interpersonal skills to positively influence decision making
  • Provide subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
  • Facilitate cross-geography sharing of new ideas, learnings, and best practices

 

What We Are Looking For

Required Qualifications

  • Master's degree in a quantitative field such as Data Science, Statistics, or Applied Mathematics, or a Bachelor's degree in Engineering, Computer Science, or a related field
  • 4–6 years of total work experience in a data scientist or analytical role, with at least 2–3 years of experience in time-series forecasting
  • A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to quickly cycle hypotheses through the discovery phase of a project
  • Strong experience in Time Series Forecasting and Demand Planning
  • Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
  • Good hands-on skills in both feature engineering and hyperparameter optimization
  • Experience producing high-quality code, tests, and documentation
  • Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
  • Proficiency in statistical concepts and ML algorithms
  • Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
  • Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results
  • Self-motivated and a proactive problem solver who can work independently and in teams
  • Outstanding verbal and written communication skills with the ability to effectively advocate technical solutions to engineering and business teams
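As a toy stand-in for the time-series forecasting skills required above, the trailing moving average is the classic naive baseline against which fancier models are judged; the demand series here is made up for illustration:

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

demand = [100, 104, 98, 103, 105, 101]
print(moving_average_forecast(demand))            # (103 + 105 + 101) / 3 = 103.0
print(moving_average_forecast(demand, window=2))  # (105 + 101) / 2 = 103.0
```

A real demand-planning model would add trend and seasonality terms, but any candidate model should at least beat this baseline on held-out data.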


Desired Qualifications

  • Experience working in one or multiple supply chain functions (e.g., procurement, planning, manufacturing, quality, logistics) is strongly preferred
  • Experience in applying AI/ML within a CPG or Healthcare business environment is strongly preferred
  • Experience in creating CI/CD pipelines for deployment using Jenkins
  • Experience implementing an MLOps framework, along with an understanding of data security
  • Implementation of ML models
  • Exposure to visualization packages and the Azure tech stack

 

Must have skills

  • Python - 2 years
  • Data Science - 4 years
  • SQL - 2 years
  • Machine Learning - 2 years

Nice to have skills

  • Data Analysis - 4 years
  • Time Series Forecasting - 2 years
  • Demand Planning - 2 years
  • Hadoop - 2 years
  • Statistical concepts - 2 years
  • Supply chain functions - 2 years

Ebrotech Software Solutions
Remote, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
8 - 10 yrs
₹15L - ₹20L / yr
Systems design
Microservices
Apache Kafka
Oracle
Cassandra

Job Description

  • 8+ years of experience; lead, coach, and mentor a team of 5–8 full-stack and backend engineers
  • Write high quality code
  • Champion coding standards and re-usable components
  • Influence the technical direction of the engineering team
  • Partner with Product Managers in designing and defining new features
  • Serve as a key member of a Scrum team
  • Participate and potentially lead Communities-of-Practice programs.

 

Requirements

  • 8+ years of large-scale distributed ecommerce systems experience
  • Expert understanding of Java, database, and messaging technologies
  • Enthusiasm for constant improvement as a Software Engineer
  • Ability to communicate clearly and effectively and to motivate team members
  • Reactive Java a plus
  • Passionate about Ecommerce and retail a plus


Technical Skill Set

  • Java (Core & Advanced)
  • Spring Boot, RESTful Services, Reactive Java (preferred)                                                                    
  • Vue.js, Web Components (Lit Framework)
  • Oracle, Cassandra
  • Apache Kafka
  • System Design, Microservices Architecture, Scalable & Event-Driven Systems
Consumer Internet, Technology & Travel and Tourism Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 10 yrs
₹45L - ₹60L / yr
DevOps
Cloud Computing
Infrastructure
Kubernetes
Docker
+22 more

Job Details

Job Title: Lead DevOps Engineer

Industry: Consumer Internet, Technology & Travel and Tourism Platform

Function - IT

Experience Required: 7-10 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Criteria:

  • Strong Lead DevOps / Infrastructure Engineer Profiles.
  • Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
  • Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
  • Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
  • Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
  • Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
  • Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
  • Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
  • Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
  • (Company) – Must be from B2C Product Companies only.
  • (Education) – B.E/ B.Tech

 

Preferred

  • Experience working in microservices architecture and event-driven systems.
  • Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
  • (Skills) – Understanding of programming languages such as Go, Python, or Java.
  • (Environment) – Experience working in high-growth startup or large-scale production environments.

 

Job Description 

As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

  • Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
  • Codify our infrastructure
  • Do what it takes to keep the uptime above 99.99%
  • Understand the bigger picture and sail through the ambiguities
  • Scale technology considering cost and observability and manage end-to-end processes
  • Understand DevOps philosophy and evangelize the principles across the organization
  • Strong communication and collaboration skills to break down the silos
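
As a rough illustration of what the 99.99% uptime target above implies, the sketch below computes the monthly downtime budget. This is standard SLA arithmetic rather than anything stated in the posting:

```python
def downtime_budget_minutes(sla: float, period_hours: float = 30 * 24) -> float:
    """Allowed downtime, in minutes, for a given availability target over a period.

    sla is a fraction, e.g. 0.9999 for 99.99% availability.
    """
    return period_hours * 60 * (1 - sla)

# A 30-day month at 99.99% availability leaves roughly 4.3 minutes of downtime.
monthly_budget = downtime_budget_minutes(0.9999)
```

At four nines, a single botched deploy can consume the entire monthly budget, which is why uptime ownership tends to go hand in hand with observability and automation.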

 

Consumer Internet, Technology & Travel and Tourism Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 7 yrs
₹38L - ₹50L / yr
DevOps
Cloud Computing
Infrastructure
Kubernetes
Docker

Job Details

Job Title: Senior DevOps Engineer

Industry: Consumer Internet, Technology & Travel and Tourism Platform

Function - IT

Experience Required: 4-7 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Criteria:

  • Strong DevOps / Infrastructure Engineer Profiles.
  • Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
  • Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
  • Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
  • Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
  • Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
  • Candidate must demonstrate strong expertise in at least one of the following areas - Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
  • Candidates must be from B2C Product-based companies only.
  • (Education) – BE / B.Tech or equivalent

 

Preferred

  • Experience working with microservices or event-driven architectures.
  • Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
  • (Skills) – Understanding of programming languages such as Go, Python, or Java.
  • Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.

 

Job Description 

As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

  • Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
  • Codify our infrastructure
  • Do what it takes to keep the uptime above 99.99%
  • Understand the bigger picture and sail through the ambiguities
  • Scale technology considering cost and observability and manage end-to-end processes
  • Understand DevOps philosophy and evangelize the principles across the organization
  • Strong communication and collaboration skills to break down the silos


Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune, Trivandrum (Thiruvananthapuram)
8 - 10 yrs
₹20L - ₹24L / yr
Java
Python
API
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Job Details

Job Title: Lead Software Engineer - Java, Python, API Development

Industry: Global digital transformation solutions provider

Domain - Information technology (IT)

Experience Required: 8-10 years

Employment Type: Full Time

Job Location: Pune & Trivandrum/ Thiruvananthapuram

CTC Range: Best in Industry

 

Job Description

Job Summary

We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.

 

Key Responsibilities

  • Design, develop, and maintain backend services and APIs using Java and Python
  • Build and optimize Java-based APIs for large-scale data processing
  • Ensure high performance, scalability, and reliability of backend systems
  • Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
  • Collaborate with cross-functional teams to deliver production-ready solutions
  • Lead technical design discussions and guide best practices

 

Requirements

  • 8+ years of experience in backend software development
  • Strong proficiency in Java and Python
  • Proven experience building scalable APIs and data-driven applications
  • Hands-on experience with cloud services and distributed systems
  • Solid understanding of databases, microservices, and API performance optimization

 

Nice to Have

  • Experience with Spring Boot, Flask, or FastAPI
  • Familiarity with Docker, Kubernetes, and CI/CD pipelines
  • Exposure to Kafka, Spark, or other big data tools

 

Skills

Java, Python, API Development, Data Processing, AWS Backend

 


 

Must-Haves

Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices

8+ years of experience in backend software development

Strong proficiency in Java and Python

Proven experience building scalable APIs and data-driven applications

Hands-on experience with cloud services and distributed systems

Solid understanding of databases, microservices, and API performance optimization

Mandatory Skills: Java API AND AWS

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Pune, Trivandrum

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
5 - 8 yrs
₹11L - ₹20L / yr
PySpark
Apache Kafka
Data architecture
Amazon Web Services (AWS)
EMR

JOB DETAILS:

* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 5-8 years

* Location: Hyderabad

 

Job Summary

We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.


Key Responsibilities

ETL Pipeline Development & Optimization

  • Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
  • Optimize data pipelines for performance, scalability, fault tolerance, and reliability.

Big Data Processing

  • Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
  • Ensure fault-tolerant, scalable, and high-performance data processing systems.
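
One common building block behind the fault tolerance asked for above is retrying transient failures with exponential backoff. A minimal, generic sketch follows; the function names are illustrative, not from the posting:

```python
import time

def with_retries(fn, max_attempts: int = 3, base_delay: float = 0.05):
    """Call fn(), retrying on exceptions with exponential backoff.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a source that fails twice before succeeding.
state = {"calls": 0}

def flaky_read():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient broker error")
    return "payload"

result = with_retries(flaky_read)
```

In a real Spark or Kafka pipeline the same idea usually lives in the framework's own retry and delivery settings rather than hand-rolled code; the sketch only shows the mechanism.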

Cloud Infrastructure Development

  • Build and manage scalable, cloud-native data infrastructure on AWS.
  • Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.

Real-Time & Batch Data Integration

  • Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
  • Ensure consistency, data quality, and a unified view across multiple data sources and formats.

Data Analysis & Insights

  • Partner with business teams and data scientists to understand data requirements.
  • Perform in-depth data analysis to identify trends, patterns, and anomalies.
  • Deliver high-quality datasets and present actionable insights to stakeholders.

CI/CD & Automation

  • Implement and maintain CI/CD pipelines using Jenkins or similar tools.
  • Automate testing, deployment, and monitoring to ensure smooth production releases.

Data Security & Compliance

  • Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
  • Implement data governance practices ensuring data integrity, security, and traceability.

Troubleshooting & Performance Tuning

  • Identify and resolve performance bottlenecks in data pipelines.
  • Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.

Collaboration & Cross-Functional Work

  • Work closely with engineers, data scientists, product managers, and business stakeholders.
  • Participate in agile ceremonies, sprint planning, and architectural discussions.


Skills & Qualifications

Mandatory (Must-Have) Skills

  1. AWS Expertise
  • Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
  • Strong understanding of cloud-native data architectures.
  2. Big Data Technologies
  • Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
  • Experience with Apache Spark and Apache Kafka in production environments.
  3. Data Frameworks
  • Strong knowledge of Spark DataFrames and Datasets.
  4. ETL Pipeline Development
  • Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
  5. Database Modeling & Data Warehousing
  • Expertise in designing scalable data models for OLAP and OLTP systems.
  6. Data Analysis & Insights
  • Ability to perform complex data analysis and extract actionable business insights.
  • Strong analytical and problem-solving skills with a data-driven mindset.
  7. CI/CD & Automation
  • Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
  • Familiarity with automated testing and deployment workflows.

 

Good-to-Have (Preferred) Skills

  • Knowledge of Java for data processing applications.
  • Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
  • Familiarity with data governance frameworks and compliance tooling.
  • Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
  • Exposure to cost optimization strategies for large-scale cloud data platforms.

 

Skills: big data, scala spark, apache spark, ETL pipeline development

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Hyderabad

Note: If a candidate can join at short notice, is based in Hyderabad, and fits within the approved budget, we will proceed with an offer

F2F Interview: 14th Feb 2026

3 days in office, Hybrid model.

 


Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹42L - ₹45L / yr
DevOps
Python
Shell Scripting
Infrastructure
Terraform

JOB DETAILS:

- Job Title: Senior Devops Engineer 2

- Industry: Ride-hailing

- Experience: 5-7 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2.   Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

4.   Candidate must have experience in database migration from scratch 

5.   Must have a firm hold on the container orchestration tool Kubernetes

6.   Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

7.   Understanding of programming languages like Go, Python, and Java

8.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

9.   Working experience on Cloud platform - AWS

10. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding of programming languages like Go, Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as

● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
Python
Shell Scripting
Kubernetes
Terraform

JOB DETAILS:

- Job Title: Lead DevOps Engineer

- Industry: Ride-hailing

- Experience: 6-9 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2.   Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)

4.   Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

5.   Candidate must have hands-on experience in database migration from scratch

6.   Must have a firm hold on the container orchestration tool Kubernetes

7.   Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

8.   Understanding of programming languages like Go, Python, and Java

9.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

10.   Working experience on Cloud platform - AWS

11. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding of programming languages like Go, Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as

● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹34L - ₹37L / yr
DevOps
Python
Shell Scripting
Kubernetes
Monitoring

JOB DETAILS:

- Job Title: Senior Devops Engineer 1

- Industry: Ride-hailing

- Experience: 4-6 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1. Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).

3. Candidate must have solid experience with Kubernetes.

4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Prometheus & Grafana are an added advantage.

5. Candidate must be an individual contributor with strong ownership.

6. Candidate must have hands-on experience with DATABASE MIGRATIONS and observability tools such as Prometheus and Grafana.

7. Candidate must have working knowledge of Go/Python and Java.

8. Candidate should have working experience on Cloud platform - AWS

9. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.

- Understanding the needs of stakeholders and conveying this to developers.

- Working on ways to automate and improve development and release processes.

- Identifying technical problems and developing software updates and ‘fixes’.

- Working with software developers to ensure that development follows established processes and works as intended.

- Do what it takes to keep the uptime above 99.99%.

- Understand DevOps philosophy and evangelize the principles across the organization.

- Strong communication and collaboration skills to break down the silos

 

Job Requirements:

- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.

- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.

- Strong background in operating systems like Linux.

- Understands the container orchestration tool Kubernetes.

- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Prometheus & Grafana are an added advantage.

- Problem-solving attitude, and ability to write scripts using any scripting language.

- Understanding of programming languages like Go, Python, and Java.

- Basic understanding of databases and middlewares like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

- Should be able to take ownership of tasks, and must be responsible.

- Good communication skills

 

-
Remote only
8 - 13 yrs
₹10L - ₹33L / yr
Python
PySpark
Big Data
SQL

Role: Lead Data Engineer Core

Responsibilities:

  • Lead end-to-end design, development, and delivery of complex cloud-based data pipelines.
  • Collaborate with architects and stakeholders to translate business requirements into technical data solutions.
  • Ensure scalability, reliability, and performance of data systems across environments.
  • Provide mentorship and technical leadership to data engineering teams.
  • Define and enforce best practices for data modeling, transformation, and governance.
  • Optimize data ingestion and transformation frameworks for efficiency and cost management.
  • Contribute to data architecture design and review sessions across projects.


Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 8+ years of experience in data engineering with proven leadership in designing cloud-native data systems.
  • Strong expertise in Python, SQL, Apache Spark, and at least one cloud platform (Azure, AWS, or GCP).
  • Experience with Big Data, Data Lake, Delta Lake, and Lakehouse architectures.
  • Proficient in one or more database technologies (e.g., PostgreSQL, Redshift, Snowflake, and NoSQL databases).
  • Ability to recommend and implement scalable data pipelines.


Preferred Qualifications:

  • Cloud certification (AWS, Azure, or GCP).
  • Experience with Databricks, Snowflake, or Terraform.
  • Familiarity with data governance, lineage, and observability tools.
  • Strong collaboration skills and ability to influence data-driven decisions across teams.

Read more
AI-First Company

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Tecblic Private Limited
Ahmedabad
5 - 6 yrs
₹5L - ₹15L / yr
Windows Azure
Python
SQL
Data Warehouse (DWH)
Data modeling

Job Description: Data Engineer

Location: Ahmedabad

Experience: 5 to 6 years

Employment Type: Full-Time



We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.



Responsibilities


● Design and optimize data pipelines for various data sources


● Design and implement efficient data storage and retrieval mechanisms


● Develop data modelling solutions and data validation mechanisms


● Troubleshoot data-related issues and recommend process improvements


● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions


● Coach and mentor junior data engineers in the team
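

To give a flavor of the data-validation responsibility above, a tiny row-level check is sketched below; the column names and rules are hypothetical, not from this posting:

```python
def validate_rows(rows, required=("id", "amount")):
    """Split records into (valid, invalid) based on required non-null fields."""
    valid, invalid = [], []
    for row in rows:
        if all(row.get(col) is not None for col in required):
            valid.append(row)
        else:
            invalid.append(row)
    return valid, invalid

records = [
    {"id": 1, "amount": 120.5},
    {"id": 2, "amount": None},   # fails the non-null rule
]
good, bad = validate_rows(records)
```

Production pipelines typically express such rules declaratively in a data-quality framework and route the invalid records to a quarantine table for remediation.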




Skills Required: 


● Minimum 4 years of experience in data engineering or related field


● Proficient in designing and optimizing data pipelines and data modeling


● Strong programming expertise in Python


● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive


● Extensive experience with cloud data services such as AWS, Azure, and GCP


● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing


● Knowledge of distributed computing and storage systems


● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage


● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities




Qualifications


  • Bachelor's degree in Computer Science, Data Science, or a computer-related field


Read more
Kanerika Software

at Kanerika Software

3 candid answers
2 recruiters
Soyam Gupta
Posted by Soyam Gupta
Hyderabad, Indore, Ahmedabad
7 - 15 yrs
₹22L - ₹40L / yr
Data governance
Data management
Meta-data management
Data security
Microsoft Windows Azure

What You Will Do :


As a Data Governance Lead at Kanerika, you will be responsible for defining, leading, and operationalizing the data governance framework, ensuring enterprise-wide alignment and regulatory compliance.


Required Qualifications :


- 7+ years of experience in data governance and data management.


- Proficient in Microsoft Purview and Informatica data governance tools.


- Strong in metadata management, lineage mapping, classification, and security.


- Experience with ADF, REST APIs, Talend, dbt, and automation via Azure tools.


- Knowledge of GDPR, CCPA, HIPAA, SOX and related compliance needs.


- Skilled in bridging technical governance with business and compliance goals.


Tools & Technologies :


- Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog


- Microsoft Purview capabilities :


1. Label creation & policy setup


2. Auto-labeling & DLP


3. Compliance Manager, Insider Risk, Records & Lifecycle Management


4. Unified Catalog, eDiscovery, Data Map, Audit, Compliance alerts, DSPM.


Key Responsibilities :


1. Governance Strategy & Stakeholder Alignment :


- Develop and maintain enterprise data governance strategies, policies, and standards.


- Align governance with business goals : compliance, analytics, and decision-making.


- Collaborate across business, IT, legal, and compliance teams for role alignment.


- Drive governance training, awareness, and change management programs.


2. Microsoft Purview Administration & Implementation :


- Manage Microsoft Purview accounts, collections, and RBAC aligned to org structure.


- Optimize Purview setup for large-scale environments (50TB+).


- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake.


- Schedule scans, set classification jobs, and maintain collection hierarchies.


3. Metadata & Lineage Management :


- Design metadata repositories and maintain business glossaries and data dictionaries.


- Implement ingestion workflows via ADF, REST APIs, PowerShell, Azure Functions.


- Ensure lineage mapping (ADF → Synapse → Power BI) and impact analysis.


4. Data Classification & Security Governance :


- Define classification rules and sensitivity labels (PII, PCI, PHI).


- Integrate with MIP, DLP, Insider Risk Management, and Compliance Manager.


- Enforce records management, lifecycle policies, and information barriers.


5. Data Quality & Policy Management :


- Define KPIs and dashboards to monitor data quality across domains.


- Collaborate on rule design, remediation workflows, and exception handling.


- Ensure policy compliance (GDPR, HIPAA, CCPA, etc.) and risk management.


6. Business Glossary & Stewardship :


- Maintain business glossary with domain owners and stewards in Purview.


- Enforce approval workflows, standard naming, and steward responsibilities.


- Conduct metadata audits for glossary and asset documentation quality.


7. Automation & Integration :


- Automate governance processes using PowerShell, Azure Functions, Logic Apps.


- Create pipelines for ingestion, lineage, glossary updates, tagging.


- Integrate with Power BI, Azure Monitor, Synapse Link, Collibra, BigID, etc.


8. Monitoring, Auditing & Compliance :


- Set up dashboards for audit logs, compliance reporting, metadata coverage.


- Oversee data lifecycle management across its phases.


- Support internal and external audit readiness with proper documentation.



Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹40L - ₹45L / yr
R Programming
Google Cloud Platform (GCP)
Data Science
Python
Data Visualization
+3 more

DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything to do with data, and we have the expertise required to transform customer businesses by making the right use of data.

 

About the Role:

As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.

Key Responsibilities:

  • Data Ingestion and Preparation:
  • Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes (Cloud Storage), using processing services such as Dataflow.
  • Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
  • Exploratory Data Analysis (EDA):
  • Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
  • Utilize visualization techniques (e.g., Tableau, Looker) to communicate findings effectively.
  • Feature Engineering:
  • Create relevant features from raw data to enhance model performance and interpretability.
  • Explore techniques like feature selection, normalization, and dimensionality reduction.
  • Model Development and Training:
  • Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
  • Evaluate model performance using appropriate metrics and iterate on the modeling process.
  • Model Deployment and Monitoring:
  • Deploy trained models into production environments using GCP's ML tools and infrastructure.
  • Monitor model performance over time, identify drift, and retrain models as needed.
  • Collaboration and Communication:
  • Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
  • Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.

Required Skills and Qualifications:

  • Strong proficiency in Python or R programming languages.
  • Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
  • Familiarity with machine learning algorithms and techniques.
  • Knowledge of data visualization tools (e.g., Tableau, Looker).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Qualifications:

  • Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
  • Knowledge of distributed systems and scalable data architectures.
  • Experience with natural language processing (NLP) or computer vision applications.
  • Certifications in Google Cloud Platform or relevant machine learning frameworks.


Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
Python
SQL
Google Cloud Platform (GCP)
SQL server
ETL
+9 more

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Lead and manage the data engineering team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.

 

 Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Read more
Remote only
10 - 15 yrs
₹25L - ₹40L / yr
data engineer
Apache Spark
Scala
Big Data
Python
+5 more

What You’ll Be Doing:

● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines and platforms.

● Lead and mentor a team of data engineers while establishing engineering best practices, coding standards, and governance models.

● Design and implement high-performance ETL/ELT pipelines using modern Big Data technologies for diverse internal and external data sources.

● Drive modernization initiatives including re-architecting legacy systems to support next-generation data products, ML workloads, and analytics use cases.

● Partner with Product, Engineering, and Business teams to translate requirements into robust technical solutions that align with organizational priorities.

● Champion data quality, monitoring, metadata management, and observability across the ecosystem.

● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and infrastructure scalability.

● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows, and cloud-based architecture improvements.


Qualifications:

● Bachelor's degree in Engineering, Computer Science, or relevant field.

● 8+ years of relevant and recent experience in a Data Engineer role.

● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Demonstrated ability to design, review, and optimize scalable data architectures across ingestion.

● Strong coding skills with Scala, Python and the ability to quickly switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Strong understanding of Delta Lake architecture and working with Parquet, JSON, CSV, and similar formats.

● Experience establishing and enforcing data engineering best practices, including CI/CD for data, orchestration and automation, and metadata management.

● Comfortable working in an Agile environment.

● Machine Learning knowledge is a plus.

● Demonstrated ability to operate independently, take ownership of deliverables, and lead technical decisions.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment.

REPORTING: This position will report to a Sr. Technical Manager or Director of Engineering as assigned by Management.

EMPLOYMENT TYPE: Full-Time, Permanent


SHIFT TIMINGS: 10:00 AM - 07:00 PM IST

Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
10 - 15 yrs
₹105L - ₹140L / yr
Data engineering
Apache Spark
Apache
Apache Kafka
Java
+25 more

MANDATORY:

  • High-calibre Data Architect / Data Engineering Manager / Director profile
  • Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a leadership role
  • Must have 7+ YOE in hands-on tech development with Java (highly preferred) or Python, Node.js, GoLang
  • Must have strong experience in large data technologies and tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto, etc.
  • Strong expertise in HLD and LLD, to design scalable, maintainable data architectures
  • Must have managed a team of at least 5+ Data Engineers (leadership role should be evident in the CV)
  • Product companies (high-scale, data-heavy companies preferred)


PREFERRED:

  • Must be from Tier-1 colleges, IIT preferred
  • Candidates must have spent a minimum of 3 years in each company
  • Must have recent 4+ YOE with high-growth product startups, and should have implemented Data Engineering systems from an early stage in the company


ROLES & RESPONSIBILITIES:

  • Lead and mentor a team of data engineers, ensuring high performance and career growth.
  • Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
  • Drive the development and implementation of data governance frameworks and best practices.
  • Work closely with cross-functional teams to define and execute a data roadmap.
  • Optimize data processing workflows for performance and cost efficiency.
  • Ensure data security, compliance, and quality across all data platforms.
  • Foster a culture of innovation and technical excellence within the data team.


IDEAL CANDIDATE:

  • 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
  • Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
  • Proficiency in SQL, Python, and Scala for data processing and analytics.
  • Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
  • Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
  • Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
  • Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
  • Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
  • Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
  • Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
  • Proven ability to drive technical strategy and align it with business objectives.
  • Strong leadership, communication, and stakeholder management skills.


PREFERRED QUALIFICATIONS:

  • Experience in machine learning infrastructure or MLOps is a plus.
  • Exposure to real-time data processing and analytics.
  • Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
  • Prior experience in a SaaS or high-growth tech company.
Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Bengaluru (Bangalore)
5 - 8 yrs
₹5L - ₹20L / yr
Apache Hive
Apache Spark
skill iconPython
SQL
Hadoop
+1 more

Profile: Big Data Engineer (System Design)

Experience: 5+ years

Location: Bangalore

Work Mode: Hybrid

About the Role

We're looking for an experienced Big Data Engineer with system design expertise to architect and build scalable data pipelines and optimize big data solutions.

Key Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using Python, Hive, and Spark
  • Architect scalable big data solutions with strong system design principles
  • Build and optimize workflows using Apache Airflow
  • Implement data modeling, integration, and warehousing solutions
  • Collaborate with cross-functional teams to deliver data solutions

Must-Have Skills

  • 5+ years as a Data Engineer with Python, Hive, and Spark
  • Strong hands-on experience with Java
  • Advanced SQL and Hadoop experience
  • Expertise in Apache Airflow
  • Strong understanding of data modeling, integration, and warehousing
  • Experience with relational databases (PostgreSQL, MySQL)
  • System design knowledge
  • Excellent problem-solving and communication skills

Good to Have

  • Docker and containerization experience
  • Knowledge of Apache Beam, Apache Flink, or similar frameworks
  • Cloud platform experience.
Read more
Tata Consultancy Services
Bengaluru (Bangalore), Hyderabad, Pune, Delhi, Kolkata, Chennai
5 - 8 yrs
₹7L - ₹30L / yr
Scala
Python
PySpark
Apache Hive
Spark
+3 more

Skills and competencies:

Required:

  • Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.

  • Working experience in PySpark and Scala to develop, validate, and implement models and code in Credit Risk/Banking.

  • Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.

  • Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).

  • Experience in systems integration, web services, and batch processing.

  • Experience in migrating code to PySpark/Scala is a big plus.

  • The ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.

  • Flexibility in approach and thought process.

  • Attitude to learn and comprehend the periodic changes in regulatory requirements as per the FED.

Read more
Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by susmitha o
Chennai, Hyderabad, Kolkata, Delhi, Pune, Bengaluru (Bangalore)
5 - 8 yrs
₹7L - ₹30L / yr
Informatica MDM
MDM
ETL
Big Data

• Technical expertise in the development of Master Data Management, data extraction, transformation, and load (ETL) applications, and big data, using existing and emerging technology platforms and cloud architecture.

• Functions as lead developer.

• Support system analysis, technical/data design, development, and unit testing, and oversee the end-to-end data solution.

• Technical SME in Master Data Management applications, ETL, big data, and cloud technologies.

• Collaborate with IT teams to ensure technical designs and implementations account for requirements, standards, and best practices.

• Performance tuning of end-to-end MDM, database, ETL, and big data processes, or in the source/target database endpoints as needed.

• Mentor and advise junior members of the team to provide guidance.

• Perform a technical lead and solution lead role for a team of onshore and offshore developers.

Read more
Blurgs
Blurgs Innovations
Posted by Blurgs Innovations
Hyderabad
0 - 3 yrs
₹4L - ₹10L / yr
Apache
Databases
Infrastructure
Data Structures
Big Data

Job Title: Data Engineer

Location: Hyderabad


About us:


Blurgs AI is a deep-tech startup focused on maritime and defence data-intelligence solutions, specialising in multi-modal sensor fusion and data correlation. Our flagship product, Trident, provides advanced domain awareness for maritime, defence, and commercial sectors by integrating data from various sensors like AIS, Radar, SAR, and EO/IR.


At Blurgs AI, we foster a collaborative, innovative, and growth-driven culture. Our team is passionate about solving real-world challenges, and we prioritise an open, inclusive work environment where creativity and problem-solving thrive. We encourage new hires to bring their ideas to the table, offering opportunities for personal growth, skill development, and the chance to work on cutting-edge technology that impacts global defence and maritime operations.


Join us to be part of a team that's shaping the future of technology in a fast-paced, dynamic industry.


Job Summary:


We are looking for a Data Engineer to design, build, and maintain a robust, scalable on-premise data infrastructure. You will focus on real-time and batch data processing using platforms such as Apache Pulsar and Apache Flink, work with NoSQL databases like MongoDB and ClickHouse, and deploy services using containerization technologies like Docker and Kubernetes. This role is ideal for engineers with strong systems knowledge, deep backend data experience, and a passion for building efficient, low-latency data pipelines in a non-cloud, on-prem environment.


Key Responsibilities:


  • Data Pipeline & Streaming Development
  • Design and implement real-time data pipelines using Apache Pulsar and Apache Flink to support mission-critical systems.
  • Develop high-throughput, low-latency data ingestion and processing workflows across streaming and batch workloads.
  • Integrate internal systems and external data sources into a unified on-prem data platform.
  • Data Storage & Modelling
  • Design efficient data models for MongoDB, ClickHouse, and other on-prem databases to support analytical and operational workloads.
  • Optimise storage formats, indexing strategies, and partitioning schemes for performance and scalability.
  • Infrastructure & Containerization
  • Deploy, manage, and monitor containerised data services using Docker and Kubernetes in on-prem environments.
  • Performance, Monitoring & Reliability
  • Monitor the performance of streaming jobs and database queries; fine-tune for efficiency and reliability.
  • Implement robust logging, metrics, and alerting solutions to ensure data system availability and uptime.
  • Identify bottlenecks in the pipeline and proactively implement optimisations.


Required Skills & Experience:


  • Strong experience in data engineering with a focus on on-premise infrastructure.
  • Strong expertise in streaming technologies like Apache Pulsar, Apache Flink, or similar.
  • Deep experience with MongoDB, ClickHouse, and other NoSQL or columnar storage databases.
  • Proficient in Python, Java, or Scala for data processing and backend development.
  • Hands-on experience deploying and managing systems using Docker and Kubernetes.
  • Familiarity with Linux-based systems, system tuning, and resource monitoring.


Preferred Qualifications:


Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or an equivalent combination of education and experience.


Additional Responsibilities for Senior Data Engineers :


For those hired as Senior Data Engineers, the role will come with added responsibilities, including:

  • Leadership & Mentorship: Guide and mentor junior engineers, sharing expertise and best practices.
  • System Architecture: Lead the design and optimization of complex real-time and batch data pipelines, ensuring scalability and performance.
  • Sensor Data Expertise: Focus on building and optimizing sensor-based data pipelines and stateful stream processing for mission-critical applications in domains like maritime and defense.
  • End-to-End Ownership: Take responsibility for the performance, reliability, and optimization of data systems.


Compensation:


  • Data Engineer CTC: 4 - 8 LPA
  • Senior Data Engineer CTC: 12 - 16 LPA
Read more
Remote only
12 - 16 yrs
₹20L - ₹35L / yr
Scala
Apache Spark
Big Data
Data engineering
Databricks
+1 more

What You’ll Be Doing:

● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.

● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.


Qualifications:

● Bachelor's degree in Engineering, Computer Science, or relevant field.

● 10+ years of relevant and recent experience in a Data Engineer role.

● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.

● Comfortable working in a linux shell environment and writing scripts as needed.

● Comfortable working in an Agile environment

● Machine Learning knowledge is a plus.

● Must be capable of working independently and delivering stable, efficient and reliable software.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment.


REPORTING: This position will report to our CEO or any other Lead as assigned by Management.


EMPLOYMENT TYPE: Full-Time, Permanent

LOCATION: Remote


SHIFT TIMINGS: 2:00 PM - 11:00 PM IST

Read more
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Remote only
3 - 7 yrs
₹8L - ₹20L / yr
Google Cloud Platform (GCP)
ETL
Python
Big Data
SQL
+4 more

Must have skills:

1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java

2. ETL on GCP Cloud - Build pipelines (Python/Java) + Scripting, Best Practices, Challenges

3. Knowledge of batch and streaming data ingestion; building end-to-end data pipelines on GCP

4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs NoSQL, and types of NoSQL DBs (at least 2 databases)

5. Data Warehouse concepts - Beginner to Intermediate level


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Ability to design and implement Dimensional and Fact tables.

● Identify and implement data transformation/cleansing requirements.

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse.

● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions.

● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique.

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.

● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.

● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Akansha Sharma
Posted by Akansha Sharma
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
Generative AI
MLOps
Data engineering
Big Data


Notice Period - 0-15 days Max

Apply only if you are currently based in Karnataka

F2F interview

Interview - 4 rounds


Job Title: AI Specialist

Company Overview: We are the Technology Center of Excellence for Long Arc Capital, which provides growth capital to businesses with a sustainable competitive advantage and a strong management team with whom we can partner to build a category leader. We focus on North American and European companies where technology is transforming traditional business models in the Financial Services, Business Services, Technology, Media and Telecommunications sectors.

As part of our mission to leverage AI for business innovation, we are establishing an AI COE to develop Generative AI (GenAI) and Agentic AI solutions that enhance decision-making, automation, and user experiences.

Job Overview: We are seeking dynamic and talented individuals to join our AI COE. This team will focus on developing advanced AI models, integrating them into our cloud-based platform, and delivering impactful solutions that drive efficiency, innovation, and customer value.

Key Responsibilities:

• As a Full Stack AI Engineer, research, design, and develop AI solutions for text, image, audio, and video generation.

• Build and deploy Agentic AI systems for autonomous decision-making across business outcomes and enhancing associate productivity.

• Work with domain experts to design and fine-tune AI solutions tailored to portfolio-specific challenges.

• Partner with data engineers across portfolio companies to:

o Preprocess large datasets and ensure high-quality input for training AI models.

o Develop scalable and efficient AI pipelines using frameworks like TensorFlow, PyTorch, and Hugging Face.

• Implement MLOps best practices for AI model deployment, versioning, and monitoring using tools like MLflow and Kubernetes.

• Ensure AI solutions adhere to ethical standards, comply with regulations (e.g., GDPR, CCPA), and mitigate biases.

• Design intuitive and user-friendly interfaces for AI-driven applications, collaborating with UX designers and frontend developers.

• Stay up to date with the latest AI research and tools and evaluate their applicability to our business needs.

Key Qualifications:

Technical Expertise:

• Proficiency in full stack application development (specifically using Angular, React).

• Expertise in backend technologies (Django, Flask) and cloud platforms (AWS SageMaker / Azure AI Studio).

• Proficiency in deep learning frameworks (TensorFlow, PyTorch, JAX).

• Proficiency with Large Language Models (LLMs) and generative AI tools (e.g., OpenAI APIs, LangChain, Stable Diffusion).

• Solid understanding of data engineering workflows, including ETL processes and distributed computing tools (Apache Spark, Kafka).

• Experience with data pipelines, big data processing, and database management (SQL, NoSQL).

• Knowledge of containerization (Docker) and orchestration (Kubernetes) for scalable AI deployment.

• Familiarity with CI/CD pipelines and automation tools (Terraform, Jenkins).

• Good understanding of AI ethics, bias mitigation, and compliance standards.

• Excellent problem-solving abilities and innovative thinking.

• Strong collaboration and communication skills, with the ability to work in cross-functional teams.

• Proven ability to work in a fast-paced and dynamic environment.

Preferred Qualifications:

• Advanced studies in Artificial Intelligence or a related field.

• Experience with reinforcement learning, multi-agent systems, or autonomous decision-making.

Read more
Coimbatore, Bengaluru (Bangalore), Mumbai
1 - 4 yrs
₹3.4L - ₹5L / yr
Python
JavaScript
Java
HTML/CSS
Big Data
+2 more

The Assistant Professor in CSE will teach undergraduate and graduate courses, conduct independent and collaborative research, mentor students, and contribute to departmental and institutional service.

Read more
Tecblic Private LImited
Ahmedabad
4 - 5 yrs
₹8L - ₹12L / yr
Microsoft Windows Azure
SQL
Python
PySpark
ETL
+2 more

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀


Job description

🔍 Job Title: Data Engineer

📍 Location: Ahmedabad

🚀 Work Mode: On-Site Opportunity

📅 Experience: 4+ Years

🕒 Employment Type: Full-Time

⏱️ Availability : Immediate Joiner Preferred


Join Our Team as a Data Engineer

We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.

As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.


Your Key Responsibilities

Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.

Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.

Implement data validation, transformation, and quality monitoring processes.

Collaborate with cross-functional teams to deliver impactful, data-driven solutions.

Proactively identify bottlenecks and optimize existing workflows and processes.

Provide guidance and mentorship to junior engineers in the team.
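The validation and quality-monitoring responsibility above can be sketched as a small rule-based check. This is a toy illustration only; the field names (`order_id`, `amount`) are hypothetical, and a production pipeline would typically use a dedicated framework such as Great Expectations:

```python
# Minimal data-quality sketch: validate records before they enter a pipeline.
# Field names ("order_id", "amount") are hypothetical, for illustration only.

def validate_record(record: dict) -> list:
    """Return a list of quality issues found in a single record."""
    issues = []
    if not record.get("order_id"):
        issues.append("missing order_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        issues.append("amount must be a non-negative number")
    return issues

def quality_report(records: list) -> dict:
    """Aggregate counts of valid vs. invalid records for monitoring."""
    bad = [r for r in records if validate_record(r)]
    return {"total": len(records), "invalid": len(bad)}
```

The report dict is the kind of metric a monitoring job could emit on every batch to catch upstream regressions early.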


Skills & Expertise We’re Looking For

3+ years of hands-on experience in Data Engineering or related roles.

Strong expertise in Python and data pipeline design.

Experience working with Big Data tools like Hadoop, Spark, Hive.

Proficiency with SQL, NoSQL databases, and data warehousing solutions.

Solid experience in cloud platforms - Azure

Familiar with distributed computing, data modeling, and performance tuning.

Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.

Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.


Qualifications

Bachelor’s degree in Computer Science, Data Science, or a related field.

Read more
Coimbatore
0 - 5 yrs
₹2.5L - ₹7L / yr
skill iconPython
skill iconC++
skill iconHTML/CSS
skill iconJavascript
Big Data
+2 more

A Computer Scientist/Engineer designs, develops, tests, and integrates computer software and hardware systems. This pivotal role blends deep knowledge of computer architecture with advanced software engineering—driving innovation in platforms spanning from embedded systems and networks to AI and cybersecurity

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Chennai
8 - 12 yrs
₹10L - ₹26L / yr
skill iconPython
skill iconMachine Learning (ML)
Scikit-Learn
TensorFlow
PyTorch
+10 more

Job Title : Senior Machine Learning Engineer

Experience : 8+ Years

Location : Chennai

Notice Period : Immediate Joiners Only

Work Mode : Hybrid


Job Summary :

We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.

The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.


Mandatory Skills :

  • Programming Languages : Python
  • Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
  • ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering
  • Operating Systems : RHEL or any Unix-based OS
  • Databases : Oracle or any relational database
  • Version Control : Git
  • Development Methodologies : Agile
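As a concrete flavor of the clustering techniques listed above (K-Means in particular), here is a minimal pure-Python sketch of one Lloyd's-algorithm iteration: the assignment (E) step and the centroid-update (M) step. Real work would use Scikit-learn's `KMeans`; this sketch assumes non-empty clusters:

```python
import math

def assign(points, centroids):
    """Assign each point to the index of its nearest centroid (K-Means E-step)."""
    return [min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
            for p in points]

def update(points, labels, k):
    """Recompute each centroid as the mean of its assigned points (M-step).
    Assumes every cluster has at least one member."""
    centroids = []
    for i in range(k):
        members = [p for p, label in zip(points, labels) if label == i]
        centroids.append(tuple(sum(coord) / len(members) for coord in zip(*members)))
    return centroids
```

Iterating `assign` and `update` until the labels stop changing is exactly the K-Means loop; DBSCAN and fuzzy clustering replace the hard nearest-centroid assignment with density-based or soft membership rules.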

Desired Skills :

  • Experience with issue tracking tools such as Azure DevOps or JIRA.
  • Understanding of data science concepts.
  • Familiarity with Big Data algorithms, models, and libraries.
Read more
Hunarstreet Technologies pvt ltd

Hunarstreet Technologies pvt ltd

Agency job
via Hunarstreet Technologies Pvt Ltd by Sakshi Patankar
Remote only
10 - 20 yrs
₹15L - ₹30L / yr
Data engineering
databricks
skill iconPython
skill iconScala
Spark
+14 more

What You’ll Be Doing:

● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.

● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.

Qualifications:

● Bachelor's degree in Engineering, Computer Science, or relevant field.

● 10+ years of relevant and recent experience in a Data Engineer role.

● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.

● Comfortable working in a Linux shell environment and writing scripts as needed.

● Comfortable working in an Agile environment

● Machine Learning knowledge is a plus.

● Must be capable of working independently and delivering stable, efficient and reliable software.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment


EMPLOYMENT TYPE: Full-Time, Permanent

LOCATION: Remote (Pan India)

SHIFT TIMINGS: 2.00 pm-11:00pm IST 

Read more
Astegic

at Astegic

3 recruiters
Agency job
via Hunarstreet Technologies Pvt Ltd by Priyanka Londhe
Remote only
10 - 13 yrs
₹30L - ₹50L / yr
skill iconScala
Apache Spark
Big Data
skill iconPython
skill iconJava
+3 more

POSITION:

Senior Data Engineer

The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up.

You will collaborate with our software engineers, database architects, data analysts and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.


What You’ll Be Doing:

● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.

● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.
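The extract-transform-load flow described above can be sketched end to end in miniature. This is a toy in-memory version with hypothetical field names, standing in for the Spark/Databricks stack the role actually uses:

```python
import json

def extract(raw_lines):
    """Extract: parse one JSON record per line (a common raw-source format)."""
    return [json.loads(line) for line in raw_lines]

def transform(records):
    """Transform: normalize field names and drop records missing a user id."""
    return [
        {"user_id": r["id"], "event": r.get("event", "unknown").lower()}
        for r in records
        if r.get("id") is not None
    ]

def load(records, sink):
    """Load: append records to the destination (here, a plain list)."""
    sink.extend(records)
    return len(records)
```

In a real pipeline each stage would be a distributed job (e.g., a Spark read, a DataFrame transformation, a Delta table write), but the extract/transform/load separation of concerns is the same.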

Qualifications:

● Bachelor's degree in Engineering, Computer Science, or relevant field.

● 10+ years of relevant and recent experience in a Data Engineer role.

● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.

● Comfortable working in a Linux shell environment and writing scripts as needed.

● Comfortable working in an Agile environment

● Machine Learning knowledge is a plus.

● Must be capable of working independently and delivering stable, efficient and reliable software.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment.

REPORTING: This position will report to our CEO or any other Lead as assigned by Management.

EMPLOYMENT TYPE: Full-Time, Permanent

LOCATION: Remote (Pan India)

SHIFT TIMINGS: 2.00 pm - 11:00 pm IST

WHO WE ARE:

SalesIntel is the top revenue intelligence platform on the market. Our combination of automation and researchers allows us to reach 95% data accuracy for all our published contact data, while continuing to scale up our number of contacts. We currently have more than 5 million human-verified contacts, another 70 million plus machine processed contacts, and the highest number of direct dial contacts in the industry. We guarantee our accuracy with our well-trained research team that re-verifies every direct dial number, email, and contact every 90 days. With the most comprehensive contact and company data and our excellent customer service, SalesIntel has the best B2B data available. For more information, please visit – www.salesintel.io

WHAT WE OFFER: SalesIntel’s workplace is all about diversity. Different countries and cultures are represented in our workforce. We are growing at a fast pace and our work environment is constantly evolving with changing times. We motivate our team to better themselves by offering all the good stuff you’d expect like Holidays, Paid Leaves, Bonuses, Incentives, Medical Policy and company paid Training Programs.

SalesIntel is an Equal Opportunity Employer. We prohibit discrimination and harassment of any type and offer equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Read more
Innominds

at Innominds

1 video
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
10yrs+
Upto ₹36L / yr (Varies)
skill iconPython
Spark
Apache Airflow
Windows Azure
Big Data

We Help Our Customers Build Great Products.


Innominds is a trusted innovation acceleration partner focused on designing, developing and delivering technology solutions for specialized practices in Big Data & Analytics, Connected Devices, and Security, helping enterprises with their digital transformation initiatives. We built these practices on top of our foundational services of innovation, like UX/UI, application development and testing.


Over 1,000 people strong, we are a pioneer at the forefront of technology and engineering R&D, priding ourselves on being forward thinkers and anticipating market changes to help our clients stay relevant and competitive.


About the Role:


We are looking for a seasoned Data Engineering Lead to help shape and evolve our data platform. This role is both strategic and hands-on—requiring leadership of a team of data engineers while actively contributing to the design, development, and maintenance of robust data solutions.


Key Responsibilities:

  • Lead and mentor a team of Data Engineers to deliver scalable and reliable data solutions
  • Own the end-to-end architecture and development of data pipelines, data lakes, and warehouses
  • Design and implement batch data processing frameworks to support large-scale analytics
  • Define and enforce best practices in data modeling, data quality, and system performance
  • Collaborate with cross-functional teams to understand data requirements and deliver insights
  • Ensure smooth and secure data ingestion, transformation, and export processes
  • Stay current with industry trends and apply them to drive improvements in the platform

Requirements

  • Strong programming skills in Python
  • Deep expertise in Apache Spark, Big Data ecosystems, and Airflow
  • Hands-on experience with Azure cloud services and data engineering tools
  • Strong understanding of data architecture, data modeling, and data governance practices
  • Proven ability to design scalable data systems and enterprise-level solutions
  • Strong analytical mindset and problem-solving skills


For our company to deliver world-class products and services, our business depends on recruiting and hiring the best and the brightest from around the globe. We are looking for the engineers, designers and creative problem solvers that stand out from the rest of the crowd but are also humble enough to continue learning and growing, are eager to tackle complex problems and are able to keep up with the demanding pace of our business. We are looking for YOU!


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Tony Tom
Posted by Tony Tom
Bengaluru (Bangalore)
5 - 7 yrs
₹1L - ₹25L / yr
skill iconJava
Cassandra

· 5+ years of experience in software development using Java.

· Proficiency in Spring Boot and Spring Batch.

· Experience with microservices architecture.

· Hands-on experience with Cassandra or similar NoSQL databases.

· Solid understanding of cloud platforms (AWS, GCP, Azure, etc.).

· Familiarity with Docker and Kubernetes.

· Experience with CI/CD tools such as Jenkins.

· Strong problem-solving skills and attention to detail.

· Excellent communication and teamwork skills.

Important consideration:

· Core Java - 4 to 6 Yrs

· Spring and Spring Boot, Spring MVC, Spring Data, Spring Security - 4 to 6 Yrs

· DevOps (Jenkins, Junit, sonarQube, Maven) - 1 to 2 Yrs

· MongoDB, NOSql, Couch DB, Cassandra - 1 to 2 Yrs

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
11 - 18 yrs
₹70L - ₹80L / yr
skill iconJava
skill iconGo Programming (Golang)
skill iconNodeJS (Node.js)
skill iconPython
Apache Kafka
+7 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

Read more
Remote only
7 - 12 yrs
₹25L - ₹40L / yr
Spark
skill iconJava
Apache Kafka
Big Data
Apache Hive
+5 more

Job Title: Big Data Engineer (Java Spark Developer – JAVA SPARK EXP IS MUST)

Location: Chennai, Hyderabad, Pune, Bangalore (Bengaluru) / NCR Delhi

Client: Premium Tier 1 Company

Payroll: Direct Client

Employment Type: Full time / Perm

Experience: 7+ years

 

Job Description:

We are looking for skilled Big Data Engineers with 7+ years of experience using Java Spark on Big Data / legacy platforms, who can join immediately. The desired candidate should have experience in the design, development, and optimization of real-time and batch data pipelines in an enterprise-scale Big Data environment. You will build scalable, high-performance data processing solutions, integrate real-time data streams, and help build a reliable data platform. Strong troubleshooting, performance tuning, and collaboration skills are key for this role.

 

Key Responsibilities:

·      Develop data pipelines using Java Spark and Kafka.

·      Optimize and maintain real-time data pipelines and messaging systems.

·      Collaborate with cross-functional teams to deliver scalable data solutions.

·      Troubleshoot and resolve issues in Java Spark and Kafka applications.

 

Qualifications:

·      Experience in Java Spark is must

·      Knowledge and hands-on experience using distributed computing, real-time data streaming, and big data technologies

·      Strong problem-solving and performance optimization skills

·      Looking for immediate joiners

 

If interested, please share your resume along with the following details

1)    Notice Period

2)    Current CTC

3)    Expected CTC

4)    Have Experience in Java Spark - Y / N (this is must)

5)    Any offers in hand

 

Thanks & Regards,

LION & ELEPHANTS CONSULTANCY PVT LTD TEAM

SINGAPORE | INDIA

 

Read more
ZyBiSys

at ZyBiSys

4 candid answers
8 recruiters
Subash S
Posted by Subash S
Bengaluru (Bangalore)
10 - 15 yrs
₹20L - ₹30L / yr
skill iconRedis
skill iconGo Programming (Golang)
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconMongoDB
+16 more

Job Title : Principal Software Architect – AI/ML & Product Innovation

Location : Bangalore, Karnataka & Trichy, Tamil Nadu, India (No remote work available)

Company : Zybisys Consulting Services LLP

Reports To : CEO

Job Type : Full-Time


Experience Required: Minimum of 10+ years in software development, with at least 5 years in a software architect role.

 

About Us:

At Zybisys, we’re not just another cloud hosting and software development company—we’re all about pushing boundaries in the FinTech world. We don’t just solve problems; we rethink how businesses operate, making things smoother, smarter, and more efficient. Our tech helps FinTech companies stay ahead in the digital game with confidence and flexibility.

Innovation is in our DNA, and we’re always on the lookout for bold thinkers who can tackle big challenges with creativity and precision. At Zybisys, we believe in growing together, nurturing talent, and building a future where technology transforms the way FinTech works.


Role Overview:

We're looking for a Principal Software Architect who’s passionate about AI/ML and product innovation. In this role, you’ll be at the forefront of designing and building smart, AI-driven solutions that tackle complex business challenges. You’ll work closely with teams across product, development, and research to shape our tech strategy and ensure everything aligns with our next-gen platform. If you love pushing the boundaries of technology and driving real innovation, this is the role for you!

 

Key Responsibilities:

  • Architect & Design: Architect, design, and develop large-scale distributed cloud services and solutions with a focus on AI/ML, high availability, scalability, and robustness. Design scalable and efficient solutions, considering factors such as performance, security, and cost-effectiveness.
  • AI/ML Integration: Spearhead the application of AI/ML in solving business problems at scale. Stay at the forefront of AI/ML technologies, trends, and industry standards to provide cutting-edge solutions.
  • Product Roadmap: Work closely with Product Management to set the technical product roadmap, definition, and direction. Analyze the current technology landscape and identify opportunities for improvement and innovation.
  • Technology Evaluation: Evaluate different programming languages and frameworks to determine the most suitable ones for project requirements.
  • Component Design: Develop and oversee the creation of modular software components that can be reused and adapted across different projects.
  • UI/UX Collaboration: Work closely with design teams to craft intuitive and engaging user interfaces and experiences.
  • Project Oversight: Oversee projects from initiation to completion, creating project plans, defining objectives, and managing resources effectively.
  • Team Mentorship: Guide and inspire a team of engineers and designers, fostering a culture of continuous learning and improvement.
  • Innovation & Ideation: Champion the generation of new ideas for product features, staying ahead of industry trends and customer needs.
  • Research & Development: Leading initiatives that explore new technologies or methodologies.
  • Strategic Planning: Participating in high-level decisions that shape the direction of products and services.
  • Industry Influence: Representing the company in industry forums or partnerships with academic institutions.
  • Open-Source Community Handling: Manage and contribute to the open-source community, fostering collaboration, sharing knowledge, and ensuring adherence to open-source best practices.

 

Qualifications:

  • Experience: Minimum of 10 years in software development, with at least 5 years in a software architect role.
  • Technical Expertise: Proficient in software architecture, AI/ML technologies, and UI/UX principles.
  • Leadership Skills: Proven track record of mentoring teams and driving cross-functional collaboration.
  • Innovative Mindset: Demonstrated ability to think creatively and introduce groundbreaking ideas.
  • Communication: Excellent verbal and written skills, with the ability to engage effectively with both technical and non-technical stakeholders.
  • Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

 

What We Offer:

  • A dynamic work environment where your ideas truly matter.
  • Opportunities to attend and speak at industry conferences.
  • Collaboration with cutting-edge technology and tools.
  • A culture that values innovation, autonomy, and personal growth.


Read more
Rigel Networks Pvt Ltd
Pune
5 - 9 yrs
₹8L - ₹15L / yr
Big data Engineer
Software deployment
Release Management
Software release life cycle
Release engineering
+6 more

Dear Candidate,

We are urgently looking for a Release / Big Data Engineer for our Pune location.


Experience : 5-8 yrs

Location : Pune

Skills: Big Data Engineer, Release Engineer, DevOps, AWS/Azure/GCP cloud experience


JD:

  • Oversee the end-to-end release lifecycle, from planning to post-production monitoring. Coordinate with cross-functional teams (DBA, BizOps, DevOps, DNS).
  • Partner with development teams to resolve technical challenges in deployment and automation test runs
  • Work with shared services DBA teams for schema-based multi-tenancy designs and smooth migrations.
  • Drive automation for batch deployments and DR exercises, including YAML-based microservice deployment using shell/Python/Go.
  • Provide oversight for Big Data toolsets for deployment (e.g., Spark, Hive, HBase) in private cloud and public cloud CDP environments
  • Ensure high-quality releases with a focus on stability and long-term performance.
  • Run automation batch scripts, debug deployment and functional issues, and work with dev leads to resolve release-cycle issues.
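The YAML-based, per-tenant deployment automation mentioned above might look like the following templating sketch. The service names, registry URL, and manifest fields are all hypothetical; real pipelines would typically rely on Helm, Kustomize, or similar tooling:

```python
from string import Template

# Hypothetical Kubernetes-style Deployment manifest skeleton.
MANIFEST = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $service-$tenant
spec:
  replicas: $replicas
  template:
    spec:
      containers:
        - name: $service
          image: registry.example.com/$service:$tag
""")

def render_manifest(service, tenant, tag, replicas=2):
    """Fill in the per-tenant deployment manifest for a batch rollout."""
    return MANIFEST.substitute(service=service, tenant=tenant,
                               tag=tag, replicas=replicas)
```

A batch-deployment script would loop over tenants, render a manifest for each, and hand the result to `kubectl apply`; schema-based multi-tenancy makes the tenant name the only variable per rollout.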



Regards,

Minakshi Soni

Executive- Talent Acquisition

Rigel Networks

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Pune
5 - 10 yrs
Best in industry
PythonAnywhere
skill iconAmazon Web Services (AWS)
Big Data

At least 5 years of experience in testing and developing automation tests.

A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.

Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.

Familiarity with Playwright or other browser application testing frameworks is a significant advantage.

Proficiency in object-oriented programming and principles is required.

Extensive knowledge of AWS services is essential.

Strong expertise in REST API testing and SQL is required.

A solid understanding of testing and development life cycle methodologies is necessary.

Knowledge of the financial industry and trading systems is a plus

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Gurugram
5 - 12 yrs
₹5L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+17 more

Job Title : Senior AWS Data Engineer

Experience : 5+ Years

Location : Gurugram

Employment Type : Full-Time

Job Summary :

Seeking a Senior AWS Data Engineer with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.

Key Responsibilities :

  • Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
  • Maintain data lakes & warehouses for analytics.
  • Ensure data integrity through quality checks.
  • Collaborate with data scientists & engineers to deliver solutions.

Qualifications :

  • 5+ years in Data Engineering.
  • Expertise in AWS services, SQL, Python, Spark, Kafka.
  • Experience with CI/CD, DevOps practices.
  • Strong problem-solving skills.

Preferred Skills :

  • Experience with Snowflake, Databricks.
  • Knowledge of BI tools (Tableau, Power BI).
  • Healthcare/Insurance domain experience is a plus.
Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Gurugram
7 - 15 yrs
₹5L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+20 more

Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)

Location : Gurugram

Employment Type : Full-Time


Job Summary :

Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.


Key Responsibilities :

  • Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
  • Maintain data lakes & warehouses for analytics.
  • Ensure data integrity through quality checks.
  • Collaborate with data scientists & engineers to deliver solutions.

Qualifications :

  • 7+ Years in Data Engineering.
  • Expertise in AWS services, SQL, Python, Spark, Kafka.
  • Experience with CI/CD, DevOps practices.
  • Strong problem-solving skills.

Preferred Skills :

  • Experience with Snowflake, Databricks.
  • Knowledge of BI tools (Tableau, Power BI).
  • Healthcare/Insurance domain experience is a plus.


Read more
Mphasis
Agency job
via Rigel Networks Pvt Ltd by Minakshi Soni
Bengaluru (Bangalore), Hyderabad
6 - 11 yrs
₹10L - ₹15L / yr
Software Testing (QA)
Test Automation (QA)
API Testing
UFT
skill iconJava
+11 more

Dear Candidate,

We are Urgently hiring QA Automation Engineers and Test leads At Hyderabad and Bangalore

Exp: 6-10 yrs

Locations: Hyderabad ,Bangalore


JD:

We are hiring Automation Testers with 6-10 years of automation testing experience using QA automation tools such as Java, UFT, Selenium, API Testing, ETL, and others.

 

Must Haves:

·        Experience in Financial Domain is a must

·        Extensive hands-on experience designing, implementing, and maintaining automation frameworks using Java, UFT, ETL, and Selenium tools and automation concepts.

·        Experience with AWS concepts and framework design/testing.

·        Experience in Data Analysis, Data Validation, Data Cleansing, Data Verification and identifying data mismatch.

·        Experience with Databricks, Python, Spark, Hive, Airflow, etc.

·        Experience in validating and analyzing Kubernetes log files.

·        API testing experience

·        Backend testing skills with ability to write SQL queries in Databricks and in Oracle databases

·        Experience in working with globally distributed Agile project teams

·        Ability to work in a fast-paced, globally structured and team-based environment, as well as independently

·        Experience in test management tools like Jira

·        Good written and verbal communication skills

Good To have:

  • Business and finance knowledge desirable
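The API-testing skill listed above can be illustrated with a stdlib-only sketch. The endpoint path and payload are hypothetical; a real suite would drive an actual HTTP client (e.g., requests or REST-assured) against a running service:

```python
import json
import unittest
from unittest import mock

def get_account(client, account_id):
    """Fetch an account via an injected HTTP client and parse the JSON body.
    The client is any object exposing get(path) -> (status, body)."""
    status, body = client.get(f"/accounts/{account_id}")
    assert status == 200, f"unexpected status {status}"
    return json.loads(body)

class AccountApiTest(unittest.TestCase):
    """Run with `python -m unittest`; uses a mock in place of a live API."""

    def test_returns_parsed_account(self):
        client = mock.Mock()
        client.get.return_value = (200, '{"id": "A1", "balance": 42}')
        account = get_account(client, "A1")
        self.assertEqual(account["balance"], 42)
        client.get.assert_called_once_with("/accounts/A1")
```

Injecting the client rather than hard-coding it is what makes the call path mockable, so the same test shape works against a stub locally and a real endpoint in integration runs.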

 

Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Worldwide Locations: USA | HK | IN 

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Pune
5 - 10 yrs
Best in industry
Object Oriented Programming (OOPs)
Amazon Redshift
DSA
Big Data
Hadoop
+3 more

Job Summary:

We are seeking a skilled Senior Data Engineer with expertise in application programming, big data technologies, and cloud services. This role involves solving complex problems, designing scalable systems, and working with advanced technologies to deliver innovative solutions.

Key Responsibilities:

  • Develop and maintain scalable applications using OOP principles, data structures, and problem-solving skills.
  • Build robust solutions using Java, Python, or Scala.
  • Work with big data technologies like Apache Spark for large-scale data processing.
  • Utilize AWS services, especially Amazon Redshift, for cloud-based solutions.
  • Manage databases including SQL, NoSQL (e.g., MongoDB, Cassandra), with Snowflake as a plus.

Qualifications:

  • 5+ years of experience in software development.
  • Strong skills in OOPS, data structures, and problem-solving.
  • Proficiency in Java, Python, or Scala.
  • Experience with Spark, AWS (Redshift mandatory), and databases (SQL/NoSQL).
  • Snowflake experience is good to have.
Read more
DocNexus
Mahek Chhatrapati
Posted by Mahek Chhatrapati
Remote, Hyderabad, Pune
2 - 10 yrs
₹10L - ₹25L / yr
SQL Query Analyzer
Big Data
Customer Support

At DocNexus, we’re revolutionizing how life sciences companies search and generate insights. Our search platform unlocks powerful insights, and we're seeking a Customer Success Team Member with strong technical skills to help our customers harness its full potential.

What you’ll do:

  • Customer Support: Troubleshoot and resolve customer queries, particularly around referral reports, data anomalies, and data generation using our platform.
  • Data Queries (BigQuery/ClickHouse): Respond to customer requests for custom data queries, working with large datasets in BigQuery and ClickHouse to deliver precise insights.
  • Onboarding & Training: Lead onboarding for new customers, guide teams on platform usage, and manage access requests.
  • Listen & Improve: Collect and act on customer feedback to continuously improve the platform, collaborating with the product team to enhance functionality.
  • Technical Documentation: Assist with technical resources and help create training materials for both internal and customer use.

What you bring:

  • Strong Technical Skills: Proficient in querying with BigQuery and ClickHouse. Comfortable working with complex data, writing custom queries, and resolving technical issues.
  • Customer-Focused: Excellent communication skills, able to translate technical data insights to non-technical users and provide solutions clearly and effectively.
  • Problem-Solver: Strong analytical skills and a proactive mindset to address customer needs and overcome challenges in a fast-paced environment.
  • Team Player: Work collaboratively with both internal teams and customers to ensure success.

If you're passionate about data, thrive in a technical environment, and are excited to support life sciences teams in their data-driven decision-making, we'd love to hear from you!

Read more
Cornertree

at Cornertree

1 recruiter
Deepesh Shrimal
Posted by Deepesh Shrimal
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 10 yrs
₹15L - ₹30L / yr
Cassandra
PySpark
Data engineering
Big Data
Hadoop
+3 more

Skills:

Experience with Cassandra, including installing, configuring, and monitoring a Cassandra cluster.

Experience with Cassandra data modeling and CQL scripting. Experience with DataStax Enterprise Graph.

Experience with both Windows and Linux operating systems. Knowledge of the Microsoft .NET Framework (C#, .NET Core).

Ability to perform effectively in a team-oriented environment

Read more
Affine
Rishika Chadha
Posted by Rishika Chadha
Remote only
5 - 8 yrs
Best in industry
skill iconScala
ETL
Apache Kafka
Object Oriented Programming (OOPs)
CI/CD
+4 more

Role Objective:


The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, and for optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that an optimal data delivery architecture is maintained consistently across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles & Responsibilities:

  • Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
  • Proficient in Spark – including RDD and DataFrame core functions, troubleshooting, and performance tuning.
  • SFDC (data modelling) experience would be given preference.
  • Good understanding of object-oriented concepts and hands-on experience with Scala, with excellent programming logic and technique.
  • Good grasp of functional programming and OOP concepts in Scala.
  • Good experience with SQL – should be able to write complex queries.
  • Manage the team of Associates and Senior Associates, ensuring utilization is maintained across the project.
  • Able to mentor new members onboarding to the project.
  • Understand client requirements and be able to design, develop from scratch, and deliver.
  • AWS cloud experience would be preferable.
  • Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services – DynamoDB, Redshift, Kinesis, Lambda, S3, etc. (preferred).
  • Hands-on experience using AWS management tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred).
  • Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred).
  • Lead client calls to flag delays, blockers, and escalations, and to collate all requirements.
  • Manage project timing and client expectations, and meet deadlines.
  • Should have played project and team management roles.
  • Facilitate regular meetings within the team.
  • Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
  • Optimization, maintenance, and support of pipelines.
  • Strong analytical and logical skills.
  • Ability to comfortably tackle new challenges and learn.
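The RDD core functions mentioned above follow the classic flatMap / map / reduceByKey pipeline. Since the role uses Scala and Spark, the pure-Python sketch below is only a local, single-process stand-in for those same stages, mimicking `sc.textFile(...).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)` without a cluster:

```python
from collections import defaultdict
from functools import reduce

# Sample input standing in for a distributed text dataset.
lines = ["spark streaming and spark sql", "spark rdd and dataframes"]

# flatMap: each line expands into many words.
words = [w for line in lines for w in line.split()]

# map: each word becomes a (word, 1) pair.
pairs = [(w, 1) for w in words]

# reduceByKey: group the pairs by key (the "shuffle"), then reduce each group.
grouped: dict[str, list[int]] = defaultdict(list)
for key, value in pairs:
    grouped[key].append(value)
counts = {key: reduce(lambda a, b: a + b, values) for key, values in grouped.items()}

print(counts["spark"])  # 3
```

On a real cluster the shuffle step moves pairs between executors, which is where most of the performance-tuning work the role describes (partitioning, skew, serialization) actually happens.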
Read more