50+ Data engineering Jobs in India
Apply to 50+ Data engineering Jobs on CutShort.io. Find your next job, effortlessly. Browse Data engineering Jobs and apply today!

Notice Period - 0-15 days Max
Apply only if you are currently based in Karnataka
F2F interview
Interview - 4 rounds
Job Title: AI Specialist
Company Overview: We are the Technology Center of Excellence for Long Arc Capital,
which provides growth capital to businesses with a sustainable competitive advantage and
a strong management team with whom we can partner to build a category leader. We focus
on North American and European companies where technology is transforming traditional
business models in the Financial Services, Business Services, Technology, Media and
Telecommunications sectors.
As part of our mission to leverage AI for business innovation, we are establishing an AI Center of Excellence (COE) to
develop Generative AI (GenAI) and Agentic AI solutions that enhance decision-making,
automation, and user experiences.
Job Overview: We are seeking dynamic and talented individuals to join our AI COE. This
team will focus on developing advanced AI models, integrating them into our cloud-based
platform, and delivering impactful solutions that drive efficiency, innovation, and customer
value.
Key Responsibilities:
• As a Full Stack AI Engineer, research, design, and develop AI solutions for text,
image, audio, and video generation.
• Build and deploy Agentic AI systems for autonomous decision-making that improves
business outcomes and enhances associate productivity.
• Work with domain experts to design and fine-tune AI solutions tailored to portfolio-specific challenges.
• Partner with data engineers across portfolio companies to –
o Preprocess large datasets and ensure high-quality input for training AI
models.
o Develop scalable and efficient AI pipelines using frameworks like
TensorFlow, PyTorch, and Hugging Face.
• Implement MLOps best practices for AI model deployment, versioning, and
monitoring using tools like MLflow and Kubernetes (a minimal MLflow sketch follows this list).
• Ensure AI solutions adhere to ethical standards, comply with regulations (e.g.,
GDPR, CCPA), and mitigate biases.
• Design intuitive and user-friendly interfaces for AI-driven applications, collaborating
with UX designers and frontend developers.
• Stay up to date with the latest AI research and tools and evaluate their applicability
to our business needs.
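For illustration, here is a minimal sketch of the kind of MLflow-based experiment tracking and model registration referenced above; the model, metric, and registry names are hypothetical, not part of this role's stack.

```python
# Minimal MLflow tracking sketch (hypothetical model, metric, and registry names).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run(run_name="genai-classifier-baseline"):
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Log and register the model so it can be versioned and deployed later.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="genai-classifier")
```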
Key Qualifications:
Technical Expertise:
• Proficiency in full stack application development (specifically using Angular, React).
• Expertise in backend technologies (Django, Flask) and cloud platforms (AWS
SageMaker/Azure AI Studio).
• Proficiency in deep learning frameworks (TensorFlow, PyTorch, JAX).
• Proficiency with Large Language Models (LLMs) and generative AI tools (e.g., OpenAI
APIs, LangChain, Stable Diffusion).
• Solid understanding of data engineering workflows, including ETL processes and
distributed computing tools (Apache Spark, Kafka).
• Experience with data pipelines, big data processing, and database management
(SQL, NoSQL).
• Knowledge of containerization (Docker) and orchestration (Kubernetes) for scalable
AI deployment.
• Familiarity with CI/CD pipelines and automation tools (Terraform, Jenkins).
• Good understanding of AI ethics, bias mitigation, and compliance standards.
• Excellent problem-solving abilities and innovative thinking.
• Strong collaboration and communication skills, with the ability to work in cross-functional teams.
• Proven ability to work in a fast-paced and dynamic environment.
Preferred Qualifications:
• Advanced studies in Artificial Intelligence or a related field.
• Experience with reinforcement learning, multi-agent systems, or autonomous decision-making.
1. Design, develop, and maintain data pipelines using Azure Data Factory
2. Create and manage data models in PostgreSQL, ensuring efficient data storage and retrieval.
3. Optimize query and database performance in PostgreSQL, including indexing, query tuning, and performance monitoring (a minimal tuning sketch follows this list).
4. Strong knowledge of data modeling and of mapping data from various sources to the data model.
5. Develop and maintain logging mechanisms in Azure Data Factory to monitor and troubleshoot data pipelines.
6. Strong knowledge of Azure Key Vault, Azure Data Lake, and PostgreSQL.
7. Manage file handling within Azure Data Factory, including reading, writing, and transforming data from various file formats.
8. Strong SQL query skills with the ability to handle multiple scenarios and optimize query performance.
9. Excellent problem-solving skills and ability to handle complex data scenarios.
10. Collaborate with business stakeholders, data architects, and product owners to understand and meet their data requirements.
11. Ensure data quality and integrity through validation and quality checks.
12. Power BI knowledge, including creating and configuring semantic models and reports, is preferred.
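For illustration, a minimal psycopg2 sketch of the indexing and query-plan inspection referenced in item 3; the connection settings, table, and column names are hypothetical.

```python
# Minimal PostgreSQL tuning sketch (hypothetical DSN, table, and column names).
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl_user host=localhost")
with conn, conn.cursor() as cur:
    # Index a frequently filtered column to avoid sequential scans.
    cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer_id "
                "ON orders (customer_id);")
    # Inspect the query plan to confirm the index is used.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s;",
                (12345,))
    for row in cur.fetchall():
        print(row[0])
conn.close()
```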
Duties
- Perform site surveys and analysis for all noise and vibration requirements
- Develop acoustic system design concepts and reports to achieve project and product performance requirements
- Troubleshoot noise issues and provide solutions
- Collaborate with the sales team to provide acoustic solutions through site visits and measurements
- Study project specifications and propose suitable products and solutions
- Prepare project BOQs and technical submittals
- Develop and improve the company's acoustic products
Experience
- Good knowledge and understanding of acoustic testing and measurement techniques
- Good experience with acoustic software
- Good experience with AutoCAD and other modelling software

Key Responsibilities
● Design & Development
○ Architect and implement data ingestion pipelines using Microsoft Fabric Data Factory (Dataflows) and OneLake sources
○ Build and optimize Lakehouse and Warehouse solutions leveraging Delta Lake, Spark Notebooks, and SQL Endpoints
○ Define and enforce Medallion (Bronze–Silver–Gold) architecture patterns for raw, enriched, and curated datasets
● Data Modeling & Transformation
○ Develop scalable transformation logic in Spark (PySpark/Scala) and Fabric SQL to support reporting and analytics
○ Implement slowly changing dimensions (SCD Type 2), change-data-capture (CDC) feeds, and time-windowed aggregations (a minimal SCD Type 2 sketch follows this list)
● Performance Tuning & Optimization
○ Monitor and optimize data pipelines for throughput, cost efficiency, and reliability
○ Apply partitioning, indexing, caching, and parallelism best practices in Fabric Lakehouses and Warehouse compute
● Data Quality & Governance
○ Integrate Microsoft Purview for metadata cataloging, lineage tracking, and data discovery
○ Develop automated quality checks, anomaly detection rules, and alerts for data reliability
● CI/CD & Automation
○ Implement infrastructure-as-code (ARM templates or Terraform) for Fabric workspaces, pipelines, and artifacts
○ Set up Git-based version control, CI/CD pipelines (e.g. Azure DevOps) for seamless deployment across environments
● Collaboration & Support
○ Partner with data scientists, BI developers, and business analysts to understand requirements and deliver data solutions
○ Provide production support, troubleshoot pipeline failures, and drive root-cause analysis
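For illustration, a minimal PySpark/Delta Lake sketch of the SCD Type 2 pattern referenced above, assuming a Lakehouse Delta table; the paths, keys, and column names are hypothetical, and the incoming batch is assumed to contain only changed rows that already carry the dimension columns.

```python
# Minimal SCD Type 2 sketch on a Delta table (hypothetical paths, keys, and columns).
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

changes = spark.read.parquet("/lake/silver/customer_changes")   # incoming CDC batch
dim = DeltaTable.forPath(spark, "/lake/gold/dim_customer")

# Step 1: close out the currently active row for every changed business key.
(dim.alias("t")
    .merge(changes.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(set={"is_current": "false",
                            "end_date": "s.effective_date"})
    .execute())

# Step 2: append the new row versions as the current records.
(changes
    .withColumn("is_current", F.lit(True))
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").save("/lake/gold/dim_customer"))
```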
Required Qualifications
● 5+ years of professional experience in data engineering roles, with at least 1 year working hands-on in Microsoft Fabric
● Strong proficiency in:
○ Languages: SQL (T-SQL), Python, and/or Scala
○ Fabric Components: Data Factory Dataflows, OneLake, Spark Notebooks, Lakehouse, Warehouse
○ Data Storage: Delta Lake, Parquet, CSV, JSON formats
● Deep understanding of data modeling principles (star schemas, snowflake schemas, normalized vs. denormalized)
● Experience with CI/CD and infrastructure-as-code for data platforms (ARM templates, Terraform, Git)
● Familiarity with data governance tools, especially Microsoft Purview
● Excellent problem-solving skills and ability to communicate complex technical concepts clearly
NOTE: Candidates should be willing to attend one technical round face-to-face (F2F) at any of the branch locations (Pune/Mumbai/Bangalore).
- 5+ years of experience in SQL and NoSQL database development and optimization.
- Strong hands-on experience with Amazon Redshift, MySQL, MongoDB, and Flyway.
- In-depth understanding of data warehousing principles and performance tuning techniques.
- Strong hands-on experience in building complex aggregation pipelines in NoSQL databases such as MongoDB.
- Proficient in Python or Scala for data processing and automation.
- 3+ years of experience working with AWS-managed database services.
- 3+ years of experience with Power BI or similar BI/reporting platforms.
Job Title : Data Engineer – GCP + Spark + DBT
Location : Bengaluru (On-site at Client Location | 3 Days WFO)
Experience : 8 to 12 Years
Level : Associate Architect
Type : Full-time
Job Overview :
We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.
Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.
Key Responsibilities :
- Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark (a minimal PySpark-to-BigQuery sketch follows this list).
- Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
- Implement and maintain CI/CD for data engineering projects with Git-based version control.
- Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
- Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
- Participate in Agile sprints, backlog grooming, and Jira-based project tracking.
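For illustration, a minimal PySpark sketch reading from and writing to BigQuery, assuming the spark-bigquery connector is available (e.g., on a Dataproc cluster); the project, dataset, table, and bucket names are hypothetical.

```python
# Minimal PySpark + BigQuery sketch (hypothetical project, dataset, table, and bucket);
# assumes the spark-bigquery connector is on the cluster classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("udp-orders-aggregation").getOrCreate()

orders = (spark.read.format("bigquery")
          .option("table", "my-project.raw.orders")
          .load())

daily_revenue = (orders
                 .groupBy(F.to_date("order_ts").alias("order_date"))
                 .agg(F.sum("amount").alias("revenue")))

(daily_revenue.write.format("bigquery")
 .option("table", "my-project.curated.daily_revenue")
 .option("temporaryGcsBucket", "my-temp-bucket")   # staging bucket for indirect writes
 .mode("overwrite")
 .save())
```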
Must-Have Skills :
- Strong experience with DBT, Google Dataform, and BigQuery
- Hands-on expertise with PySpark/Spark SQL
- Proficient in GCP for data engineering workflows
- Solid knowledge of SQL optimization, Git, and CI/CD pipelines
- Agile team experience and strong problem-solving abilities
Nice-to-Have Skills :
- Familiarity with Databricks, Delta Lake, or Kafka
- Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
- Knowledge of MDM patterns, Terraform, or IaC is a plus


Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics
Job Title : Senior Data Engineer
Experience : 6 to 10 Years
Location : Gurgaon (Hybrid – 3 days office / 2 days WFH)
Notice Period : Immediate to 30 days (Buyout option available)
About the Role :
We are looking for an experienced Senior Data Engineer to join our Digital IT team in Gurgaon.
This role involves building scalable data pipelines, managing data architecture, and ensuring smooth data flow across the organization while maintaining high standards of security and compliance.
Mandatory Skills :
Azure Data Factory (ADF), Azure Cloud Services, SQL, Data Modelling, CI/CD tools, Git, Data Governance, RDBMS & NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch), Data Lake migration.
Key Responsibilities :
- Design and develop secure, scalable end-to-end data pipelines using Azure Data Factory (ADF) and Azure services (a minimal ADF SDK sketch follows this list).
- Build and optimize data architectures (including Medallion Architecture).
- Collaborate with cross-functional teams on cybersecurity, data privacy (e.g., GDPR), and governance.
- Manage structured/unstructured data migration to Data Lake.
- Ensure CI/CD integration for data workflows and version control using Git.
- Identify and integrate data sources (internal/external) in line with business needs.
- Proactively highlight gaps and risks related to data compliance and integrity.
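For illustration, a minimal sketch of triggering and polling an ADF pipeline run from Python with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and parameter names are hypothetical.

```python
# Minimal ADF trigger-and-poll sketch (hypothetical subscription, factory, and pipeline names).
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-digital-it",
    pipeline_name="pl_ingest_sales",
    parameters={"load_date": "2024-01-01"},
)

# Poll the run status until it completes.
while True:
    status = client.pipeline_runs.get("rg-data-platform", "adf-digital-it", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```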
Required Skills :
- Azure Data Factory (ADF) – Mandatory
- Strong SQL and Data Modelling expertise.
- Hands-on with Azure Cloud Services and data architecture.
- Experience with CI/CD tools and version control (Git).
- Good understanding of Data Governance practices.
- Exposure to ETL/ELT pipelines and Data Lake migration.
- Working knowledge of RDBMS and NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch).
- Understanding of RESTful APIs, deployment on cloud/on-prem infrastructure.
- Strong problem-solving, communication, and collaboration skills.
Additional Info :
- Work Mode : Hybrid (No remote); relocation to Gurgaon required for non-NCR candidates.
- Communication : Above-average verbal and written English skills.
Perks & Benefits :
- 5 Days work week
- Global exposure and leadership collaboration.
- Health insurance, employee-friendly policies, training and development.

Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python (a minimal Structured Streaming sketch follows this list).
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
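For illustration, a minimal Spark Structured Streaming sketch reading from Kafka into a Delta table, as one might run in a Databricks notebook; the broker address, topic, schema, and storage paths are hypothetical.

```python
# Minimal Kafka -> Delta streaming sketch (hypothetical brokers, topic, schema, and paths).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("patient_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "hl7-events")
       .option("startingOffsets", "latest")
       .load())

# Parse the Kafka message value into typed columns.
events = (raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream.format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/hl7-events")
         .outputMode("append")
         .start("/mnt/bronze/hl7_events"))
# query.awaitTermination() would block in a standalone job.
```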
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.
Strong Data Architect, Lead Data Engineer, Engineering Manager / Director Profile
Mandatory (Experience 1) - Must have 10+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role
Mandatory (Experience 2) - Must have 7+ YOE in hands-on Tech development with Java (Highly preferred) or Python, Node.JS, GoLang
Mandatory (Experience 3) - Must have recent 4+ YOE with high-growth Product startups, and should have implemented Data Engineering systems from an early stage in the Company
Mandatory (Experience 4) - Must have strong experience in large data technologies, tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto etc.
Mandatory (Experience 5) - Strong expertise in HLD and LLD, to design scalable, maintainable data architectures.
Mandatory (Team Management) - Must have managed a team of at least 5 Data Engineers (a leadership role should be evident in the CV)
Mandatory (Education) - Must be from Tier-1 colleges, IIT preferred
Mandatory (Company) - B2B Product Companies with High data-traffic
Preferred Companies
MoEngage, Whatfix, Netcore Cloud, Clevertap, Hevo Data, Snowflake, Chargebee, Fractor.ai, Databricks, Dataweave, Wingman, Postman, Zoho, HighRadius, Freshworks, Mindtickle

🛠️ Key Responsibilities
- Design, build, and maintain scalable data pipelines using Python and Apache Spark (PySpark or Scala APIs)
- Develop and optimize ETL processes for batch and real-time data ingestion
- Collaborate with data scientists, analysts, and DevOps teams to support data-driven solutions
- Ensure data quality, integrity, and governance across all stages of the data lifecycle
- Implement data validation, monitoring, and alerting mechanisms for production pipelines
- Work with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Kafka, and Delta Lake (a minimal Airflow DAG sketch follows this list)
- Participate in code reviews, performance tuning, and documentation
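For illustration, a minimal Airflow DAG sketch of the orchestration referenced above; the DAG id, schedule, and task logic are hypothetical placeholders.

```python
# Minimal Airflow 2.x DAG sketch (hypothetical DAG id, schedule, and task logic).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw files from object storage")

def transform():
    print("submitting the Spark transformation job")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```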
🎓 Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 3–6 years of experience in data engineering with a focus on Python and Spark
- Experience with distributed computing and handling large-scale datasets (10TB+)
- Familiarity with data security, PII handling, and compliance standards is a plus

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements, including writing complex, optimized queries
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 7+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.

Job Title : Python Data Engineer
Experience : 4+ Years
Location : Bangalore / Hyderabad (On-site)
Job Summary :
We are seeking a skilled Python Data Engineer to work on cloud-native data platforms and backend services.
The role involves building scalable APIs, working with diverse data systems, and deploying containerized services using modern cloud infrastructure.
Mandatory Skills : Python, AWS, RESTful APIs, Microservices, SQL/PostgreSQL/NoSQL, Docker, Kubernetes, CI/CD (Jenkins/GitLab CI/AWS CodePipeline)
Key Responsibilities :
- Design, develop, and maintain backend systems using Python.
- Build and manage RESTful APIs and microservices architectures (a minimal endpoint sketch follows this list).
- Work extensively with AWS cloud services for deployment and data storage.
- Implement and manage SQL, PostgreSQL, and NoSQL databases.
- Containerize applications using Docker and orchestrate with Kubernetes.
- Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
- Collaborate with teams to ensure scalable and reliable software delivery.
- Troubleshoot and optimize application performance.
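For illustration, a minimal sketch of a containerizable REST endpoint, assuming FastAPI as the framework (the role does not prescribe one); the service name, route, and payload model are hypothetical.

```python
# Minimal REST microservice sketch (assumes FastAPI; route and payload model are hypothetical).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: str
    amount: float

ORDERS = {}  # in-memory stand-in for the PostgreSQL/NoSQL persistence layer

@app.post("/orders")
def create_order(order: Order):
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: str):
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Locally such a service would be served with uvicorn, then containerized with Docker and deployed to Kubernetes as described above.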
Must-Have Skills :
- 4+ years of hands-on experience in Python backend development.
- Strong experience with AWS cloud infrastructure.
- Proficiency in building microservices and APIs.
- Good knowledge of relational and NoSQL databases.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD tools and DevOps processes.
- Strong problem-solving and collaboration skills.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind (a minimal Snowflake streams-and-tasks sketch follows this list).
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
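For illustration, a minimal sketch of the Streams and Tasks style of incremental ELT using the Snowflake Python connector; the account, warehouse, and object names are hypothetical.

```python
# Minimal Snowflake stream + task sketch (hypothetical account, warehouse, and object names).
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# Capture changes on the staging table with a stream.
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw_orders;")

# A task that periodically loads the captured changes into the curated table.
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO curated_orders
      SELECT order_id, amount, order_ts FROM orders_stream;
""")
cur.execute("ALTER TASK merge_orders_task RESUME;")
cur.close()
conn.close()
```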
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.


What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years recent experience with Apache Spark and solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java and/or other languages and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats including Delta Tables, Parquet, CSV and JSON.
● Comfortable working in a linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment
EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2:00 PM to 11:00 PM IST

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.

Key Responsibilities :
- Centralize structured and unstructured data
- Contribute to data strategy through data modeling, management, and governance
- Build, optimize, and maintain data pipelines and management frameworks
- Collaborate with cross-functional teams to develop scalable data and AI-driven solutions
- Take ownership of projects from ideation to production
Ideal Qualifications and Skills :
- Bachelor's degree in Computer Science or equivalent experience
- 8+ years of industry experience
- Strong expertise in data modeling and management concepts
- Experience with Snowflake, data warehousing, and data pipelines
- Proficiency in Python or another programming language
- Excellent communication, collaboration, and ownership mindset
- Foundational knowledge of API development and integration
Nice to Have :
- Experience with Tableau, Alteryx
- Master data management implementation experience
Success Factors :
- Strong technical foundation
- Collaborative mindset
- Ability to navigate complex data challenges
- Ownership mindset and startup-like culture fit

Job Description: Data Engineer
Position Overview:
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
Ideal Candidate
10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
Proficiency in SQL, Python, and Scala for data processing and analytics.
Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
Proven ability to drive technical strategy and align it with business objectives.
Strong leadership, communication, and stakeholder management skills.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in DataBricks and setting up and managing data pipelines, data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate's experience:
- Experience from 4 to 6 years - salary up to 22 LPA
- Experience from 5 to 8 years - salary up to 30 LPA
- Experience of more than 8 years - salary up to 40 LPA
Hiring for a Big 4 company
GCP Data Engineer
GCP - Mandatory
3-7 Years
Gurgaon location
Only candidates currently serving notice or immediate joiners can apply
Notice Period - less than 30 days

Job Description: Data Engineer
Position Overview:
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis (a minimal boto3 sketch follows this list).
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
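For illustration, a minimal boto3 sketch of landing a file in S3 and triggering a Glue job; the bucket, key, job, and argument names are hypothetical.

```python
# Minimal boto3 sketch: land a file in S3, then start a Glue job
# (hypothetical bucket, key, and job names).
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Land the raw extract in the data lake's raw zone.
s3.upload_file("orders_2024_01_01.csv", "my-data-lake-raw",
               "orders/2024/01/01/orders.csv")

# Trigger the Glue ETL job that curates the raw zone.
response = glue.start_job_run(
    JobName="orders-curation-job",
    Arguments={"--load_date": "2024-01-01"},
)
print("Started Glue job run:", response["JobRunId"])
```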
Required Skills and Qualifications
· Professional Experience: 5+ years of experience in data engineering or a related field.
· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
· AWS Glue for ETL/ELT.
· S3 for storage.
· Redshift or Athena for data warehousing and querying.
· Lambda for serverless compute.
· Kinesis or SNS/SQS for data streaming.
· IAM Roles for security.
· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
· Version Control: Proficient with Git-based workflows.
· Problem Solving: Excellent analytical and debugging skills.
Optional Skills
· Knowledge of data modeling and data warehouse design principles.
· Experience with data visualization tools (e.g., Tableau, Power BI).
· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
· Exposure to other programming languages like Scala or Java.
Education
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Why Join Us?
· Opportunity to work on cutting-edge AWS technologies.
· Collaborative and innovative work environment.
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of data structures
● Good at SQL query/optimization
● Strong fundamentals of OOP
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
Role & Responsibilities
Data Organization and Governance: Define and maintain governance standards that span multiple systems (AWS, Fivetran, Snowflake, PostgreSQL, Salesforce/nCino, Looker), ensuring that data remains accurate, accessible, and organized across the organization.
Solve Data Problems Proactively: Address recurring data issues that sidetrack operational and strategic initiatives by implementing processes and tools to anticipate, identify, and resolve root causes effectively.
System Integration: Lead the integration of diverse systems into a cohesive data environment, optimizing workflows and minimizing manual intervention.
Hands-On Problem Solving: Take a hands-on approach to resolving reporting issues and troubleshooting data challenges when necessary, ensuring minimal disruption to business operations.
Collaboration Across Teams: Work closely with business and technical stakeholders to understand and solve our biggest challenges
Mentorship and Leadership: Guide and mentor team members, fostering a culture of accountability and excellence in data management practices.
Strategic Data Support: Ensure that marketing, analytics, and other strategic initiatives are not derailed by data integrity issues, enabling the organization to focus on growth and innovation.

Role & Responsibilities
Lead and mentor a team of data engineers, ensuring high performance and career growth.
Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
Drive the development and implementation of data governance frameworks and best practices.
Work closely with cross-functional teams to define and execute a data roadmap.
Optimize data processing workflows for performance and cost efficiency.
Ensure data security, compliance, and quality across all data platforms.
Foster a culture of innovation and technical excellence within the data team.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool)
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.


Role & Responsibilities
Lead and mentor a team of data engineers, ensuring high performance and career growth.
Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
Drive the development and implementation of data governance frameworks and best practices.
Work closely with cross-functional teams to define and execute a data roadmap.
Optimize data processing workflows for performance and cost efficiency.
Ensure data security, compliance, and quality across all data platforms.
Foster a culture of innovation and technical excellence within the data team.



Job Title : Sr. Data Engineer
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2-11 PM
Availability : Immediate
Job Description :
- We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
- The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
- You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift.
- Strong debugging, collaboration, and communication skills are essential.
Experience: 4+ years.
Location: Vadodara & Pune
Skill Set - Snowflake, Power BI, ETL, SQL, Data Pipelines
What you'll be doing:
- Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
- Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion.
- Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
- Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
- Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
- Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
- Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
- Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
- Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
What you need:
Basic Skills:
- 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
- Strong experience with Apache Kafka for stream processing and real-time data integration.
- Proficiency in SQL and ETL/ELT processes.
- Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
- Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
- Knowledge of data governance, security, and compliance best practices.
- Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
- Ability to work in a collaborative team environment and communicate effectively with cross-functional teams
Responsibilities:
- Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
- Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
- Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
- Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion.
- Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
- Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
- Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
- Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
- Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
- Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform.
For a startup run by a serial founder:
Job Description :
- Architect and develop our B2B platform, focusing on delivering an exceptional user experience.
- Recruit and mentor an engineering team to build scalable and reliable technology solutions.
- Work in tandem with co-founders to ensure technology strategies are well-aligned with business objectives.
- Manage technical architecture to guarantee performance, scalability, and adaptability for future needs.
- Drive innovation by adopting advanced technologies into our product development cycle.
- Promote a culture of technical excellence and collaborative spirit within the engineering team.
Qualifications :
- Over 8 years of experience in technology team management roles, with a proven track record in software development, system architecture, and DevOps.
- Entrepreneurial mindset with a strong interest in building and scaling innovative products.
- Exceptional leadership abilities with experience in team building and mentorship.
- Strategic thinker with a history of delivering effective technology solutions.
Skills :
- Expertise in modern programming languages and application frameworks.
- Deep knowledge of cloud platforms, databases, and system design principles.
- Excellent analytical, problem-solving, and decision-making skills.
- Strong communication skills with the ability to lead cross-functional teams effectively.


Job Description: Product Manager for GenAI Applications on Data Products
About the Company:
We are a forward-thinking technology company specializing in creating innovative data products and AI applications. Our mission is to harness the power of data and AI to drive business growth and efficiency. We are seeking a dynamic and experienced Product Manager to join our team and lead the development of cutting-edge GenAI applications.
Role Overview:
As a Product Manager for GenAI Applications, you will be responsible for conceptualizing, developing, and managing AI-driven products that leverage our data platforms. You will work closely with cross-functional teams, including engineering, data science, marketing, and sales, to ensure the successful delivery of high-impact AI solutions. Your understanding of business user needs and ability to translate them into effective AI applications will be crucial.
Key Responsibilities:
- Lead the end-to-end product lifecycle from ideation to launch for GenAI applications.
- Collaborate with engineering and data science teams to design, develop, and deploy AI solutions.
- Conduct market research and gather user feedback to identify opportunities for new product features and improvements.
- Develop detailed product requirements, roadmaps, and user stories to guide development efforts.
- Work with business stakeholders to understand their needs and ensure the AI applications meet their requirements.
- Drive the product vision and strategy, aligning it with company goals and market demands.
- Monitor and analyze product performance, leveraging data to make informed decisions and optimizations.
- Coordinate with marketing and sales teams to create go-to-market strategies and support product launches.
- Foster a culture of innovation and continuous improvement within the product development team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field.
- 3-5 years of experience in product management, specifically in building AI applications.
- Proven track record of developing and launching AI-driven products from scratch.
- Experience working with data application layers and understanding data architecture.
- Strong understanding of the psyche of business users and the ability to translate their needs into technical solutions.
- Excellent project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
- Strong analytical and problem-solving skills, with a data-driven approach to decision making.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Passion for AI and a deep understanding of the latest trends and technologies in the field.
Benefits:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge AI technologies and products.
- Collaborative and innovative work environment.
- Professional development opportunities and career growth.
If you are a passionate Product Manager with a strong background in AI and data products, and you are excited about building transformative AI applications, we would love to hear from you. Apply now to join our dynamic team and make an impact in the world of AI and data.
Data Engineer
Brief Posting Description:
This person will work independently or with a team of data engineers on cloud technology products, projects, and initiatives. They will work with all customers, both internal and external, to ensure all data-related features are implemented in each solution, and will collaborate with business partners and other technical teams across the organization as required to deliver the proposed solutions.
Detailed Description:
· Works with Scrum masters, product owners, and others to identify new features for digital products.
· Works with IT leadership and business partners to design features for the cloud data platform.
· Troubleshoots production issues of all levels and severities, and tracks progress from identification through resolution.
· Maintains culture of open communication, collaboration, mutual respect and productive behaviors; participates in the hiring, training, and retention of top tier talent and mentors team members to new and fulfilling career experiences.
· Identifies risks, barriers, efficiencies and opportunities when thinking through development approach; presents possible platform-wide architectural solutions based on facts, data, and best practices.
· Explores all technical options when considering a solution, including homegrown coding, third-party sub-systems, enterprise platforms, and existing technology components.
· Actively participates in collaborative effort through all phases of software development life cycle (SDLC), including requirements analysis, technical design, coding, testing, release, and customer technical support.
· Develops technical documentation, such as system context diagrams, design documents, release procedures, and other pertinent artifacts.
· Understands lifecycle of various technology sub-systems that comprise the enterprise data platform (i.e., version, release, roadmap), including current capabilities, compatibilities, limitations, and dependencies; understands and advises of optimal upgrade paths.
· Establishes relationships with key IT, QA, and other corporate partners, and regularly communicates and collaborates accordingly while working on cross-functional projects or production issues.
Job Requirements:
EXPERIENCE:
2 years required; 3 - 5 years preferred experience in a data engineering role.
2 years required, 3 - 5 years preferred experience in Azure data services (Data Factory, Databricks, ADLS, Synapse, SQL DB, etc.)
EDUCATION:
Bachelor’s degree in information technology, computer science, or a data-related field preferred
SKILLS/REQUIREMENTS:
Expertise working with databases and SQL.
Strong working knowledge of Azure Data Factory and Databricks (see the sketch after this list)
Strong working knowledge of code management and continuous integration systems (Azure DevOps or GitHub preferred)
Strong working knowledge of cloud relational databases (Azure Synapse and Azure SQL preferred)
Familiarity with Agile delivery methodologies
Familiarity with NoSQL databases (such as CosmosDB) preferred.
Any experience with Python, DAX, Azure Logic Apps, Azure Functions, IoT technologies, Power BI, Power Apps, SSIS, Informatica, Teradata, Oracle DB, and Snowflake preferred but not required.
Ability to multi-task and reprioritize in a dynamic environment.
Outstanding written and verbal communication skills
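To make the Azure Data Factory / Databricks expectation above more concrete, here is a minimal, purely illustrative PySpark snippet of the sort a Databricks notebook step in such a pipeline might run; the storage account, container, and column names are invented, and it assumes ADLS Gen2 access is already configured on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is provided; getOrCreate() keeps this runnable elsewhere too.
spark = SparkSession.builder.appName("monthly_sales_curation").getOrCreate()

# Hypothetical raw zone path in ADLS Gen2 (placeholder account/container).
sales = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/")

# Curate: aggregate raw sales to monthly totals per region.
monthly = (
    sales.withColumn("month", F.date_trunc("month", F.col("sale_ts")))
         .groupBy("month", "region")
         .agg(F.sum("amount").alias("total_amount"))
)

# Write back to the curated zone; Delta is the usual default table format on Databricks.
(monthly.write.format("delta").mode("overwrite")
        .save("abfss://curated@mydatalake.dfs.core.windows.net/monthly_sales/"))
```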
Working Environment:
General Office – Work is generally performed within an office environment, with standard office equipment. Lighting and temperature are adequate and there are no hazardous or unpleasant conditions caused by noise, dust, etc.
Physical requirements:
Work is generally sedentary in nature but may require standing and walking for up to 10% of the time.
Mental requirements:
Employee required to organize and coordinate schedules.
Employee required to analyze and interpret complex data.
Employee required to problem-solve.
Employee required to communicate with the public.

Role Overview
We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies.
This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.
Responsibilities
- Lead the development of fintech solutions, focusing on fraud prevention and AML, using Typescript, ReactJs, Python, and SQL databases.
- Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
- Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
- Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
- Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
- Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.
Requirements
- 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
- Expertise in Typescript, ReactJs, and familiarity with Python.
- Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
- Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
- Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
- A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.
Why Join Us?
- Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
- Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
- Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
- Work in an environment that values innovation, leadership, and the long-term success of its employees.

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, Pyspark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data (see the sketch after this list).
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality standards are met.
- Support CI/CD processes.
- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
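Purely as a hedged illustration of the telemetry ETL work listed above (not this company's actual pipeline), a small pandas sketch resampling raw telemetry samples into per-signal, one-minute aggregates could look like the following; the file paths and column names are invented.

```python
import pandas as pd

# Hypothetical telemetry extract: one row per signal sample (timestamp, signal, value).
raw = pd.read_parquet("telemetry_batch.parquet")  # placeholder path

# Basic cleaning: drop incomplete rows and make the timestamp a proper UTC datetime index.
df = (
    raw.dropna(subset=["timestamp", "signal", "value"])
       .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"], utc=True))
       .set_index("timestamp")
)

# Resample to 1-minute aggregates per signal (a typical time-series preparation step).
per_minute = (
    df.groupby("signal")
      .resample("1min")["value"]
      .agg(["mean", "min", "max", "count"])
      .reset_index()
)

per_minute.to_parquet("telemetry_1min.parquet")
```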
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (Pandas/Numpy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
A candidate with 2 to 6 years of relevant experience in the Snowflake Data Cloud.
Mandatory Skills:
• Excellent knowledge of the Snowflake Data Cloud.
• Excellent knowledge of SQL.
• Good knowledge of ETL.
• Must have working knowledge of Azure Data Factory.
• Must have general awareness of Azure Cloud.
• Must be aware of optimization techniques in data retrieval and data loading (a minimal sketch follows).
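As an illustrative sketch only (account, stage, and table names are placeholders), the Snowflake Python connector example below touches the two optimization points above: bulk-loading staged Parquet files with COPY INTO instead of row-by-row inserts, and keeping retrieval queries selective on a date column so Snowflake can prune micro-partitions.

```python
import snowflake.connector

# Hypothetical connection parameters; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Data loading: bulk COPY from an internal stage rather than many single-row INSERTs.
cur.execute("""
    COPY INTO STAGING.ORDERS
    FROM @ETL_STAGE/orders/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")

# Data retrieval: filter on a date column so only recent micro-partitions are scanned.
cur.execute("""
    SELECT order_date, SUM(amount) AS revenue
    FROM ANALYTICS.MARTS.FCT_ORDERS
    WHERE order_date >= DATEADD(day, -7, CURRENT_DATE)
    GROUP BY order_date
""")
print(cur.fetchall())

cur.close()
conn.close()
```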

About the role:
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.
Here’s what will be expected out of you:
➢ Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.
➢ Develop data pipelines that make data available across platforms.
➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.
➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.
➢ Work closely with Devops and senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.
What we want:
➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.
➢ 2+ years of experience building pipelines and performing ETL on Redshift with industry-standard best practices.
➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.
➢ Good understanding of orchestration tools like Airflow (see the sketch after this list).
➢ Strong Python and SQL coding skills.
➢ Strong experience in distributed systems like Spark.
➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).
➢ Solid hands-on experience with various data extraction techniques like CDC or time/batch based and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near real-time and batch data extraction.
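For illustration only (hosts, bucket, IAM role, and table names are hypothetical), a minimal Airflow DAG of the kind referenced in the list above might orchestrate a daily Redshift COPY from S3 like this:

```python
from datetime import datetime

import psycopg2
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_orders_to_redshift():
    # Placeholder connection details; Airflow connections/secrets would be used in practice.
    conn = psycopg2.connect(host="my-redshift-host", dbname="analytics",
                            user="etl", password="***", port=5439)
    with conn, conn.cursor() as cur:
        # Bulk COPY from S3 is the standard (and fastest) way to load Redshift.
        cur.execute("""
            COPY staging.orders
            FROM 's3://my-bucket/orders/latest/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            FORMAT AS PARQUET;
        """)
    conn.close()


with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_orders", python_callable=load_orders_to_redshift)
```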
Note:
Experience at product-based or e-commerce companies is an added advantage.

Requirements:
- 2+ years of experience (4+ for Senior Data Engineer) with system/data integration, development or implementation of enterprise and/or cloud software. Engineering degree in Computer Science, Engineering or a related field.
- Extensive hands-on experience with data integration/EAI technologies (File, API, Queues, Streams), ETL tools and building custom data pipelines.
- Demonstrated proficiency with Python, JavaScript and/or Java.
- Familiarity with version control/SCM is a must (experience with git is a plus).
- Experience with relational and NoSQL databases (any vendor).
- Solid understanding of cloud computing concepts.
- Strong organisational and troubleshooting skills with attention to detail.
- Strong analytical ability, judgment and problem-solving techniques.
- Interpersonal and communication skills with the ability to work effectively in a cross-functional team.
We are looking for a passionate technologist with experience building SaaS products for a once-in-a-lifetime opportunity to lead Engineering for an AI-powered Financial Operations platform that seamlessly monitors, optimizes, reconciles and forecasts cashflow.
Background
An incredibly rare opportunity for a VP Engineering to join a top-tier, VC-incubated SaaS startup and an outstanding management team. The product is currently in the build stage, with a solid design-partner pipeline of ~$250K, and the company will soon raise a pre-seed/seed round with marquee investors.
Responsibilities
- Develop and implement the company's technical strategy and roadmap, ensuring that it aligns with the overall business objectives and is scalable, reliable, and secure.
- Manage and optimize the company's technical resources, including staffing, software, hardware, and infrastructure, to ensure that they are being used effectively and efficiently.
- Work with the founding team and other executives to identify opportunities for innovation and new technology solutions, and evaluate the feasibility and impact of these solutions on the business.
- Lead the engineering function in developing and deploying high-quality software products and solutions, ensuring that they meet or exceed customer requirements and industry standards.
- Analyze and evaluate technical data and metrics, identifying areas for improvement and implementing changes to drive efficiency and effectiveness.
- Ensure that the company is in compliance with all legal and regulatory requirements, including data privacy and security regulations.
Eligibility criteria:
- 6+ years of experience in developing scalable SaaS products.
- Strong technical background with 6+ years of experience with a strong focus on SaaS, AI, and finance software.
- Prior experience in leadership roles.
- Entrepreneurial mindset, with a strong desire to innovate and grow a startup from the ground up.
Perks:
- Vested Equity.
- Ownership in the company.
- Build alongside passionate and smart individuals.
1. Flink Sr. Developer
Location: Bangalore (WFO)
Mandatory Skills & Experience (10+ years): Must have hands-on experience with Flink, Kubernetes, Docker, microservices, any one of Kafka/Pulsar, CI/CD, and Java.
Job Responsibilities:
As the Data Engineer lead, you are expected to engineer, develop, support, and deliver real-time streaming applications that model real-world network entities, and have a good understanding of the Telecom Network KPIs to improve the customer experience through automation of operational network data. Real-time application development will include building stateful in-memory backends and real-time streaming APIs, leveraging real-time databases such as Apache Druid.
- Architecting and creating the streaming data pipelines that will enrich the data and support the use cases for telecom networks.
- Collaborating closely with multiple stakeholders, gathering requirements and seeking iterative feedback on recently delivered application features.
- Participating in peer review sessions to provide teammates with code review as well as architectural and design feedback.
- Composing detailed low-level design documentation, call flows, and architecture diagrams for the solutions you build.
- Running to a crisis anytime the Operations team needs help.
- Performing duties with minimum supervision and participating in cross-functional projects as scheduled.
Skills:
- A Flink Sr. Developer who has implemented and dealt with failure scenarios of processing data through Flink.
- Experience with Java, K8S, Argo CD/Workflow, Prometheus, and Aether.
- Familiarity with object-oriented design patterns.
- Experience with Application Development DevOps tools.
- Experience with distributed cloud-native application design deployed on Kubernetes platforms.
- Experience with PostgreSQL, Druid, and Oracle databases.
- Experience with Messaging Bus - Kafka/Pulsar.
- Experience with AI/ML - Kubeflow, JupyterHub.
- Experience building real-time applications which leverage streaming data (see the sketch after this list).
- Experience with streaming message bus platforms, either Kafka or Pulsar.
- Experience with Apache Spark applications and Hadoop platforms.
- Strong problem-solving skills.
- Strong written and oral communication skills.
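Although this role centers on Java-based Flink services reading from Kafka or Pulsar, a tiny PyFlink sketch can illustrate the keyed, stateful aggregation pattern behind the real-time KPI work described above; the sample records and field meanings are invented, and a production job would consume a message bus rather than an in-memory collection.

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

# Hypothetical KPI samples: (cell_id, latency_ms).
samples = env.from_collection([
    ("cell-1", 40.0),
    ("cell-1", 55.0),
    ("cell-2", 20.0),
])

# Keyed, stateful aggregation: running maximum latency per cell.
running_max = (
    samples
    .key_by(lambda s: s[0])
    .reduce(lambda a, b: (a[0], max(a[1], b[1])))
)

running_max.print()
env.execute("kpi_running_max")
```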
Job description
Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore
Please note: This position is focused on development rather than migration. Experience in Nifi or Tibco is mandatory.
Mandatory Skills: ETL, DevOps platform, Nifi or Tibco
We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.
Responsibilities:
• Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
• Develop and maintain data-oriented scripting using languages such as Python.
• Create and manage data structures to ensure efficient and accurate data storage and retrieval.
• Work with cloud and big data technologies, specifically the AWS and Azure stack, to process and analyze large volumes of data.
• Utilize ETL tools such as Nifi and Tibco to extract, transform, and load data into various systems.
• Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
• Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
• Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
• Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.
Requirements:
• A minimum of 6 years of relevant experience as a Data Engineer.
• Proficiency in ETL, SQL, and other advanced data engineering techniques.
• Strong programming skills in scripting languages such as Python.
• Experience in creating and maintaining data structures for efficient data storage and retrieval.
• Familiarity with cloud and big data technologies, specifically the AWS and Azure stack.
• Hands-on experience with ETL tools, particularly Nifi and Tibco.
• In-depth knowledge of database structures, including MSSQL and Vertica.
• Proven experience in managing and operating data platforms.
• Strong problem-solving and analytical skills with the ability to handle complex data challenges.
• Excellent communication and collaboration skills to work effectively in a team environment.
• Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.

Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions offered to the product development.
● Filling the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details
● Strong multitasking skills and ability to work in a fast-paced environment
● 5+ years of experience with Python to develop scripts.
● Know your way around RESTful APIs (able to integrate; not necessarily publish).
● You are familiar with pulling and pushing files from SFTP and AWS S3 (see the sketch after this requirements list).
● Experience with any cloud solution, including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data from relational databases.
● Familiarity with Linux (and a Linux work environment).
● Excellent written and verbal communication skills
● Extracting, transforming, and loading data into internal databases and Hadoop
● Optimizing our new and existing data pipelines for speed and reliability
● Deploying product build and product improvements
● Documenting and managing multiple repositories of code
● Experience with SQL and NoSQL databases (Cassandra, MySQL).
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena).
● Hands-on experience in Airflow.
● Understanding of best practices, common coding patterns and good practices around storing, partitioning, warehousing and indexing of data.
● Experience in reading data from Kafka topics (both live stream and offline).
● Experience in PySpark and DataFrames.
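As a small, assumption-heavy sketch of the SFTP and S3 file movement mentioned above (hosts, credentials, bucket, and paths are placeholders), the following pulls a file from a partner SFTP server with paramiko and lands it in S3 with boto3:

```python
import boto3
import paramiko

# Placeholder endpoints and paths; real credentials would come from a vault or env vars.
SFTP_HOST, SFTP_USER, SFTP_PASS = "partner.sftp.example.com", "feeduser", "***"
REMOTE_PATH, LOCAL_PATH = "/outbox/daily_feed.csv", "/tmp/daily_feed.csv"
BUCKET, S3_KEY = "my-landing-bucket", "raw/daily_feed.csv"

# Pull the file from the partner SFTP server.
transport = paramiko.Transport((SFTP_HOST, 22))
transport.connect(username=SFTP_USER, password=SFTP_PASS)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get(REMOTE_PATH, LOCAL_PATH)
sftp.close()
transport.close()

# Push it to S3 so downstream pipelines (ETL, warehouse loads) can pick it up.
boto3.client("s3").upload_file(LOCAL_PATH, BUCKET, S3_KEY)
```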
Responsibilities:
You will be:
● Collaborating across an agile team to continuously design, iterate, and develop big data systems.
● Extracting, transforming, and loading data into internal databases.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying new products and product improvements.
● Documenting and managing multiple repositories of code.
As AWS Data Engineer, you are a full-stack data engineer that loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and the flexibility of your solution that scales to respond to broader business questions.
If you love to solve problems using your skills, then come join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What will you do?
- Write efficient code in PySpark and Amazon Glue
- Write SQL queries in Amazon Athena and Amazon Redshift
- Explore new technologies and learn new techniques to solve business problems creatively
- Collaborate with many teams - engineering and business, to build better data products and services
- Deliver the projects along with the team collaboratively and manage updates to customers on time
What are we looking for?
- 1 to 3 years of experience in Apache Spark, PySpark, and Amazon Glue
- 2+ years of experience in writing ETL jobs using PySpark and Spark SQL
- 2+ years of experience in SQL queries and stored procedures
- A deep understanding of the DataFrame API and the transformation functions supported by Spark 2.7+ (a minimal sketch follows this list)
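To give a concrete, hypothetical flavor of the PySpark DataFrame work listed above (paths, column names, and the Glue/Athena wiring are assumptions, not this team's actual code), a typical ETL job might filter, derive, and aggregate an orders dataset like this:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue").getOrCreate()

# Hypothetical raw zone; on AWS Glue this would usually come from the Glue Data Catalog.
orders = spark.read.parquet("s3://my-bucket/raw/orders/")

# Common DataFrame transformations: filter, derive a column, aggregate.
daily_revenue = (
    orders
    .where(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Spark SQL view for ad-hoc queries; the written Parquet could be exposed to Athena.
daily_revenue.createOrReplaceTempView("daily_revenue")
spark.sql("SELECT * FROM daily_revenue ORDER BY order_date DESC LIMIT 7").show()
daily_revenue.write.mode("overwrite").parquet("s3://my-bucket/marts/daily_revenue/")
```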
You will be preferred if you have
- Prior experience in working on AWS EMR, Apache Airflow
- Certifications such as AWS Certified Big Data – Specialty, Cloudera Certified Big Data Engineer, or Hortonworks Certified Big Data Engineer
- Understanding of DataOps Engineering
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment:
You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
I am looking for a Mulesoft Developer for a reputed MNC.
Experience: 6+ Years
Relevant experience: 4 Years
Location : Pune, Mumbai, Bangalore, Indore, Kolkata
Skills:
Mulesoft
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools/data models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes.
o Worked with business teams or was a part of the team that implemented process re-engineering driven by data analytics/insights
• Should have deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems. E.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset.
• Ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools for one or more cloud platforms - Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – Articles/White Papers/Interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform