
50+ Data engineering Jobs in India

Apply to 50+ Data engineering Jobs on CutShort.io. Find your next job, effortlessly. Browse Data engineering Jobs and apply today!

Premier global software products and services firm

Agency job
via Recruiting Bond by Pavan Kumar
Hyderabad, Ahmedabad, Indore
5 - 10 yrs
₹10L - ₹20L / yr
Data engineering
Data modeling
Database Design
Data Warehouse (DWH)
Data Warehousing
+9 more

Job Summary: 

As a Data Engineering Lead, your role will involve designing, developing, and implementing interactive dashboards and reports using data engineering tools. You will work closely with stakeholders to gather requirements and translate them into effective data visualizations that provide valuable insights. Additionally, you will be responsible for extracting, transforming, and loading data from multiple sources into Power BI, ensuring its accuracy and integrity. Your expertise in Power BI and data analytics will contribute to informed decision-making and support the organization in driving data-centric strategies and initiatives.


We are looking for you!

As an ideal candidate for the Data Engineering Lead position, you embody the qualities of a team player with a relentless get-it-done attitude. Your intellectual curiosity and customer focus drive you to continuously seek new ways to add value to your job accomplishments.


You thrive under pressure, maintaining a positive attitude and understanding that your career is a journey. You are willing to make the right choices to support your growth. In addition to your excellent communication skills, both written and verbal, you have a proven ability to create visually compelling designs using tools like Power BI and Tableau that effectively communicate our core values. 


You build high-performing, scalable, enterprise-grade applications and teams. Your creativity and proactive nature enable you to think differently, find innovative solutions, deliver high-quality outputs, and ensure customers remain referenceable. With over eight years of experience in data engineering, you possess a strong sense of self-motivation and take ownership of your responsibilities. You prefer to work independently with little to no supervision. 


You are process-oriented, adopt a methodical approach, and demonstrate a quality-first mindset. You have led mid to large-size teams and accounts, consistently using constructive feedback mechanisms to improve productivity, accountability, and performance within the team. Your track record showcases your results-driven approach, as you have consistently delivered successful projects with customer case studies published on public platforms. Overall, you possess a unique combination of skills, qualities, and experiences that make you an ideal fit to lead our data engineering team(s).


You value inclusivity and want to join a culture that empowers you to show up as your authentic self. 


You know that success hinges on commitment, our differences make us stronger, and the finish line is always sweeter when the whole team crosses together. In your role, you should be driving the team using data, data, and more data. You will manage multiple teams, oversee agile stories and their statuses, handle escalations and mitigations, plan ahead, identify hiring needs, collaborate with recruitment teams for hiring, enable sales with pre-sales teams, and work closely with development managers/leads for solutioning and delivery statuses, as well as architects for technology research and solutions.


What You Will Do: 

  • Analyze business requirements.
  • Analyze the data model, perform gap analysis against the business requirements, and design and model the Power BI schema.
  • Transform data in Power BI, SQL, or an ETL tool (a minimal sketch follows this list).
  • Create DAX formulas, reports, and dashboards.
  • Write SQL queries and stored procedures.
  • Design effective Power BI solutions based on business requirements.
  • Manage a team of Power BI developers and guide their work.
  • Integrate data from various sources into Power BI for analysis.
  • Optimize the performance of reports and dashboards for smooth usage.
  • Collaborate with stakeholders to align Power BI projects with business goals.
  • Knowledge of data warehousing is a must; data engineering is a plus.
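
By way of illustration only (not part of the original posting), here is a minimal Python sketch of the transform-and-load flow above: aggregate staged rows into a reporting table that a Power BI dataset can read. The connection string, schemas, and column names are hypothetical placeholders.

```python
# Hypothetical ETL sketch: staging table -> reporting table for a BI tool.
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; swap in your warehouse credentials.
engine = create_engine("postgresql://etl_user:secret@warehouse-host/analytics")

# Extract: raw sales rows from a staging table.
raw = pd.read_sql("SELECT order_id, region, amount, order_date FROM staging.sales", engine)

# Transform: roll up to the monthly grain the dashboard needs.
monthly = (
    raw.assign(order_month=pd.to_datetime(raw["order_date"]).dt.to_period("M").dt.to_timestamp())
       .groupby(["region", "order_month"], as_index=False)["amount"]
       .sum()
       .rename(columns={"amount": "total_sales"})
)

# Load: rebuild the reporting table that Power BI reads.
monthly.to_sql("monthly_sales", engine, schema="reporting", if_exists="replace", index=False)
```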


What we need?

  • B.Tech in Computer Science or equivalent
  • 5+ years of relevant experience


Why join us?

  • Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
  • Gain hands-on experience with exposure to real-world data projects.
  • Opportunity to learn from experienced professionals and enhance your skills.
  • Contribute to exciting initiatives and make an impact from day one.
  • Competitive compensation and potential for growth within the company.
  • Recognized for excellence in data and AI solutions with industry awards and accolades.


Big4

Agency job
via Black Turtle by Kajol Teli
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹5L - ₹20L / yr
Google Cloud Platform (GCP)
Data engineering

Hiring for a Big4 company

GCP Data Engineer

GCP skills are mandatory

3-7 years of experience

Gurgaon location

Only candidates serving notice or immediate joiners can apply

Notice period: less than 30 days

Wissen Technology

Posted by Seema Srivastava
Mumbai
5 - 10 yrs
Best in industry
Python
SQL
Databases
Data engineering
Amazon Web Services (AWS)

Job Description: Data Engineer

Role Overview

We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

 

Key Responsibilities

· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis (a minimal sketch follows this list).

· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).

· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.

· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.

· Ensure data quality and consistency by implementing validation and governance practices.

· Work on data security best practices in compliance with organizational policies and regulations.

· Automate repetitive data engineering tasks using Python scripts and frameworks.

· Leverage CI/CD pipelines for deployment of data workflows on AWS.
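
As a hedged illustration of the first responsibility (not taken from the posting), here is a minimal boto3 sketch that triggers an AWS Glue ETL job and waits for it to finish; the job name, region, and arguments are hypothetical.

```python
# Hypothetical job name, region, and arguments; requires AWS credentials.
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")


def run_glue_job(job_name: str, arguments: dict) -> str:
    """Start a Glue job run and poll until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name, Arguments=arguments)["JobRunId"]
    while True:
        run = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]
        if run["JobRunState"] in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return run["JobRunState"]
        time.sleep(30)  # poll every 30 seconds


if __name__ == "__main__":
    state = run_glue_job(
        "orders-daily-etl",  # placeholder job
        {"--source_path": "s3://raw-bucket/orders/", "--target_path": "s3://curated-bucket/orders/"},
    )
    print(f"Glue job finished: {state}")
```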

 

Required Skills and Qualifications

· Professional Experience: 5+ years of experience in data engineering or a related field.

· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.

· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:

· AWS Glue for ETL/ELT.

· S3 for storage.

· Redshift or Athena for data warehousing and querying.

· Lambda for serverless compute.

· Kinesis or SNS/SQS for data streaming.

· IAM Roles for security.

· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.

· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.

· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.

· Version Control: Proficient with Git-based workflows.

· Problem Solving: Excellent analytical and debugging skills.

 

Optional Skills

· Knowledge of data modeling and data warehouse design principles.

· Experience with data visualization tools (e.g., Tableau, Power BI).

· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).

· Exposure to other programming languages like Scala or Java.

 

Education

· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

 

Why Join Us?

· Opportunity to work on cutting-edge AWS technologies.

· Collaborative and innovative work environment.

 

 

MindInventory

Posted by Khushi Bhatt
Ahmedabad
3 - 5 yrs
₹4L - ₹12L / yr
Data engineering
ETL
Google Cloud Platform (GCP)
Apache Airflow
Snowflake schema
+3 more
  • Required: Minimum 3 years of experience as a Data Engineer.
  • Database Knowledge: Experience with time-series and graph databases is a must, along with SQL databases (PostgreSQL, MySQL) and NoSQL databases (Firestore, MongoDB).
  • Data Pipelines: Understanding of data pipeline processes such as ETL, ELT, and streaming pipelines, with tools like AWS Glue, Google Dataflow, Apache Airflow, and Apache NiFi (a minimal Airflow sketch follows this list).
  • Data Modeling: Knowledge of snowflake schemas and fact & dimension tables.
  • Data Warehousing Tools: Experience with Google BigQuery, Snowflake, Databricks.
  • Performance Optimization: Indexing, partitioning, caching, and query optimization techniques.
  • Python or SQL Scripting: Ability to write scripts for data processing and automation.
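
For flavor, a minimal Apache Airflow sketch of the kind of pipeline referenced above; the DAG id, task logic, and conversion rate are hypothetical placeholders (Airflow 2.x syntax).

```python
# Hypothetical daily ETL DAG; task logic is stubbed for brevity.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Pull raw rows from the source system (stubbed here).
    return [{"order_id": 1, "amount_inr": 2500}]


def transform_orders(ti):
    rows = ti.xcom_pull(task_ids="extract_orders")
    # Hypothetical conversion rate, for illustration only.
    return [{**r, "amount_usd": round(r["amount_inr"] / 83.0, 2)} for r in rows]


def load_orders(ti):
    rows = ti.xcom_pull(task_ids="transform_orders")
    print(f"loading {len(rows)} rows into the warehouse")


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)
    extract >> transform >> load
```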


For a Service Based Company

Agency job
via Vikash Technologies by Rishika Teja
Remote only
7 - 11 yrs
₹12L - ₹24L / yr
Data engineering
Snowflake schema
Data Warehousing
SQL


Key Responsibilities:

  • Design, develop, and optimize data pipelines using Snowflake.
  • Develop complex SQL queries and stored procedures for data transformation and analysis (see the sketch after this list).
  • Implement and maintain enterprise data warehouse solutions.
  • Ensure data quality, integrity, and consistency across multiple data sources.
  • Collaborate with business and analytics teams to understand data requirements.
  • Monitor and improve performance of data loads and transformations.
  • Implement security and data governance best practices within the Snowflake environment.
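
A minimal hedged sketch of one such transformation step, driven through the Snowflake Python connector; the account, credentials, and table names are placeholders, not details from the posting.

```python
# Placeholder account, credentials, and tables; requires snowflake-connector-python.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",  # hypothetical account identifier
    user="ETL_USER",
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MART",
)
try:
    with conn.cursor() as cur:
        # Upsert the latest staged orders into a fact table.
        cur.execute(
            """
            MERGE INTO fct_orders t
            USING staging.orders s ON t.order_id = s.order_id
            WHEN MATCHED THEN UPDATE SET t.amount = s.amount
            WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
            """
        )
    conn.commit()
finally:
    conn.close()
```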


Required Skills & Qualifications:

  • 7+ years of experience in Data Engineering or related field.
  • Strong hands-on experience with Snowflake.
  • Expert-level proficiency in SQL and query optimization.
  • Deep understanding of data warehouse concepts, data modeling, and ETL/ELT processes.
  • Experience with cloud platforms (e.g., AWS, Azure, or GCP).
  • Familiarity with data integration tools (e.g., Informatica, Matillion, or DBT) is a plus.
  • Strong problem-solving and analytical skills.


Preferred Qualifications:

  • Experience with Python or other scripting languages.
  • Knowledge of data governance and security frameworks.
  • Experience working in Agile/Scrum environments.


Fountane Inc

Posted by HR Fountane
Remote only
7 - 10 yrs
₹27L - ₹35L / yr
Data engineering
Data Science
Data Analytics

Position Overview: We are looking for an experienced and highly skilled Data Architect to join our team and help design, implement, and optimize data systems that support high-end analytical solutions for our clients. As a Data Architect, you will work closely with clients to understand their business needs and translate them into robust, scalable, and efficient technical solutions. You will be responsible for end-to-end data modelling, integration workflows, and data transformation processes while ensuring security, privacy, and compliance. In this role, you will also leverage the latest advancements in artificial intelligence, machine learning, and large language models (LLMs) to deliver high-impact solutions that drive business success. The ideal candidate will have a deep understanding of data infrastructure, optimization techniques, and cost-effective data management.


Key Responsibilities:


Customer Collaboration:

  • Partner with clients to gather and understand their business requirements, translating them into actionable technical specifications.

  • Act as the primary technical consultant to guide clients through data challenges and deliver tailored solutions that drive value.

Data Modeling & Integration:

  • Design and implement scalable, efficient, and optimized data models to support business operations and analytical needs.
  • Develop and maintain data integration workflows to seamlessly extract, transform, and load (ETL) data from various sources into data repositories.
  • Ensure smooth integration between multiple data sources and platforms, including cloud and on-premise systems

Data Processing & Optimization:

  • Develop, optimize, and manage data processing pipelines to enable real-time and batch data processing at scale.
  • Continuously evaluate and improve data processing performance, optimizing for throughput while minimizing infrastructure costs.

Data Governance & Security:

  • Implement and enforce data governance policies and best practices, ensuring data security, privacy, and compliance with relevant industry regulations (e.g., GDPR, HIPAA).
  • Collaborate with security teams to safeguard sensitive data and maintain privacy controls across data environments.

Cross-Functional Collaboration:

  • Work closely with data engineers, data scientists, and business analysts to ensure that the data architecture aligns with organizational objectives and delivers actionable insights.

  • Foster collaboration across teams to streamline data workflows and optimize solution delivery.

Leveraging Advanced Technologies:

  • Utilize AI, machine learning models, and large language models (LLMs) to automate processes, accelerate delivery, and provide smart, data-driven solutions to business challenges.
  • Identify opportunities to apply cutting-edge technologies to improve the efficiency, speed, and quality of data processing and analytics.

Cost Optimization:

  • Proactively manage infrastructure and cloud resources to optimize throughput while minimizing operational costs.
  • Make data-driven recommendations to reduce infrastructure overhead and increase efficiency without sacrificing performance.

Qualifications:


Experience:

  • Proven experience (5+ years) as a Data Architect or similar role, designing and implementing data solutions at scale.
  • Strong expertise in data modelling, data integration (ETL), and data transformation processes.
  • Experience with cloud platforms (AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).

Technical Skills:

  • Advanced proficiency in SQL, data modelling tools (e.g., Erwin, PowerDesigner), and data integration frameworks (e.g., Apache NiFi, Talend).
  • Strong understanding of data security protocols, privacy regulations, and compliance requirements.
  • Experience with data storage solutions (e.g., data lakes, data warehouses, NoSQL, relational databases).

AI & Machine Learning Exposure:

  • Familiarity with leveraging AI and machine learning technologies (e.g., TensorFlow, PyTorch, scikit-learn) to optimize data processing and analytical tasks.
  • Ability to apply advanced algorithms and automation techniques to improve business processes.

 Soft Skills:

  • Excellent communication skills to collaborate with clients, stakeholders, and cross-functional teams.
  • Strong problem-solving ability with a customer-centric approach to solution design.
  • Ability to translate complex technical concepts into clear, understandable terms for non-technical audiences.

Education:

Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field (or equivalent practical experience).


LIFE AT FOUNTANE:

  • Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
  • Competitive pay
  • Health insurance for spouses, kids, and parents.
  • PF/ESI or equivalent
  • Individual/team bonuses
  • Employee stock ownership plan
  • Fun/challenging variety of projects/industries
  • Flexible workplace policy - remote/physical
  • Flat organization - no micromanagement
  • Individual contribution - set your deadlines
  • Above all - culture that helps you grow exponentially!


A LITTLE BIT ABOUT THE COMPANY:

Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.

We’re a team of 120+ strong from around the world: radically open-minded, committed to excellence, respectful of one another, and always pushing our boundaries further than ever before.


Pluginlive

Posted by Harsha Saggi
Chennai
2 - 4 yrs
₹15L - ₹20L / yr
Data engineering
Python
SQL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

What you’ll do

  • Design, build, and maintain robust ETL/ELT pipelines for product and analytics data
  • Work closely with business, product, analytics, and ML teams to define data needs
  • Ensure high data quality, lineage, versioning, and observability
  • Optimize performance of batch and streaming jobs
  • Automate and scale ingestion, transformation, and monitoring workflows
  • Document data models and key business metrics in a self-serve way
  • Use AI tools to accelerate development, troubleshooting, and documentation


Must-Haves:

  • 2–4 years of experience as a data engineer (product or analytics-focused preferred)
  • Solid hands-on experience with Python and SQL
  • Experience with data pipeline orchestration tools like Airflow or Prefect
  • Understanding of data modeling, warehousing concepts, and performance optimization
  • Familiarity with cloud platforms (GCP, AWS, or Azure)
  • Bachelor's in Computer Science, Data Engineering, or a related field
  • Strong problem-solving mindset and AI-native tooling comfort (Copilot, GPTs)


NonStop io Technologies Pvt Ltd

Posted by Kalyani Wadnere
Pune
2 - 4 yrs
Best in industry
AWS Lambda
Databricks
Database migration
Apache Kafka
Apache Spark
+3 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.

Brief Description:

We are looking for a talented Data Engineer to join our team. In this role, you will design, implement, and manage data pipelines, ensuring the accessibility and reliability of data for critical business processes. This is an exciting opportunity to work on scalable solutions that power data-driven decisions.

Skillset:

Here is a list of some of the technologies you will work with (the list below is not set in stone)

Data Pipeline Orchestration and Execution:

● AWS Glue

● AWS Step Functions

● Databricks

Change Data Capture:

● Amazon Database Migration Service

● Amazon Managed Streaming for Apache Kafka with Debezium Plugin (a minimal consumer sketch appears at the end of this skills list)

Batch:

● AWS step functions (and Glue Jobs)

● Asynchronous queueing of batch job commands with RabbitMQ to various “ETL Jobs”

● Cron and supervisord processing on dedicated job server(s): Python & PHP

Streaming:

● Real-time processing via AWS MSK (Kafka), Apache Hudi, & Apache Flink

● Near real-time processing via workers (listeners) spread over AWS Lambda and custom daemons written in Python and PHP Symfony

● Languages: Python & PySpark, Unix Shell, PHP Symfony (with Doctrine ORM)

● Monitoring & Reliability: Datadog & Cloudwatch
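
By way of a hedged sketch (not from the posting itself), a tiny Python listener for a Debezium-style change event stream; the topic, broker, and envelope fields are assumptions.

```python
# Assumes kafka-python and a Debezium-style JSON envelope; names are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver1.public.orders",  # hypothetical Debezium topic
    bootstrap_servers=["msk-broker:9092"],
    group_id="orders-sink",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value or {}
    payload = event.get("payload", event)  # envelope may already be unwrapped
    op = payload.get("op")  # Debezium ops: "c"=create, "u"=update, "d"=delete
    row = payload.get("after") or payload.get("before")
    print(f"{op}: {row}")  # a real sink would upsert/delete downstream
```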

Things you will do:

● Build dashboards using Datadog and Cloudwatch to ensure system health and user support

● Build schema registries that enable data governance

● Partner with end-users to resolve service disruptions and evangelize our data product offerings

● Vigilantly oversee data quality and alert upstream data producers of issues

● Support and contribute to the data platform architecture strategy, roadmap, and implementation plans to support the company’s data-driven initiatives and business objectives

● Work with Business Intelligence (BI) consumers to deliver enterprise-wide fact and dimension data product tables to enable data-driven decision-making across the organization.

● Other duties as assigned

Talent Pro

Posted by Mayank Choudhary
Remote only
8 - 13 yrs
₹50L - ₹100L / yr
Data engineering
Mandatory (Experience 2) - Must have 7+ YOE in hands-on...
Java
Mandatory (Experience 3) - Must have recent 4+ YOE with...
Mandatory (Experience 4) - Must have strong experience in large...
+3 more

Ideal Candidate

10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.

Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, and Go, along with web technologies like JavaScript, HTML, and CSS.

Proficiency in SQL, Python, and Scala for data processing and analytics.

Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.

Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice

Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.

Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery

Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).

Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.

Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.

Proven ability to drive technical strategy and align it with business objectives.

Strong leadership, communication, and stakeholder management skills

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Noida
5 - 9 yrs
₹40L - ₹60L / yr
Python
SQL
Data engineering
Snowflake
ETL
+5 more

About the Role:

We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes (a minimal validation sketch follows this list).
  • Team Management: Lead, mentor, and manage the data engineering team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
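
As a hedged illustration of the data quality step above, a minimal pandas validation pass; the column names and rules are hypothetical.

```python
# Hypothetical columns and rules; quarantine is stubbed with a print.
import pandas as pd


def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    required = {"order_id", "customer_id", "amount"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"source is missing columns: {sorted(missing)}")

    df = df.drop_duplicates(subset=["order_id"])        # de-duplicate on the key
    df = df.dropna(subset=["order_id", "customer_id"])  # null keys are unusable
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

    bad = df["amount"].isna() | (df["amount"] < 0)
    if bad.any():
        print(f"quarantining {int(bad.sum())} rows with invalid amounts")
    return df[~bad]


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"order_id": [1, 1, 2], "customer_id": ["a", "a", None], "amount": ["10", "10", "-5"]}
    )
    print(validate_orders(sample))
```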


Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.


Deqode

Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the Job Description:

Location: Viman Nagar, Pune

Mode: 5 days working per week


Required Tech Skills:


 ● Strong in PySpark and Python

 ● Good understanding of data structures

 ● Good at SQL queries and query optimization

 ● Strong fundamentals of OOP programming

 ● Good understanding of AWS Cloud and Big Data

 ● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DBs


Remote only
7 - 12 yrs
₹20L - ₹35L / yr
Snowflake
Looker / LookML
Data engineering
Amazon Web Services (AWS)
Data modeling

Role & Responsibilities

Data Organization and Governance: Define and maintain governance standards that span multiple systems (AWS, Fivetran, Snowflake, PostgreSQL, Salesforce/nCino, Looker), ensuring that data remains accurate, accessible, and organized across the organization.

Solve Data Problems Proactively: Address recurring data issues that sidetrack operational and strategic initiatives by implementing processes and tools to anticipate, identify, and resolve root causes effectively.

System Integration: Lead the integration of diverse systems into a cohesive data environment, optimizing workflows and minimizing manual intervention.

Hands-On Problem Solving: Take a hands-on approach to resolving reporting issues and troubleshooting data challenges when necessary, ensuring minimal disruption to business operations.

Collaboration Across Teams: Work closely with business and technical stakeholders to understand and solve our biggest challenges.

Mentorship and Leadership: Guide and mentor team members, fostering a culture of accountability and excellence in data management practices.

Strategic Data Support: Ensure that marketing, analytics, and other strategic initiatives are not derailed by data integrity issues, enabling the organization to focus on growth and innovation.

Talent Pro

Posted by Mayank Choudhary
Remote only
8 - 13 yrs
₹70L - ₹90L / yr
Data engineering
Apache Spark
Apache Kafka
Java
Python
+6 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

Adesso

Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
3 - 6 yrs
₹4L - ₹24L / yr
Data engineering
Amazon Web Services (AWS)
Windows Azure
Snowflake
Data Transformation Tool (DBT)
+3 more

We are seeking a skilled Cloud Data Engineer who has experience with cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.

Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.

Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound and solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science, or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

Adesso India

Agency job
via HashRoot by Deepak S
Remote only
3 - 11 yrs
₹6L - ₹27L / yr
Data engineering
Data architecture
Amazon Web Services (AWS)
Windows Azure
Data Transformation Tool (DBT)
+3 more

Overview

adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.

Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.


Job Description

We are seeking a skilled Cloud Data Engineer who has experience with cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.


Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.


Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound and solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science, or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

Talent Pro

Posted by Mayank Choudhary
Remote only
11 - 18 yrs
₹70L - ₹80L / yr
Java
Go Programming (Golang)
NodeJS (Node.js)
Python
Apache Kafka
+7 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

Talent Pro

Posted by Mayank Choudhary
Remote only
11 - 18 yrs
₹50L - ₹70L / yr
Java
Data engineering
NodeJS (Node.js)
Python
Go Programming (Golang)
+5 more

Role & Responsibilities

Lead and mentor a team of data engineers, ensuring high performance and career growth.

Architect and optimize scalable data infrastructure, ensuring high availability and reliability.

Drive the development and implementation of data governance frameworks and best practices.

Work closely with cross-functional teams to define and execute a data roadmap.

Optimize data processing workflows for performance and cost efficiency.

Ensure data security, compliance, and quality across all data platforms.

Foster a culture of innovation and technical excellence within the data team.

NeoGenCode Technologies Pvt Ltd

Posted by Akshay Patil
Noida
5 - 10 yrs
₹2L - ₹10L / yr
Looker
SQL
Python
ETL
Amazon Web Services (AWS)
+6 more

Job Title : Sr. Data Engineer

Experience : 5+ Years

Location : Noida (Hybrid – 3 Days in Office)

Shift Timing : 2-11 PM

Availability : Immediate


Job Description :

  • We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
  • The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
  • You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift (a minimal Athena sketch follows this list).
  • Strong debugging, collaboration, and communication skills are essential.
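
A minimal hedged sketch of querying Athena from Python with boto3, as the responsibilities above suggest; the database, query, and S3 output location are hypothetical.

```python
# Hypothetical database and output bucket; requires AWS credentials.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")


def run_athena_query(sql: str, database: str, output_s3: str) -> list[dict]:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"query {qid} ended in state {state}")
    # First page of results only (up to 1000 rows); paginate for more.
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    header, *data = rows
    cols = [c.get("VarCharValue") for c in header["Data"]]
    return [dict(zip(cols, (c.get("VarCharValue") for c in r["Data"]))) for r in data]


print(run_athena_query("SELECT region, count(*) AS n FROM sales GROUP BY 1",
                       database="analytics", output_s3="s3://query-results-bucket/"))
```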
Intellikart Ventures LLP

Posted by Prajwal Shinde
Pune
2 - 5 yrs
₹9L - ₹15L / yr
PowerBI
SQL
ETL
Snowflake
Apache Kafka
+1 more

Experience: 4+ years.

Location: Vadodara & Pune

Skill Set: Snowflake, Power BI, ETL, SQL, Data Pipelines

What you'll be doing:

  • Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
  • Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion (a minimal custom-ingestion sketch follows this list).
  • Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
  • Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
  • Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
  • Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
  • Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
  • Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
  • Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
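
As a hedged sketch of the "custom solution" path for streaming ingestion (Kafka Connect plus Snowpipe would be the more common route), here is a micro-batching loop from Kafka into Snowflake; the topic, table, and credentials are placeholders.

```python
# Placeholders throughout; requires kafka-python and snowflake-connector-python[pandas].
import json
import os

import pandas as pd
import snowflake.connector
from kafka import KafkaConsumer
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account="xy12345", user="INGEST_USER",
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH", database="RAW", schema="EVENTS",
)
consumer = KafkaConsumer(
    "orders.events",  # hypothetical topic
    bootstrap_servers=["broker:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

batch = []
for msg in consumer:
    batch.append(msg.value)
    if len(batch) >= 500:  # flush in micro-batches; the target table must exist
        write_pandas(conn, pd.DataFrame(batch), "ORDERS_RAW")
        batch.clear()
```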


What you need:

Basic Skills:


  • 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
  • Strong experience with Apache Kafka for stream processing and real-time data integration.
  • Proficiency in SQL and ETL/ELT processes.
  • Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
  • Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
  • Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
  • Ability to work in a collaborative team environment and communicate effectively with cross-functional teams


Responsibilities:

  • Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
  • Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
  • Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
  • Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion.
  • Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
  • Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
  • Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
  • Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
  • Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
  • Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform. 


VyTCDC

Posted by Gobinath Sundaram
Hyderabad, Noida, Gurugram
4 - 10 yrs
₹4L - ₹25L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
Data engineering
+3 more

Required Qualifications

Bachelor’s degree or equivalent in Computer Science, Engineering, or related field; or equivalent work experience.

4-10 years of proven experience in Data Engineering

At least 4+ years of experience on AWS Cloud

Strong understanding of data warehousing principles and data modeling

Expert with SQL, including knowledge of advanced query optimization techniques; able to build queries and data visualizations to support business use cases and analytics.

Proven experience on the AWS environment including access governance, infrastructure changes and implementation of CI/CD processes to support automated development and deployment

Proven experience with software tools including PySpark and Python, Power BI, QuickSight, and core AWS tools such as Lambda, RDS, CloudWatch, CloudTrail, SNS, SQS, etc.

Experience building services/APIs on AWS Cloud environment.

Data ingestion and curation as well as implementation of data pipelines.


Preferred Qualifications

Experience in Informatica/ETL technology will be a plus.

Experience with AI/ML Ops – model build through implementation lifecycle in AWS Cloud environment.

Hands-on experience on Snowflake would be good to have.

Experience in DevOps and microservices would be preferred.

Experience in the financial industry is a plus.

top MNC

Agency job
via Vy Systems by thirega thanasekaran
Hyderabad, Chennai
10 - 15 yrs
₹8L - ₹20L / yr
Data engineering
ETL
Data Warehousing
CI/CD
Jenkins
+3 more

Key Responsibilities:

  • Lead Data Engineering Team: Provide leadership and mentorship to junior data engineers and ensure best practices in data architecture and pipeline design.
  • Data Pipeline Development: Design, implement, and maintain end-to-end ETL (Extract, Transform, Load) processes to support analytics, reporting, and data science activities.
  • Cloud Architecture (GCP): Architect and optimize data infrastructure on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance of data systems.
  • CI/CD Pipelines: Implement and maintain CI/CD pipelines using Jenkins and other tools to ensure the seamless deployment and automation of data workflows.
  • Data Warehousing: Design and implement data warehousing solutions, ensuring optimal performance and efficient data storage using technologies like Teradata, Oracle, and SQL Server.
  • Workflow Orchestration: Use Apache Airflow to orchestrate complex data workflows and scheduling of data pipeline jobs.
  • Automation with Terraform: Implement Infrastructure as Code (IaC) using Terraform to provision and manage cloud resources.

Share CV to:




Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three

Cloudesign Technology Solutions
Remote only
5 - 13 yrs
₹15L - ₹28L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+8 more

Job Description: Data Engineer

Location: Remote

Experience Required: 6 to 12 years in Data Engineering

Employment Type: [Full-time]

Notice: Looking for candidates who can join immediately or within 15 days max

 

About the Role:

We are looking for a highly skilled Data Engineer with extensive experience in Python, Databricks, and Azure services. The ideal candidate will have a strong background in building and optimizing ETL processes, managing large-scale data infrastructures, and implementing data transformation and modeling tasks.

 

Key Responsibilities:

ETL Development:

Use Python as an ETL tool to read data from various sources, perform data type transformations, handle errors, implement logging mechanisms, and load data into Databricks-managed Delta tables (a minimal sketch follows below).

Develop robust data pipelines to support analytics and reporting needs.
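
A minimal hedged sketch of the pattern just described (read, cast types, log and quarantine failures, append to Delta tables); the paths and table names are hypothetical.

```python
# Hypothetical paths and tables; intended to run on a Databricks cluster.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders_etl")

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

try:
    raw = spark.read.json("/mnt/raw/orders/")  # extract
    typed = (
        raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("order_ts", F.to_timestamp("order_ts"))
    )
    bad = typed.filter(F.col("amount").isNull())
    n_bad = bad.count()
    if n_bad:
        log.warning("quarantining %d rows that failed type casts", n_bad)
        bad.write.format("delta").mode("append").saveAsTable("quarantine.orders")
    (typed.filter(F.col("amount").isNotNull())
          .write.format("delta").mode("append").saveAsTable("bronze.orders"))  # load
    log.info("load complete")
except Exception:
    log.exception("orders ETL failed")
    raise
```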

Data Transformation & Optimization:

Perform data transformations and evaluations within Databricks.

Work on optimizing data workflows for performance and scalability.

Azure Expertise:

Implement and manage Azure services, including Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.

Coding & Development:

Utilize Python for complex tasks involving classes, objects, methods, dictionaries, loops, packages, wheel files, and database connectivity.

Write scalable and maintainable code to manage streaming and batch data processing.

Cloud & Infrastructure Management:

Leverage Spark, Scala, and cloud-based solutions to design and maintain large-scale data infrastructures.

Work with cloud data warehouses, data lakes, and storage formats.

Project Leadership:

Lead data engineering projects and collaborate with cross-functional teams to deliver solutions on time.


Required Skills & Qualifications:

Technical Proficiency:

  • Expertise in Python for ETL and data pipeline development.
  • Strong experience with Databricks and Apache Spark.
  • Proven skills in handling Azure services, including Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.


Experience & Knowledge:

  • Minimum 6+ years of experience in data engineering.
  • Solid understanding of data modeling, ETL processes, and optimizing data pipelines.
  • Familiarity with Unix shell scripting and scheduling tools.

Other Skills:

  • Knowledge of cloud warehouses and storage formats.
  • Experience in handling large-scale data infrastructures and streaming data.

 

Preferred Qualifications:

  • Proven experience with Spark and Scala for big data processing.
  • Prior experience in leading or mentoring data engineering teams.
  • Hands-on experience with end-to-end project lifecycle in data engineering.

 

What We Offer:

  • Opportunity to work on challenging and impactful data projects.
  • A collaborative and innovative work environment.
  • Competitive compensation and benefits.

 

How to Apply:

https://cloudesign.keka.com/careers/jobdetails/73555

B2B startup web platform

Agency job
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
8 - 15 yrs
₹60L - ₹80L / yr
Artificial Intelligence (AI)
Backend testing
User Interface (UI) Design
Engineering Management
Data engineering

For a startup run by a serial founder:


Job Description :


- Architect and develop our B2B platform, focusing on delivering an exceptional user experience.


- Recruit and mentor an engineering team to build scalable and reliable technology solutions.


- Work in tandem with co-founders to ensure technology strategies are well-aligned with business objectives.


- Manage technical architecture to guarantee performance, scalability, and adaptability for future needs.


- Drive innovation by adopting advanced technologies into our product development cycle.


- Promote a culture of technical excellence and collaborative spirit within the engineering team.


Qualifications :


- Over 8 years of experience in technology team management roles, with a proven track record in software development, system architecture, and DevOps.


- Entrepreneurial mindset with a strong interest in building and scaling innovative products.


- Exceptional leadership abilities with experience in team building and mentorship.


- Strategic thinker with a history of delivering effective technology solutions.


Skills :


- Expertise in modern programming languages and application frameworks.


- Deep knowledge of cloud platforms, databases, and system design principles.


- Excellent analytical, problem-solving, and decision-making skills.


- Strong communication skills with the ability to lead cross-functional teams effectively.

The Modern Data Company
Remote only
5 - 11 yrs
₹35L - ₹55L / yr
GenAI
Artificial Intelligence (AI)
Machine Learning (ML)
Data Analytics
Data engineering

Job Description: Product Manager for GenAI Applications on Data Products

About the Company: We are a forward-thinking technology company specializing in creating innovative data products and AI applications. Our mission is to harness the power of data and AI to drive business growth and efficiency. We are seeking a dynamic and experienced Product Manager to join our team and lead the development of cutting-edge GenAI applications.

Role Overview: As a Product Manager for GenAI Applications, you will be responsible for conceptualizing, developing, and managing AI-driven products that leverage our data platforms. You will work closely with cross-functional teams, including engineering, data science, marketing, and sales, to ensure the successful delivery of high-impact AI solutions. Your understanding of business user needs and ability to translate them into effective AI applications will be crucial.

Key Responsibilities:

- Lead the end-to-end product lifecycle from ideation to launch for GenAI applications.
- Collaborate with engineering and data science teams to design, develop, and deploy AI solutions.
- Conduct market research and gather user feedback to identify opportunities for new product features and improvements.
- Develop detailed product requirements, roadmaps, and user stories to guide development efforts.
- Work with business stakeholders to understand their needs and ensure the AI applications meet their requirements.
- Drive the product vision and strategy, aligning it with company goals and market demands.
- Monitor and analyze product performance, leveraging data to make informed decisions and optimizations.
- Coordinate with marketing and sales teams to create go-to-market strategies and support product launches.
- Foster a culture of innovation and continuous improvement within the product development team.

Qualifications:

- Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field.
- 3-5 years of experience in product management, specifically in building AI applications.
- Proven track record of developing and launching AI-driven products from scratch.
- Experience working with data application layers and understanding data architecture.
- Strong understanding of the psyche of business users and the ability to translate their needs into technical solutions.
- Excellent project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
- Strong analytical and problem-solving skills, with a data-driven approach to decision making.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Passion for AI and a deep understanding of the latest trends and technologies in the field.

Benefits:

- Competitive salary and benefits package.
- Opportunity to work on cutting-edge AI technologies and products.
- Collaborative and innovative work environment.
- Professional development opportunities and career growth.

If you are a passionate Product Manager with a strong background in AI and data products, and you are excited about building transformative AI applications, we would love to hear from you. Apply now to join our dynamic team and make an impact in the world of AI and data.

product base company based at Bangalore location and working

Agency job
Remote only
4 - 9 yrs
₹20L - ₹30L / yr
Data Structures
Large Language Models (LLM) tuning
GPT
Llama2
Mistral
+9 more

We are seeking an experienced Data Scientist with a proven track record in Machine Learning and Deep Learning, and a demonstrated focus on Large Language Models (LLMs), to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that deliver real-world impact to patients and healthcare providers.

Responsibilities

• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering (a minimal fine-tuning sketch follows this list). Experience with other transformer-based NLP models (e.g., BERT) will be an added advantage.

• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning

• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.

• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.

• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)

• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions

• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
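
As a hedged illustration of the fine-tuning bullet above, here is a minimal Hugging Face Trainer sketch for a text classification task. It uses a small encoder (DistilBERT) as a stand-in; tuning a GPT- or Llama-class model would typically layer PEFT/LoRA onto a similar workflow. The model, dataset, and hyperparameters are placeholders.

```python
# Small stand-in model and public dataset; hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # stand-in for a larger LLM backbone
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

ds = load_dataset("imdb")  # placeholder sentiment corpus
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=256), batched=True)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    evaluation_strategy="epoch",
)
Trainer(
    model=model,
    args=args,
    tokenizer=tok,  # enables dynamic padding via the default data collator
    train_dataset=ds["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=ds["test"].select(range(500)),
).train()
```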

Qualifications Required

• Doctoral or master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field

• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models

• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).

• Experience working with cloud-based platforms (AWS, GCP, Azure)

Additional Skills

• Excellent problem-solving and analytical abilities

• Strong communication skills, both written and verbal

• Ability to thrive in a collaborative and fast-paced environment

Gipfel & Schnell Consultings Pvt Ltd

Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
Best in industry
Data engineering
ADF
Data Factory
SQL Azure
Databricks
+4 more

Data Engineer

 

Brief Posting Description:

This person will work independently or with a team of data engineers on cloud technology products, projects, and initiatives. They will work with all customers, both internal and external, to make sure all data-related features are implemented in each solution, and will collaborate with business partners and other technical teams across the organization as required to deliver proposed solutions.

 

Detailed Description:

·        Works with Scrum masters, product owners, and others to identify new features for digital products.

·        Works with IT leadership and business partners to design features for the cloud data platform.

·        Troubleshoots production issues of all levels and severities, and tracks progress from identification through resolution.

·        Maintains culture of open communication, collaboration, mutual respect and productive behaviors; participates in the hiring, training, and retention of top tier talent and mentors team members to new and fulfilling career experiences.

·        Identifies risks, barriers, efficiencies and opportunities when thinking through development approach; presents possible platform-wide architectural solutions based on facts, data, and best practices.

·        Explores all technical options when considering solution, including homegrown coding, third-party sub-systems, enterprise platforms, and existing technology components.

·        Actively participates in collaborative effort through all phases of software development life cycle (SDLC), including requirements analysis, technical design, coding, testing, release, and customer technical support.

·        Develops technical documentation, such as system context diagrams, design documents, release procedures, and other pertinent artifacts.

·        Understands lifecycle of various technology sub-systems that comprise the enterprise data platform (i.e., version, release, roadmap), including current capabilities, compatibilities, limitations, and dependencies; understands and advises of optimal upgrade paths.

·        Establishes relationships with key IT, QA, and other corporate partners, and regularly communicates and collaborates accordingly while working on cross-functional projects or production issues.


Job Requirements:

 

EXPERIENCE:

2 years required; 3 - 5 years preferred experience in a data engineering role.

2 years required, 3 - 5 years preferred experience in Azure data services (Data Factory, Databricks, ADLS, Synapse, SQL DB, etc.)

 

EDUCATION:

Bachelor’s degree in information technology, computer science, or a data-related field preferred

 

SKILLS/REQUIREMENTS:

Expertise working with databases and SQL.

Strong working knowledge of Azure Data Factory and Databricks

Strong working knowledge of code management and continuous integration systems (Azure DevOps or GitHub preferred)

Strong working knowledge of cloud relational databases (Azure Synapse and Azure SQL preferred)

Familiarity with Agile delivery methodologies

Familiarity with NoSQL databases (such as CosmosDB) preferred.

Any experience with Python, DAX, Azure Logic Apps, Azure Functions, IoT technologies, Power BI, Power Apps, SSIS, Informatica, Teradata, Oracle DB, and Snowflake preferred but not required.

Ability to multi-task and reprioritize in a dynamic environment.

Outstanding written and verbal communication skills
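
As a rough sketch of the Data Factory/Databricks work named above, assuming the code runs on a Databricks cluster where Spark and Delta Lake are preconfigured; the ADLS path and table name are placeholders:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

    # Read raw files landed in ADLS Gen2 (placeholder container/path).
    raw = spark.read.csv(
        "abfss://landing@examplestore.dfs.core.windows.net/sales/",
        header=True,
        inferSchema=True,
    )

    # Light curation: de-duplicate and stamp the load date.
    cleaned = raw.dropDuplicates(["order_id"]).withColumn("load_date", F.current_date())

    # Delta is the default table format on Databricks.
    cleaned.write.format("delta").mode("append").saveAsTable("bronze.sales")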

 

Working Environment:

General Office – Work is generally performed within an office environment, with standard office equipment. Lighting and temperature are adequate and there are no hazardous or unpleasant conditions caused by noise, dust, etc. 

 

Physical requirements:

Work is generally sedentary in nature but may require standing and walking for up to 10% of the time. 

 

Mental requirements:

Employee is required to organize and coordinate schedules.

Employee is required to analyze and interpret complex data.

Employee is required to problem-solve.

Employee is required to communicate with the public.

RAPTORX.AI
Posted by Pratyusha Vemuri
Hyderabad
5 - 7 yrs
₹10L - ₹25L / yr
NodeJS (Node.js)
React.js
Data Visualization
Graph Databases
Neo4J
+2 more

Role Overview

We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies. 

This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.

Responsibilities

  • Lead the development of fintech solutions, focusing on fraud prevention and AML, using TypeScript, React.js, Python, and SQL databases.
  • Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
  • Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
  • Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
  • Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
  • Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.

Requirements

  • 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
  • Expertise in TypeScript and React.js, and familiarity with Python.
  • Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
  • Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
  • Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
  • A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.
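
Given the graph-database skills tagged on this role (Neo4j), here is a minimal sketch of a fraud-style graph query, assuming the neo4j Python driver; the URI, credentials, and the shared-device pattern are illustrative:

    from neo4j import GraphDatabase

    # Placeholder connection details.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Accounts sharing a device is a common fraud/AML signal worth surfacing.
    query = """
        MATCH (a1:Account)-[:USED]->(d:Device)<-[:USED]-(a2:Account)
        WHERE a1 <> a2
        RETURN a1.id, a2.id, d.id
        LIMIT 25
    """
    with driver.session() as session:
        for record in session.run(query):
            print(record["a1.id"], record["a2.id"], record["d.id"])
    driver.close()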

Why Join Us?

  • Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
  • Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
  • Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
  • Work in an environment that values innovation, leadership, and the long-term success of its employees.


Read more
Wissen Technology
Mumbai
5 - 10 yrs
₹5L - ₹15L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
Data engineering

Skills required


-5+ years of software development experience.

-Excellent skills in Java and/or Scala programming, with expertise in backend architectures, messaging technologies, and related frameworks.

-Developing data pipelines (batch/streaming), complex data transformations, ETL orchestration, and data migration; developing and maintaining data warehouses / data lakes.

-Extensive experience in complex SQL queries, database development, and data engineering, including the development of procedures, packages, functions, and handling exceptions.

-Knowledgeable in issue tracking tools (e.g., JIRA), code collaboration tools (e.g., Git/GitLab), and team collaboration tools (e.g., Confluence/Wiki).

-Proficient in Linux/Unix, including shell scripting.

-Ability to translate business and architectural features into quality, consistent software design.

-Solid understanding of programming practices, emphasizing reusable, flexible, and reliable code.

Wissen Technology
Posted by Lokesh Manikappa
Mumbai
4 - 9 yrs
₹15L - ₹32L / yr
Java
ETL
SQL
Data engineering
Scala

Java/Scala + Data Engineer

 

Experience: 5-10 years

Location: Mumbai

Notice: Immediate to 30 days

Required Skills:

·       5+ years of software development experience.

·       Excellent skills in Java and/or Scala programming, with expertise in backend architectures, messaging technologies, and related frameworks.

·       Developing data pipelines (batch/streaming), complex data transformations, ETL orchestration, and data migration; developing and maintaining data warehouses / data lakes.

·       Extensive experience in complex SQL queries, database development, and data engineering, including the development of procedures, packages, functions, and handling exceptions.

·       Knowledgeable in issue tracking tools (e.g., JIRA), code collaboration tools (e.g., Git/GitLab), and team collaboration tools (e.g., Confluence/Wiki).

·       Proficient in Linux/Unix, including shell scripting.

·       Ability to translate business and architectural features into quality, consistent software design.

·       Solid understanding of programming practices, emphasizing reusable, flexible, and reliable code.

Career Forge
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, Pyspark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time-series data (see the sketch after this list).

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
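
To make the time-series ETL above concrete, a minimal sketch assuming the Dask/PyData stack this listing names; the Parquet path and column names are placeholders:

    import dask.dataframe as dd

    telemetry = dd.read_parquet("data/telemetry/*.parquet")  # placeholder path

    # Index by timestamp, downsample each signal to 1-minute means;
    # compute() triggers the parallel execution.
    series = telemetry.assign(ts=dd.to_datetime(telemetry["ts"])).set_index("ts")["value"]
    per_minute = series.resample("1min").mean().compute()
    print(per_minute.head())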


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/NumPy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Remote only
2 - 5 yrs
₹10L - ₹20L / yr
Data engineering

A candidate with 2 to 6 years of relevant experience in the Snowflake Data Cloud.

Mandatory Skills:

• Excellent knowledge of the Snowflake Data Cloud.

• Excellent knowledge of SQL

• Good knowledge of ETL

• Must have working knowledge of Azure Data Factory

• Must have general awareness of Azure Cloud

• Must be aware of optimization techniques in data retrieval and data loading (a loading sketch follows this list).
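
For illustration, a minimal loading sketch assuming the snowflake-connector-python package; the connection parameters, stage, and table names are placeholders:

    import snowflake.connector

    conn = snowflake.connector.connect(
        user="ETL_USER",            # placeholder service account
        password="...",             # fetch from a secrets manager in practice
        account="myorg-myaccount",  # placeholder account identifier
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        # COPY INTO is Snowflake's standard bulk-load path from a stage.
        cur.execute(
            "COPY INTO staging.orders FROM @orders_stage "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
        print(cur.fetchall())  # per-file load results
    finally:
        conn.close()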

hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
skill iconPython
Amazon Redshift
skill iconAmazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work in a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with DevOps and senior architects to come up with scalable system and model architectures for enabling real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed with the concepts of data warehousing, data modelling, and/or data analysis.

➢ Experience using & building pipelines and performing ETL with industry-standard best practices on Redshift (more than 2+ years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

➢ Strong Python and SQL coding skills.

➢ Strong experience in distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques like CDC or time/batch-based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction (an orchestration sketch follows this list).
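
As a rough orchestration sketch for the Airflow/Redshift items above, assuming Airflow 2.4+ and a hypothetical load_orders() step; the DAG, table, and bucket names are placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_orders():
        # In practice: run a Redshift COPY from S3 (e.g. via redshift_connector),
        # or apply a CDC batch produced by Debezium / AWS DMS.
        print("COPY orders FROM 's3://example-bucket/orders/' FORMAT AS PARQUET ...")

    with DAG(
        dag_id="orders_etl",               # placeholder pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",                # the cadence is an assumption
        catchup=False,
    ) as dag:
        PythonOperator(task_id="load_orders", python_callable=load_orders)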


Note:

Experience at product-based or e-commerce companies is an added advantage.

AI Domain US Based Product Based Company

Agency job
via New Era India by Asha P
Bengaluru (Bangalore)
3 - 10 yrs
₹30L - ₹50L / yr
Data engineering
Data modeling
Python

Requirements:

  • 2+ years of experience (4+ for Senior Data Engineer) with system/data integration, development, or implementation of enterprise and/or cloud software. Engineering degree in Computer Science, Engineering, or a related field.
  • Extensive hands-on experience with data integration/EAI technologies (File, API, Queues, Streams), ETL Tools and building custom data pipelines.
  • Demonstrated proficiency with Python, JavaScript and/or Java
  • Familiarity with version control/SCM is a must (experience with git is a plus).
  • Experience with relational and NoSQL databases (any vendor). Solid understanding of cloud computing concepts.
  • Strong organisational and troubleshooting skills with attention to detail.
  • Strong analytical ability, judgment, and problem-solving techniques. Interpersonal and communication skills with the ability to work effectively in a cross-functional team.


SimpliFin
Bengaluru (Bangalore)
6 - 14 yrs
₹20L - ₹50L / yr
SaaS
Engineering Management
Artificial Intelligence (AI)
Data engineering
Financial services

We are looking for a passionate technologist with experience building SaaS products, for a once-in-a-lifetime opportunity to lead Engineering for an AI-powered Financial Operations platform that seamlessly monitors, optimizes, reconciles, and forecasts cashflow.


Background


An incredibly rare opportunity for a VP Engineering to join a top-tier VC-incubated SaaS startup and an outstanding management team. The product is currently in the build stage with a solid design-partner pipeline of ~$250K, and the company will soon raise a pre-seed/seed round with marquee investors.


Responsibilities


  • Develop and implement the company's technical strategy and roadmap, ensuring that it aligns with the overall business objectives and is scalable, reliable, and secure.


  • Manage and optimize the company's technical resources, including staffing, software, hardware, and infrastructure, to ensure that they are being used effectively and efficiently.


  • Work with the founding team and other executives to identify opportunities for innovation and new technology solutions, and evaluate the feasibility and impact of these solutions on the business.


  • Lead the engineering function in developing and deploying high-quality software products and solutions, ensuring that they meet or exceed customer requirements and industry standards.


  • Analyze and evaluate technical data and metrics, identifying areas for improvement and implementing changes to drive efficiency and effectiveness.


  • Ensure that the company is in compliance with all legal and regulatory requirements, including data privacy and security regulations.


Eligibility criteria:


  • 6+ years of experience in developing scalable SaaS products.


  • Strong technical background with 6+ years of experience with a strong focus on SaaS, AI, and finance software.


  • Prior experience in leadership roles.


  • Entrepreneurial mindset, with a strong desire to innovate and grow a startup from the ground up.


Perks:


  • Vested Equity.


  • Ownership in the company.


  • Build alongside passionate and smart individuals.


codersbrain
Posted by Tanuj Uppal
Bengaluru (Bangalore)
10 - 18 yrs
Best in industry
Flink
Apache Flink
Java
Data engineering

1. Flink Sr. Developer


Location: Bangalore(WFO)


Mandatory Skills & Exp -10+ Years : Must have Hands on Experience on FLINK, Kubernetes , Docker, Microservices, any one of Kafka/Pulsar, CI/CD and Java.


Job Responsibilities:


As the Data Engineer lead, you are expected to engineer, develop, support, and deliver real-time streaming applications that model real-world network entities, and have a good understanding of the Telecom Network KPIs to improve the customer experience through automation of operational network data. Real-time application development will include building stateful in-memory backends and real-time streaming APIs, leveraging real-time databases such as Apache Druid.


• Architecting and creating the streaming data pipelines that will enrich the data and support the use cases for telecom networks (a minimal sketch follows this list).

• Collaborating closely with multiple stakeholders, gathering requirements and seeking iterative feedback on recently delivered application features.

• Participating in peer review sessions to provide teammates with code review as well as architectural and design feedback.

• Composing detailed low-level design documentation, call flows, and architecture diagrams for the solutions you build.

• Running to a crisis anytime the Operations team needs help.

• Performing duties with minimum supervision and participating in cross-functional projects as scheduled.
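
For a flavor of this kind of streaming work, a minimal PyFlink DataStream sketch assuming the apache-flink package; the readings and threshold are illustrative, and a production job would read from Kafka/Pulsar and use keyed, stateful windows:

    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()

    # Stand-in for a Kafka/Pulsar source: (cell_id, latency_ms) readings.
    readings = env.from_collection([("cell-1", 42), ("cell-2", 105), ("cell-1", 37)])

    # Flag high-latency cells as a toy network-KPI check.
    readings.filter(lambda r: r[1] > 100) \
            .map(lambda r: "high latency on {}: {} ms".format(r[0], r[1])) \
            .print()

    env.execute("kpi_monitor_sketch")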


Skills:


• A Flink Sr. Developer who has implemented and dealt with failure scenarios of processing data through Flink.

• Experience with Java, K8s, Argo CD/Workflow, Prometheus, and Aether.

• Familiarity with object-oriented design patterns.

• Experience with application development DevOps tools.

• Experience with distributed cloud-native application design deployed on Kubernetes platforms.

• Experience with PostgreSQL, Druid, and Oracle databases.

• Experience with a messaging bus: Kafka/Pulsar.

• Experience with AI/ML: Kubeflow, JupyterHub.

• Experience building real-time applications which leverage streaming data.

• Experience with streaming message bus platforms, either Kafka or Pulsar.

• Experience with Apache Spark applications and Hadoop platforms.

• Strong problem-solving skills.

• Strong written and oral communication skills.

Mobile Programming LLC
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Data engineering
NiFi
DevOps
ETL

Job description

Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore
Please note: This position is focused on development rather than migration. Experience in NiFi or Tibco is mandatory.
Mandatory Skills: ETL, DevOps platform, NiFi or Tibco

We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.

 

Responsibilities:

• Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
• Develop and maintain data-oriented scripting using languages such as Python.
• Create and manage data structures to ensure efficient and accurate data storage and retrieval.
• Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
• Utilize ETL tools such as NiFi and Tibco to extract, transform, and load data into various systems.
• Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
• Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
• Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
• Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.

 

Requirements:

• A minimum of 6 years of relevant experience as a Data Engineer.
• Proficiency in ETL, SQL, and other advanced data engineering techniques.
• Strong programming skills in scripting languages such as Python.
• Experience in creating and maintaining data structures for efficient data storage and retrieval.
• Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
• Hands-on experience with ETL tools, particularly NiFi and Tibco.
• In-depth knowledge of database structures, including MSSQL and Vertica.
• Proven experience in managing and operating data platforms.
• Strong problem-solving and analytical skills with the ability to handle complex data challenges.
• Excellent communication and collaboration skills to work effectively in a team environment.
• Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.

TensorGo Software Private Limited
Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Python
PySpark
Apache Airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.

● Working with our engineering team to support custom solutions offered to the product development.

● Filling the gap between development, engineering and data ops.

● Creating, maintaining and documenting scripts to support ongoing custom solutions.

● Excellent organizational skills, including attention to precise details

● Strong multitasking skills and ability to work in a fast-paced environment

● 5+ years of experience with Python to develop scripts.

● Know your way around RESTful APIs (able to integrate; not necessarily publish).

● You are familiar with pulling and pushing files from SFTP and AWS S3.

● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.

● Familiarity with SQL programming to query and transform data from relational Databases.

● Familiarity to work with Linux (and Linux work environment).

● Excellent written and verbal communication skills

● Extracting, transforming, and loading data into internal databases and Hadoop

● Optimizing our new and existing data pipelines for speed and reliability

● Deploying product build and product improvements

● Documenting and managing multiple repositories of code

● Experience with SQL and NoSQL databases (Cassandra, MySQL)

● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena)

● Hands-on experience in Airflow

● Understanding of best practices and common coding patterns around storing, partitioning, warehousing, and indexing of data

● Experience in reading data from Kafka topics, both live stream and offline (see the sketch after this list)

● Experience in PySpark and DataFrames
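
To make the Kafka item above concrete, a minimal PySpark Structured Streaming sketch assuming Spark is packaged with the spark-sql-kafka connector; the broker and topic names are placeholders (for the offline case, spark.read with the same options works as a batch job):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka_reader_sketch").getOrCreate()

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
              .option("subscribe", "events")                     # placeholder topic
              .load())

    # Kafka delivers bytes; cast the payload before transforming it further.
    query = (events.selectExpr("CAST(value AS STRING) AS payload")
             .writeStream.format("console")
             .start())
    query.awaitTermination()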

Responsibilities:

● Collaborating across an agile team to continuously design, iterate, and develop big data systems.

● Extracting, transforming, and loading data into internal databases.

● Optimizing our new and existing data pipelines for speed and reliability.

● Deploying new products and product improvements.

● Documenting and managing multiple repositories of code.

Mactores Cognition Private Limited
Remote only
2 - 15 yrs
₹6L - ₹40L / yr
skill iconAmazon Web Services (AWS)
PySpark
athena
Data engineering

As an AWS Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and the flexibility of your solution that scales to respond to broader business questions.


If you love to solve problems using your skills, then come join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What you will do?

  • Write efficient code in PySpark and AWS Glue (a sketch follows this list)
  • Write SQL queries in Amazon Athena and Amazon Redshift
  • Explore new technologies and learn new techniques to solve business problems creatively
  • Collaborate with many teams - engineering and business, to build better data products and services 
  • Deliver the projects along with the team collaboratively and manage updates to customers on time
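
As a rough sketch of the PySpark work named above, assuming plain Spark (the same logic would run inside an AWS Glue job); the S3 paths are placeholders, and writing partitioned Parquet is what makes the output queryable from Amazon Athena:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_daily_sketch").getOrCreate()

    orders = spark.read.json("s3://example-bucket/raw/orders/")  # placeholder path

    # Daily revenue rollup (hypothetical columns).
    daily = (orders.groupBy(F.to_date("created_at").alias("order_date"))
             .agg(F.sum("amount").alias("revenue")))

    (daily.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-bucket/curated/orders_daily/"))  # placeholder path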


What are we looking for?

  • 1 to 3 years of experience in Apache Spark, PySpark, and AWS Glue
  • 2+ years of experience in writing ETL jobs using pySpark, and SparkSQL
  • 2+ years of experience in SQL queries and stored procedures
  • Have a deep understanding of the DataFrame API and all the transformation functions supported by Spark 2.x+


You will be preferred if you have

  • Prior experience in working on AWS EMR, Apache Airflow
  • Certifications: AWS Certified Big Data – Specialty, Cloudera Certified Big Data Engineer, or Hortonworks Certified Big Data Engineer
  • Understanding of DataOps Engineering


Life at Mactores


We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.


1. Be one step ahead

2. Deliver the best

3. Be bold

4. Pay attention to the detail

5. Enjoy the challenge

6. Be curious and take action

7. Take leadership

8. Own it

9. Deliver value

10. Be collaborative


We would like you to read more details about the work culture on https://mactores.com/careers 


The Path to Joining the Mactores Team

At Mactores, our recruitment process is structured around three distinct stages:


Pre-Employment Assessment: 

You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.


Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.


HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.


At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.

persistent

Agency job
via Bohiyaanam Talent Solutions LLP by TrishaDutt Tekgminus
Pune, Mumbai, Bengaluru (Bangalore), Indore, Kolkata
6 - 7 yrs
₹12L - ₹18L / yr
MuleSoft
ETL QA
Automation
Data engineering

I am looking for a MuleSoft Developer for a reputed MNC.

 

Experience: 6+ Years

Relevant experience: 4 Years

Location: Pune, Mumbai, Bangalore, Indore, Kolkata

 

Skills:

MuleSoft


Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions, and build strategy and roadmap.
• Possesses wide exposure to the complete lifecycle of data, from creation to consumption.
• Has in the past built repeatable tools / data models to solve specific business problems.
• Should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or be part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making.
• Should have a perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology.
• Must have a solution-creation mindset, with the ability to design and enhance scalable data platforms to address business needs.
• Working experience with data engineering tools on one or more cloud platforms - Snowflake, AWS/Azure/GCP.
• Engages with technology teams from Tredence and clients to create last-mile connectivity of the solutions:
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership - Articles/White Papers/Interviews.

Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Tredence
Bengaluru (Bangalore), Pune, Gurugram, Chennai
8 - 12 yrs
₹12L - ₹30L / yr
Snow flake schema
Snowflake
SQL
Data modeling
Data engineering
+1 more

JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:

• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view. Develop and maintain documentation of the data architecture, data flow and data models of the data warehouse appropriate for various audiences. Provide direction on adoption of Cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.

• Providing technical leadership to large enterprise-scale projects. You will also be responsible for preparing estimates and defining technical solutions to proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project.

ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have 5+ years total in IT, with 2+ years' experience working as a Snowflake Data Architect and 4+ years in data warehouse, ETL, and BI projects.
• Must have experience with at least two end-to-end implementations of the Snowflake cloud data warehouse, and three end-to-end on-premise data warehouse implementations, preferably on Oracle.

• Expertise in Snowflake – data modelling, ELT using Snowflake SQL, implementing complex stored Procedures and standard DWH and ETL concepts
• Expertise in Snowflake advanced concepts like setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding of how to use these features (a sketch follows this list)
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, Big Data model techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PAAS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience of Agile development methodologies
• Strong written communication skills; effective and persuasive in both written and oral communication
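
For illustration, a minimal sketch of the zero-copy clone and time-travel features listed above, assuming the snowflake-connector-python package; connection parameters and object names are placeholders:

    import snowflake.connector

    conn = snowflake.connector.connect(
        user="ARCHITECT", password="...", account="myorg-myaccount",  # placeholders
        warehouse="DEV_WH", database="ANALYTICS", schema="PUBLIC",
    )
    cur = conn.cursor()

    # Zero-copy clone: an instant, storage-free copy for dev/test work.
    cur.execute("CREATE OR REPLACE TABLE orders_dev CLONE orders")

    # Time travel: query the table as it stood one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print(cur.fetchone())

    conn.close()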

Nice-to-have Skills/Qualifications:
• Bachelor's and/or master’s degree in computer science, or equivalent experience.
• Strong communication, analytical and problem-solving skills with a high attention to detail.

 

About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay ahead with all the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality

Miracle Software Systems, Inc
Posted by Ratnakumari Modhalavalasa
Visakhapatnam
3 - 5 yrs
₹2L - ₹4L / yr
Hadoop
Apache Sqoop
Apache Hive
Apache Spark
Apache Pig
+9 more
Position: Data Engineer

Duration: Full Time

Location: Visakhapatnam, Bangalore, Chennai

Years of experience: 3+ years

Job Description :

- 3+ Years of working as a Data Engineer with thorough understanding of data frameworks that collect, manage, transform and store data that can derive business insights.

- Strong communications (written and verbal) along with being a good team player.

- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)

- 2+ years of strong experience with SQL and Python (Data Engineering focused).

- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is an added advantage and preferred.

- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
Getinz
Posted by kousalya k
Remote only
4 - 8 yrs
₹10L - ₹15L / yr
Penetration testing
Python
Powershell
Bash
Spark
+5 more
-3+ years of Red Team experience
-5+ years of hands-on experience with penetration testing would be an added plus
-Strong knowledge of programming or scripting languages, such as Python, PowerShell, Bash
-Industry certifications like OSCP and AWS are highly desired for this role
-Well-rounded knowledge of security tools, software and processes
Astegic
Posted by Nikita Pasricha
Remote only
5 - 7 yrs
₹8L - ₹15L / yr
Data engineering
SQL
Relational Database (RDBMS)
Big Data
Scala
+14 more

WHAT YOU WILL DO:

● Create and maintain optimal data pipeline architecture.

● Assemble large, complex data sets that meet functional / non-functional business requirements.

● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

● Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop, and AWS 'big data' technologies (EC2, EMR, S3, Athena).

● Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

● Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

● Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

● Work with data and analytics experts to strive for greater functionality in our data systems.

REQUIRED SKILLS & QUALIFICATIONS:

● 5+ years of experience in a Data Engineer role.

● Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.

● Experience building and optimizing 'big data' data pipelines, architectures, and data sets.

● Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

● Strong analytic skills related to working with unstructured datasets.

● Build processes supporting data transformation, data structures, metadata, dependency, and workload management.

● A successful history of manipulating, processing, and extracting value from large disconnected datasets.

● Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.

● Strong project management and organizational skills.

● Experience supporting and working with cross-functional teams in a dynamic environment.

● Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.

● Experience with AWS cloud services: EC2, EMR, S3, Athena.

● Experience with Linux.

● Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.

PREFERRED SKILLS & QUALIFICATIONS:

● Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

Network Science
Posted by Leena Shirsale
Mumbai, Navi Mumbai
5 - 8 yrs
₹20L - ₹25L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Data Science
+4 more
  • Collaborate with the business teams to understand the data environment in the organization; develop and lead the Data Scientists team to test and scale new algorithms through pilots and subsequent scaling up of the solutions
  • Influence, build and maintain the large-scale data infrastructure required for the AI projects, and integrate with external IT infrastructure/service
  • Act as the single point of contact for all data-related queries; strong understanding of internal and external data sources; provide inputs in deciding data schemas
  • Design, develop and maintain the framework for the analytics solutions pipeline
  • Provide inputs to the organization’s initiatives on data quality and help implement frameworks and tools for the various related initiatives
  • Work in cross-functional teams of software/machine learning engineers, data scientists, product managers, and others to build the AI ecosystem
  • Collaborate with the external organizations including vendors, where required, in respect of all data-related queries as well as implementation initiatives
Myrsa Technology Solutions
Posted by Dipali G
Remote, Thane
3 - 5 yrs
₹3L - ₹5L / yr
Analytical Skills
Data engineering
Product Management
Search Engine Optimization (SEO)

This role is focused on Growth initiatives at cure.fit. As a Growth PM, you will be responsible for driving growth in users and revenue via a data-driven, experiment-based, systematic approach, and for executing key growth initiatives to achieve growth targets.

Key responsibilities:

- Identify levers and opportunities using which we can have more users, higher sales, and better engagement
- Build and deploy experiments to capitalise on these opportunities
- Identify what works and what doesn't, and scale experiments up/down accordingly
- Develop a deep understanding of customers and engagement loops
- Productify key learnings from the experiments
- Participate in tactical sales or marketing initiatives
- Over time, build a growth machinery and a systematised approach to driving growth
- Strengthen capabilities in key areas such as SEO/ASO/Content/Referral

This is a foundational role on the team and you will have a one-of-a-kind opportunity to develop a deep understanding of the core business/engagement loops and have a high degree of impact. PMs on the team enjoy a high degree of responsibility and ownership. You will work closely with Sales, Marketing, and Technology teams to conceptualise, plan, execute, and productize growth initiatives.

 

Looking for:

Understanding of consumer behaviour and psychology

We are looking for PMs/Engineers/Marketers with 3+ years of PM/Software Engineering experience in building consumer software products. You are very customer focused and have a strong understanding of the customer behaviour and psychology to come up with ideas and hypothesis to drive growth. You care not just about the technology but also the psychology that makes great products.

 

Impact orientation, curiosity, and hacker-mindset

You are impact-oriented and are constantly looking to achieve scale and leverage by leveraging data and consumer insights. You are highly curious and eager to learn/develop intelligence - you are never satisfied by just knowing “what happened” but are keen to understand “why it happened”. You have a hacker mindset and a sense of hustle which you use for prioritizing and bringing ideas to life.

 

Strong analytical and scrappy technical skills

You have 3+ years of experience as a PM or as a Software Engineer focused on building consumer products, especially on mobile devices (apps, progressive web). You come with analytical and data operations skills (ETL) and you are largely self-sufficient in pulling and analyzing data and deriving sound conclusions.

 

Skill set:

  • Prior experience of 3+ years in building consumer products
  • Strong understanding of hypothesis-driven, data-backed experimentation
  • Strong analytical and data engineering skills
Opportunity with Largest Conglomerate

Agency job
via Seven N Half by Shreeja Shetty
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Kochi (Cochin)
15 - 20 yrs
₹30L - ₹60L / yr
Enterprise architecture
Data architecture
Data engineering
Technical Architecture

Digital and Technology Architecture

  • Head the digital transformation and innovation projects from idea generation to new ventures in close collaboration with business stakeholders and external partners
  • Present the vision & value of proposed architectures and solutions to a wide range of audiences in alignment with business priorities and objectives
  • Plan tasks and estimates for the required research and volume of activities to complete work
  • Own and assess non-functional requirements and propose solutions for Availability, Backup, Capacity, Performance, Redundancy, Reliability, Scalability, Supportability, Risks and Costs models
  • Provide strategic guidance to teams on managing third-party service providers in terms of service levels, costing, etc.
  • Drive the team to ensure appropriate documentation is developed in support of value realization
  • Lead the technical team and head collaboration between Business Users and software providers to build digital solutions

Enterprise Architecture

  • Ownership of overall Enterprise Architecture including compliances and standards
  • Head the overall Architecture blueprint and roadmap for applications, aligning with Enterprise Architecture
  • Identify important potential technologies and approaches to address current and future Enterprise needs, evaluating their applicability and fit, as well as leading the definition of standards and best practice for their use

Data Architecture

  • Ownership of overall Data Architecture including compliances and standards
  • Head the overall Architecture blueprint and roadmap for applications, aligning with Data Architecture
  • Identify important potential technologies and approaches to address current and future Data needs, evaluating their applicability and fit, as well as leading the definition of standards and best practice for their use
Celebal Technologies
Posted by Payal Hasnani
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from requirements gathering to GO LIVE
o Lead internal team meetings on solution architecture, effort estimation, manpower planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans

• Team Management
o Act as the Scrum Master
o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and SCRUM principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for Skill Development for all team members

• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)

• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT, and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, Power BI, Machine Learning, and Cloud Infrastructure
• Background in BFSI with a focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science
Multinational Company

Agency job
via Telamon HR Solutions by Praveena Sagar
Remote only
5 - 15 yrs
₹27L - ₹30L / yr
Data engineering
Google Cloud Platform (GCP)
Python

• The incumbent should have hands-on experience in data engineering and GCP data technologies.

• Should work with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.

• Should work with Agile and DevOps techniques and implementation approaches in the delivery.

• Showcase your GCP data engineering experience when communicating with clients on their requirements, turning these into technical data solutions.

• Build and deliver data solutions using GCP products and offerings (see the sketch below).

• Have hands-on experience with Python.

• Experience with SQL or MySQL; experience with Looker is an added advantage.
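
To make the GCP item above concrete, a minimal sketch assuming the google-cloud-bigquery client with credentials taken from the environment; the project, dataset, and columns are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()  # project/credentials picked up from the environment

    query = """
        SELECT user_id, COUNT(*) AS sessions   -- hypothetical table and columns
        FROM `example-project.analytics.events`
        GROUP BY user_id
        ORDER BY sessions DESC
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row.user_id, row.sessions)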

DFCS Technologies

Agency job
via dfcs Technologies by SheikDawood Ali
Remote, Chennai, Anywhere India
1 - 5 yrs
₹9L - ₹14L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
    • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.