Snowflake schema Jobs in Delhi, NCR and Gurgaon


Apply to 7+ Snowflake schema jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Snowflake schema job opportunities across top companies like Google, Amazon & Adobe.

Staffnixcom
Posted by Mayank Choudhary
Mumbai, Delhi, Gurugram, Bengaluru (Bangalore), Hyderabad
8 - 12 yrs
₹30L - ₹40L / yr
Snowflake schema

Strong Snowflake Data Architect profile (Cloud Data Platform / AI-led Data Transformation)

Mandatory (Experience 1) – Must have 8+ years of experience in Data Engineering / Data Architecture, with strong focus on building enterprise-scale data platforms

Mandatory (Experience 2) – Must have 3+ years of deep hands-on experience in Snowflake architecture, including designing and implementing scalable data warehouse solutions

Mandatory (Experience 3) – Strong expertise in Snowflake features including Resource Monitors, RBAC, Virtual Warehouses, Time Travel, Zero Copy Clone, and query performance optimization

Mandatory (Experience 4) – Proven experience building and managing data ingestion pipelines using Snowpipe, handling structured, semi-structured (JSON, XML), and columnar data formats (Parquet)

Mandatory (Experience 5) – Strong experience in the cloud ecosystem, preferably AWS, including S3, Lambda, EC2, Redshift, and integration with Snowflake-based architectures

Mandatory (Experience 6) – Proven experience in migrating data from on-premise or legacy systems to Snowflake, including data modeling, transformation, and validation

Mandatory (Experience 7) – Hands-on experience in SQL, SnowSQL, Python, or PySpark for data transformation, automation, and monitoring

Mandatory (Experience 8) – Experience in data modeling, partitioning, micro-partitions, and re-clustering strategies in Snowflake

Mandatory (Experience 9) – Must have experience working in client-facing or consulting roles, including requirement gathering, solution design, and stakeholder communication

Mandatory (Skill 1) – Strong understanding of end-to-end data architecture including ETL/ELT pipelines, data lakes, and warehouse integration

Mandatory (Skill 2) – Experience in designing monitoring and automation frameworks using Python, Bash, or similar tools

Mandatory (Skill 3) – Ability to translate business requirements into scalable technical solutions and define future-state data architecture roadmaps

Mandatory (Note) – Only immediate joiners or candidates who can join within 15 days will be considered
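The Time Travel and Zero Copy Clone features in the requirements above reduce to one-line SQL statements. A minimal sketch of statement builders, assuming an invented ORDERS table and offsets chosen purely for illustration:

```python
def time_travel_select(table: str, seconds_ago: int) -> str:
    """Query a table as it existed N seconds ago (Snowflake Time Travel)."""
    return f"SELECT * FROM {table} AT(OFFSET => -{seconds_ago})"

def zero_copy_clone(source: str, target: str) -> str:
    """Clone a table without duplicating storage (Zero Copy Clone)."""
    return f"CREATE TABLE {target} CLONE {source}"

# Example: recreate yesterday's state of a (hypothetical) ORDERS table
# by combining a clone with a Time Travel offset of 86400 seconds.
restore_sql = zero_copy_clone("ORDERS AT(OFFSET => -86400)", "ORDERS_RESTORED")
```

Because a clone shares micro-partitions with its source, statements like these are cheap enough to run for every environment refresh or point-in-time recovery.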

Remote, Noida, Gurugram, Pune, Nagpur, Jaipur, Gandhinagar
8 - 14 yrs
₹12L - ₹18L / yr
Python
SQL
PySpark
Databricks
Snowflake schema
+6 more

Senior Data Engineer (Databricks, BigQuery, Snowflake)

Experience: 8+ Years in Data Engineering

Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)

Budget: Open / Competitive


Job Summary:

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.

In this role, we expect a forward-thinking professional who utilizes AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.


Key Responsibilities:

  • Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark.
  • ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
  • ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
  • Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
  • Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
  • Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
  • Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
  • Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
  • AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.
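As a rough illustration of the ETL/ELT responsibilities above, here is a stdlib-only extract–transform–load sketch. The JSON-lines source and the amount fields are invented for the example; a real pipeline would read from actual sources and write to a warehouse:

```python
import json
from typing import Iterable

def extract(raw_lines: Iterable[str]) -> list[dict]:
    """Extract: parse raw JSON-lines records (stand-in for a real source)."""
    return [json.loads(line) for line in raw_lines]

def transform(records: list[dict]) -> list[dict]:
    """Transform: drop incomplete rows and derive an analytics-ready field."""
    out = []
    for r in records:
        if r.get("amount") is None:
            continue  # reject rows missing the required measure
        out.append({**r, "amount_usd": round(r["amount"] / 100, 2)})
    return out

def load(records: list[dict], sink: list) -> int:
    """Load: append to a warehouse stand-in; returns the row count loaded."""
    sink.extend(records)
    return len(records)

sink: list = []
raw = ['{"id": 1, "amount": 1999}', '{"id": 2, "amount": null}']
loaded = load(transform(extract(raw)), sink)
# loaded == 1; the null-amount row was rejected during transform
```

The same extract/transform/load separation carries over directly to PySpark or Databricks jobs; only the execution engine changes.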


Technical Requirements:

  • Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
  • Languages: Expert proficiency in SQL and Python for complex data processing.
  • Big Data: Hands-on experience with PySpark and large-scale distributed computing.
  • Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
  • Cloud Platforms: Deep working knowledge of AWS and Azure.
  • Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
  • Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
  • AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.


Preferred Qualifications:

  • Exposure to data governance and cataloging tools (e.g., Unity Catalog).
  • Knowledge of performance tuning for massive-scale big data systems.
  • Familiarity with real-time data processing frameworks.
  • Experience in digital transformation and sustainability-focused data projects.
IT Product-based company

Agency job
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
7 - 14 yrs
₹13L - ₹14.4L / yr
Data Engineering
Data Engineer
Snowflake
Snowflake schema
+1 more

Job Title: Data Engineer


Location City: Gurugram


Industry: Research and Advisory Services


Role Overview


We are looking for a Senior Data Engineer (7–10 years) to play a foundational role in building Everest Group’s greenfield, Snowflake-based enterprise data platform.


This role is hands-on and ownership-driven, with a strong focus on:

 Ingesting data from enterprise SaaS platforms
 Building scalable Snowflake ELT pipelines
 Designing analytics-ready data models
 Owning the initial Snowflake platform foundations in collaboration with architecture leadership

The ideal candidate has deep experience integrating CRM and marketing systems via APIs, is comfortable operating production-grade data pipelines, and can make sound decisions around performance, cost, and reliability.

Key Responsibilities

Build robust data ingestion pipelines from enterprise SaaS platforms, including:

 Salesforce (CRM)
 NetSuite (Finance)
 Marketing and RevOps tools such as Marketo, 6sense, Gong
 SharePoint (files, metadata, permissions)

Develop API-based ingestion frameworks handling:

 Authentication and authorization
 Pagination, rate limits, retries, and failures
 Incremental loads, soft deletes, and historical tracking
 Schema evolution and upstream source changes
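The ingestion concerns listed above (pagination, retries with backoff, incremental-load watermarks) can be sketched against a stubbed API. Everything here — the page shape, the `updated_at` field, and the backoff policy — is an assumption for illustration, not any particular SaaS vendor's API:

```python
import time

def fetch_all(fetch_page, since: str, max_retries: int = 3) -> tuple[list, str]:
    """Pull every page changed after `since`, retrying transient failures.
    `fetch_page(cursor, since)` stands in for a real SaaS API call and
    returns (records, next_cursor); next_cursor is None on the last page."""
    records, cursor, watermark = [], None, since
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor, since)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
                time.sleep(2 ** attempt)  # exponential backoff
        records.extend(page)
        for r in page:  # advance the incremental-load watermark
            watermark = max(watermark, r["updated_at"])
        if cursor is None:
            return records, watermark

def _demo_page(cursor, since):
    """Stand-in endpoint returning two pages of fake rows."""
    if cursor is None:
        return [{"id": 1, "updated_at": "2024-01-02"}], "page-2"
    return [{"id": 2, "updated_at": "2024-01-03"}], None

rows, new_watermark = fetch_all(_demo_page, since="2024-01-01")
# rows holds 2 records; new_watermark == "2024-01-03"
```

Persisting `new_watermark` between runs is what turns this into an incremental load: the next run passes it back as `since` and fetches only newer changes.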


Build ELT pipelines within Snowflake:

 Write high-quality, optimized SQL for complex transformations
 Build and manage data layers including raw, staged, and curated datasets
 Optimize Snowflake warehouses, storage, and query performance with a strong focus on cost efficiency

Design analytics-ready data models, including:

 Fact and dimension tables
 Star and snowflake schemas
 Slowly Changing Dimensions (SCD Type 1 and Type 2)

Ensure data models support reporting, dashboards, and research analytics. Partner with analytics and research teams to deliver analytics-ready, well-documented datasets. Own pipeline reliability, including scheduling, monitoring, alerting, and recovery. Implement data quality checks for accuracy, completeness, and freshness.
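An SCD Type 2 dimension keeps full history by closing out the current row and appending a new version. A minimal in-memory sketch — the `valid_from`/`valid_to`/`is_current` column names are a common convention, not something prescribed by this posting:

```python
def scd2_upsert(dim: list[dict], key: str, record: dict, as_of: str) -> None:
    """Apply an SCD Type 2 change: expire the current row for this key
    (if its attributes changed) and append a new current row."""
    current = next((r for r in dim
                    if r[key] == record[key] and r["is_current"]), None)
    attrs = {k: v for k, v in record.items() if k != key}
    if current:
        if all(current.get(k) == v for k, v in attrs.items()):
            return  # no attribute change: nothing to record
        current["valid_to"] = as_of      # close out the old version
        current["is_current"] = False
    dim.append({key: record[key], **attrs,
                "valid_from": as_of, "valid_to": None, "is_current": True})

dim: list[dict] = []
scd2_upsert(dim, "customer_id", {"customer_id": 7, "city": "Delhi"}, "2024-01-01")
scd2_upsert(dim, "customer_id", {"customer_id": 7, "city": "Noida"}, "2024-06-01")
# dim now holds two versions; only the Noida row has is_current == True
```

In Snowflake the same logic is usually expressed as a MERGE over the dimension table, but the versioning rules are identical.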


Support Snowflake platform foundations, including:

 Warehouse and environment strategy (dev/test/prod)
 Role-based access control (RBAC)
 Secure handling of sensitive HR and finance data (PII)

Troubleshoot and resolve data issues across ingestion, transformation, and consumption layers. Collaborate with research, product, and technology stakeholders to translate business needs into data solutions. Contribute to data platform architecture discussions and continuous improvement initiatives.

Maintain clear documentation for pipelines, data models, and data flows.

Follow modern engineering practices including Git-based version control and CI/CD workflows.


Education And Experience


Bachelor’s or master’s degree in Computer Science, Information Technology, Engineering, Mathematics, or a related field.


7–10 years of hands-on experience in data engineering or similar roles


Strong hands-on expertise with Snowflake, including ingestion, transformations, and performance optimization


Proven experience ingesting data from SaaS platforms via APIs (HR, CRM, or Marketing systems)


Advanced SQL skills and strong understanding of relational databases and data modeling


Strong Python experience for API integration, data ingestion, and automation

Experience with cloud platforms (Azure preferred; AWS/GCP acceptable)


Experience with orchestration or transformation tools such as dbt, Azure Data Factory, or similar


Strong problem-solving skills, ownership mindset, and attention to detail



Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹15L - ₹28L / yr
Python
SQL
ETL
SQL server
Data Visualization
+8 more

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Lead and mentor team members on data engineering work.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
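The data-quality responsibility above (validation, cleansing, standardization) can be sketched as a single validate step that splits records into clean and rejected sets. The rules and field names here are invented for illustration:

```python
def validate(records, required, checks):
    """Split records into clean and rejected rows.
    `required`: fields that must be present and non-null.
    `checks`: {field: predicate} rules, e.g. value ranges."""
    clean, rejected = [], []
    for r in records:
        missing = [f for f in required if r.get(f) is None]
        failed = [f for f, ok in checks.items() if f in r and not ok(r[f])]
        (rejected if missing or failed else clean).append(r)
    return clean, rejected

rows = [{"id": 1, "age": 34}, {"id": None, "age": 34}, {"id": 3, "age": -2}]
clean, rejected = validate(rows, required=["id"], checks={"age": lambda a: a >= 0})
# clean keeps only the first row; the other two are rejected
```

Routing rejected rows to a quarantine table, rather than silently dropping them, is what makes checks like these auditable in production pipelines.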

 

 Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Metric Vibes

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Noida
4 - 8 yrs
₹10L - ₹15L / yr
PowerBI
JavaScript
RESTful APIs
Embedded software
SQL
+9 more

Job Title: Tableau BI Developer

Years of Experience: 4–8 years

Engagement: $12 per hour, FTE

Working hours: 8 hours per day


Required Skills & Experience:

✅ 4–8 years of experience in BI development and data engineering

✅ Expertise in BigQuery and/or Snowflake for large-scale data processing

✅ Strong SQL skills with experience writing complex analytical queries

✅ Experience in creating dashboards in tools like Power BI, Looker, or similar

✅ Hands-on experience with ETL/ELT tools and data pipeline orchestration

✅ Familiarity with cloud platforms (GCP, AWS, or Azure)

✅ Strong understanding of data modeling, data warehousing, and analytics best practices

✅ Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders
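"Complex analytical queries" in checklists like the one above usually implies window functions and grouped aggregations. A small runnable sketch against SQLite — the sales table and its columns are invented — ranking products within each region:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, revenue INTEGER);
    INSERT INTO sales VALUES
        ('North', 'A', 100), ('North', 'B', 300),
        ('South', 'A', 250), ('South', 'B', 150);
""")
# Rank products inside each region by revenue (window function)
top = conn.execute("""
    SELECT region, product, revenue,
           RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
best = [(r, p) for r, p, _, rnk in top if rnk == 1]
# best == [('North', 'B'), ('South', 'A')]
```

The same `RANK() OVER (PARTITION BY ...)` pattern carries over unchanged to BigQuery and Snowflake SQL.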

Top IT MNC

Agency job
Chennai, Bengaluru (Bangalore), Kochi (Cochin), Coimbatore, Hyderabad, Pune, Kolkata, Noida, Gurugram, Mumbai
5 - 13 yrs
₹8L - ₹20L / yr
Snowflake schema
Python
Snowflake
Greetings,

We are looking for a Snowflake developer for one of our premium clients, for their PAN India locations.
There is an urgent opening for a Snowflake developer in an MNC company.

Agency job
via Volibits by Ankita Mishra
Pune, Mumbai, Bengaluru (Bangalore), Chennai, Noida, Hyderabad
7 - 12 yrs
₹5L - ₹15L / yr
Snowflake schema
SnowSQL
Snowpipe
There is an urgent requirement for a Snowflake Developer at an MNC company. Candidates should be able to join in April or by the 2nd week of May. Required skills: Snowpipe, SnowSQL, Snowflake schema design, and Snowflake development. The budget for this profile is up to ₹22 LPA.