PySpark Jobs in Jaipur


Apply to 4+ PySpark jobs in Jaipur on CutShort.io. Explore the latest PySpark job opportunities across top companies like Google, Amazon & Adobe.

Remote, Noida, Gurugram, Pune, Nagpur, Jaipur, Gandhinagar
8 - 14 yrs
₹12L - ₹18L / yr
Python
SQL
PySpark
Databricks
Snowflake schema
+6 more

Senior Data Engineer (Databricks, BigQuery, Snowflake)

Experience: 8+ Years in Data Engineering

Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)

Budget: Open / Competitive


Job Summary:

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.

For this role, we are looking for a forward-thinking professional who uses AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.


Key Responsibilities:

  • Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark.
  • ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
  • ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
  • Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
  • Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
  • Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
  • Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
  • Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
  • AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.
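
The pipeline and ETL/ELT work described above follows a common extract-transform-load shape: parse semi-structured raw data into a typed table, then aggregate it into an analytics-ready dataset with SQL. A minimal, self-contained sketch of that pattern, using Python's built-in sqlite3 in place of Databricks/Snowflake (table and column names are hypothetical):

```python
import json
import sqlite3

# Raw events as they might land in a landing zone (hypothetical schema).
raw_events = [
    '{"user_id": 1, "event": "click", "ts": "2024-01-01T10:00:00"}',
    '{"user_id": 1, "event": "purchase", "ts": "2024-01-01T10:05:00"}',
    '{"user_id": 2, "event": "click", "ts": "2024-01-01T11:00:00"}',
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, ts TEXT)")

# Extract + load: parse semi-structured JSON into a typed, queryable table.
rows = [(e["user_id"], e["event"], e["ts"])
        for e in (json.loads(line) for line in raw_events)]
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Transform: produce an analytics-ready aggregate with SQL.
events_per_user = dict(conn.execute(
    "SELECT user_id, COUNT(*) FROM events GROUP BY user_id"
).fetchall())
print(events_per_user)  # {1: 2, 2: 1}
```

In production the same shape would be a PySpark job reading from cloud storage and writing to a warehouse table; only the engine changes, not the structure.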


Technical Requirements:

  • Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
  • Languages: Expert proficiency in SQL and Python for complex data processing.
  • Big Data: Hands-on experience with PySpark and large-scale distributed computing.
  • Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
  • Cloud Platforms: Deep working knowledge of AWS and Azure.
  • Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
  • Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
  • AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.


Preferred Qualifications:

  • Exposure to data governance and cataloging tools (e.g., Unity Catalog).
  • Knowledge of performance tuning for massive-scale big data systems.
  • Familiarity with real-time data processing frameworks.
  • Experience in digital transformation and sustainability-focused data projects.
Deqode

Posted by Shraddha Katare
Pune, Gurugram, Jaipur, Bhopal
5 - 8 yrs
₹10L - ₹18L / yr
Data engineering
Databricks
Data Structures
Python
PySpark

Job Description

Position: Senior Data Engineer (Azure)

Experience: 6+ Years

Mode: Hybrid

Location: Gurgaon, Pune, Jaipur, Bangalore, Bhopal


Key Responsibilities:

  • Data Processing on Azure: Azure Data Factory, Streaming Analytics, Event Hubs, Azure Databricks, Data Migration Service, Data Pipeline.
  • Provisioning, configuring, and developing Azure solutions (ADB, ADF, ADW, etc.).
  • Design and implement scalable data models and migration strategies.
  • Work on distributed big data batch or streaming pipelines (Kafka or similar).
  • Develop data integration and transformation solutions for structured and unstructured data.
  • Collaborate with cross-functional teams for performance tuning and optimization.
  • Monitor data workflows and ensure compliance with data governance and quality standards.
  • Contribute to continuous improvement through automation and DevOps practices.
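
The migration and batch-pipeline responsibilities above usually come down to incremental (high-watermark) loads: each run copies only rows newer than the last loaded key. A sketch under stated assumptions — sqlite3 stands in for the source RDBMS and the Azure warehouse, and table/column names are hypothetical:

```python
import sqlite3

# Source and target stand in for, e.g., an on-prem RDBMS and an Azure
# warehouse table; names are hypothetical.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
tgt.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10.0), (2, 25.5), (3, 7.25)])

def incremental_load(watermark: int) -> int:
    """Copy only rows newer than the last loaded id; return the new watermark."""
    rows = src.execute(
        "SELECT id, amount FROM orders WHERE id > ? ORDER BY id",
        (watermark,)).fetchall()
    tgt.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return rows[-1][0] if rows else watermark

wm = incremental_load(0)      # first run loads everything
src.execute("INSERT INTO orders VALUES (4, 99.0)")
wm = incremental_load(wm)     # second run picks up only the new row
count = tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(wm, count)  # 4 4
```

In Azure Data Factory or Databricks the watermark would typically be persisted (e.g., in a control table) so reruns stay idempotent.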

Required Skills & Experience:

  • 6–10 years of experience as a Data Engineer.
  • Strong proficiency in Azure Databricks, PySpark, Python, SQL, and Azure Data Factory.
  • Experience in Data Modelling, Data Migration, and Data Warehousing.
  • Good understanding of database structure principles and schema design.
  • Hands-on experience using MS SQL Server, Oracle, or similar RDBMS platforms.
  • Experience in DevOps tools (Azure DevOps, Jenkins, Airflow, Azure Monitor) – good to have.
  • Knowledge of distributed data processing and real-time streaming (Kafka/Event Hub).
  • Familiarity with visualization tools like Power BI or Tableau.
  • Strong analytical, problem-solving, and debugging skills.
  • Self-motivated, detail-oriented, and capable of managing priorities effectively.


Deqode

Posted by Alisha Das
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Hyderabad, Indore, Jaipur, Kolkata
4 - 5 yrs
₹2L - ₹18L / yr
Python
PySpark

We are looking for skilled and passionate Data Engineers with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud services, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.

Key Responsibilities:

  • Write clean, scalable, and efficient Python code.
  • Work with Python frameworks such as PySpark for data processing.
  • Design, develop, update, and maintain APIs (RESTful).
  • Deploy and manage code using GitHub CI/CD pipelines.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Work on AWS cloud services for application deployment and infrastructure.
  • Design basic database schemas and interact with MySQL or DynamoDB.
  • Debug and troubleshoot application issues and performance bottlenecks.
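
The RESTful API work above can be sketched with nothing but the standard library — a minimal GET endpoint serving JSON. In practice this would be a framework route (e.g., Flask or FastAPI) behind AWS Lambda or EC2; the resource name and data here are hypothetical:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backing data; a real service would query MySQL/DynamoDB.
USERS = {"1": {"id": "1", "name": "Asha"}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route GET /users/<id> to a JSON response, 404 otherwise.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1] in USERS:
            body = json.dumps(USERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/users/1"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
print(payload["name"])  # Asha
server.shutdown()
```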

Required Skills & Qualifications:

  • 4+ years of hands-on experience with Python development.
  • Proficient in Python basics with a strong problem-solving approach.
  • Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
  • Good understanding of API development and integration.
  • Knowledge of GitHub and CI/CD workflows.
  • Experience in working with PySpark or similar big data frameworks.
  • Basic knowledge of MySQL or DynamoDB.
  • Excellent communication skills and a team-oriented mindset.

Nice to Have:

  • Experience in containerization (Docker/Kubernetes).
  • Familiarity with Agile/Scrum methodologies.


Xebia IT Architects

Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM to 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
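
The Raw → Silver → Gold progression above is the medallion pattern: clean and deduplicate raw records into a Silver layer, then aggregate Silver into business-ready Gold metrics. In the actual framework each stage would read and write Delta tables via Spark on Databricks/GCP; the shape can be sketched with plain Python (field names hypothetical):

```python
# Medallion-style layering sketched with plain dicts; in the real pipeline
# each stage would be a Spark job over Delta/Iceberg tables.
raw = [
    {"id": "1", "temp_c": "21.5", "city": " pune "},
    {"id": "2", "temp_c": "bad",  "city": "Jaipur"},   # unparseable value
    {"id": "1", "temp_c": "21.5", "city": " pune "},   # duplicate record
]

def to_silver(records):
    """Clean and deduplicate: typed fields, normalized text, valid rows only."""
    seen, out = set(), []
    for r in records:
        try:
            row = {"id": int(r["id"]),
                   "temp_c": float(r["temp_c"]),
                   "city": r["city"].strip().title()}
        except ValueError:
            continue  # a real pipeline would quarantine bad rows for review
        if row["id"] not in seen:
            seen.add(row["id"])
            out.append(row)
    return out

def to_gold(silver):
    """Aggregate to a business-ready metric: average temperature per city."""
    totals = {}
    for r in silver:
        s, n = totals.get(r["city"], (0.0, 0))
        totals[r["city"]] = (s + r["temp_c"], n + 1)
    return {city: s / n for city, (s, n) in totals.items()}

silver = to_silver(raw)
gold = to_gold(silver)
print(gold)  # {'Pune': 21.5}
```

Keeping each layer a pure function of the previous one is what makes the pipeline easy to backfill and test, which is the point of the layered design.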

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Read more
Get to hear about interesting companies hiring right now

Follow Cutshort on LinkedIn

Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs