PySpark Jobs in Mumbai


Apply to 9+ PySpark Jobs in Mumbai on CutShort.io. Explore the latest PySpark Job opportunities across top companies like Google, Amazon & Adobe.

Fractal Analytics
Posted by Eman Khan
Bengaluru (Bangalore), Hyderabad, Gurugram, Noida, Mumbai, Pune, Coimbatore, Chennai
3 - 5 yrs
₹18L - ₹26L / yr
MLOps
MLFlow
kubeflow
Machine Learning (ML)
Python
+6 more

Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.


Responsibilities

As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline model development and model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.

  • Enable model tracking, model experimentation, and model automation
  • Develop ML pipelines to support model deployment and operations
  • Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry (see the sketch after this list)
  • Develop MLOps components in the machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
  • Work across all phases of the model development life cycle to build MLOps components
  • Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
  • Be an integral part of client business development and delivery engagements across multiple domains
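
A minimal sketch of what model tracking and registration could look like with MLflow, one of the model repositories named above. The experiment name, model, metric, and registered-model name are illustrative assumptions, not part of the role description.

    # MLflow tracking/registry sketch (experiment, model, and names are assumptions)
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("churn-model")  # hypothetical experiment name

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=100, max_depth=5).fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))

        mlflow.log_param("n_estimators", 100)   # track hyperparameters
        mlflow.log_metric("accuracy", acc)      # track evaluation metrics
        # Log the fitted model and register it in the MLflow Model Registry
        mlflow.sklearn.log_model(model, artifact_path="model",
                                 registered_model_name="churn-classifier")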


Required Qualifications

  • 3-5 years’ experience building production-quality software
  • B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
  • Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
  • Knowledge of MLOps, machine learning, and Docker
  • Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
  • CI/CD experience (e.g. Jenkins, GitHub Actions)
  • Database programming using any flavor of SQL
  • Knowledge of Git for source code management
  • Ability to collaborate effectively with highly technical resources in a fast-paced environment
  • Ability to solve complex challenges/problems and rapidly deliver innovative solutions
  • Foundational knowledge of cloud computing on Azure
  • Hunger and passion for learning new skills
Fractal Analytics
Posted by Eman Khan
Bengaluru (Bangalore), Gurugram, Mumbai, Hyderabad, Pune, Noida, Coimbatore, Chennai
5.5 - 9 yrs
₹25L - ₹38L / yr
MLOps
MLFlow
kubeflow
Machine Learning (ML)
Python
+6 more

Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.


Responsibilities

As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline model development and model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.

  • Enable model tracking, model experimentation, and model automation
  • Develop scalable ML pipelines (see the sketch after this list)
  • Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry
  • Develop MLOps components using Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
  • Work across all phases of the model development life cycle to build MLOps components
  • Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
  • Be an integral part of client business development and delivery engagements across multiple domains
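
A brief, hedged illustration of the kind of scalable ML pipeline the second bullet refers to, here using Spark ML's Pipeline API; the input path, column names, and model location are placeholder assumptions.

    # Spark ML pipeline sketch (paths and column names are placeholders)
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import StringIndexer, VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("training-pipeline").getOrCreate()

    df = spark.read.parquet("/data/features/")  # placeholder input path

    # Chain feature engineering and the estimator into one reusable pipeline
    indexer = StringIndexer(inputCol="segment", outputCol="segment_idx")
    assembler = VectorAssembler(inputCols=["segment_idx", "spend", "tenure"],
                                outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    pipeline = Pipeline(stages=[indexer, assembler, lr])
    model = pipeline.fit(df)                              # distributed training
    model.write().overwrite().save("/models/churn_lr")    # persist for serving/registration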


Required Qualifications

  • 5.5-9 years’ experience building production-quality software
  • B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
  • Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
  • Expertise in MLOps, machine learning, and Docker
  • Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
  • Experience developing CI/CD components for production-ready ML pipelines
  • Database programming using any flavor of SQL
  • Knowledge of Git for source code management
  • Ability to collaborate effectively with highly technical resources in a fast-paced environment
  • Ability to solve complex challenges/problems and rapidly deliver innovative solutions
  • Team handling, problem solving, project management, communication, and creative-thinking skills
  • Foundational knowledge of cloud computing on Azure
  • Hunger and passion for learning new skills
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Pune, Hyderabad, Ahmedabad, Chennai
3 - 7 yrs
₹8L - ₹15L / yr
AWS Lambda
Amazon S3
Amazon VPC
Amazon EC2
Amazon Redshift
+3 more

Technical Skills:


  • Ability to understand and translate business requirements into design.
  • Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
  • Experience in creating ETL jobs using Python/PySpark (a sketch follows this list).
  • Proficiency in creating AWS Lambda functions for event-based jobs.
  • Knowledge of automating ETL processes using AWS Step Functions.
  • Competence in building data warehouses and loading data into them.
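
A minimal, hedged sketch of an ETL job of the kind described above, written in PySpark. The bucket names, paths, and column names are placeholders, and S3 credentials are assumed to be provided by the execution environment (e.g. an instance role).

    # PySpark ETL sketch: read raw data from S3, clean it, write curated output back to S3
    # (bucket names, paths, and columns are placeholder assumptions)
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

    curated = (
        raw.dropDuplicates(["order_id"])                        # basic data quality step
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)
    )

    (curated.write
            .mode("overwrite")
            .partitionBy("order_date")
            .parquet("s3://example-curated-bucket/orders/"))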


Responsibilities:


  • Understand business requirements and translate them into design.
  • Assess AWS infrastructure needs for development work.
  • Develop ETL jobs using Python/PySpark to meet requirements.
  • Implement AWS Lambda for event-based tasks (see the sketch after this list).
  • Automate ETL processes using AWS Step Functions.
  • Build data warehouses and manage data loading.
  • Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
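
As a rough illustration of the Lambda and Step Functions bullets above, the handler below reacts to an S3 object-created event and starts a hypothetical Step Functions state machine that runs the ETL; the state machine ARN is a placeholder.

    # Lambda handler sketch: triggered by an S3 "ObjectCreated" event,
    # it starts a Step Functions execution for the ETL workflow.
    # The state machine ARN is a placeholder assumption.
    import json
    import boto3

    sfn = boto3.client("stepfunctions")
    STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:etl-workflow"

    def lambda_handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Pass the new object's location to the ETL state machine
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps({"bucket": bucket, "key": key}),
            )
        return {"statusCode": 200}
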
mazosol
Posted by Kirthick Murali
Mumbai
10 - 20 yrs
₹30L - ₹58L / yr
Python
R Programming
PySpark
Google Cloud Platform (GCP)
SQL Azure

Data Scientist – Program Embedded 

Job Description:   

We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role in supporting multiple projects in this program, covering traditional tasks from revenue management, demand forecasting, and improving customer experience to testing and using new tools/platforms such as Copilot and Fabric for different purposes. The expected candidate will have deep expertise in machine learning methodology and applications, and should have completed multiple large-scale data science projects (full cycle from ideation to BAU). Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. This is a data science role embedded directly into the program/projects; stakeholder management and collaboration with partners are crucial to success in this role (on top of the deep expertise). 

What we are looking for: 

  1. Highly efficient in Python/PySpark/R. 
  2. Understanding of MLOps concepts, with working experience in product industrialization (from a data science point of view). Experience in building products for live deployment, and in continuous integration and continuous development. 
  3. Familiarity with cloud platforms such as Azure and GCP, and with the data management systems on such platforms. Familiarity with Databricks and product deployment on Databricks. 
  4. Experience in ML projects involving techniques such as regression, time series, clustering, classification, dimension reduction, and anomaly detection, using both traditional ML and DL approaches. 
  5. Solid background in statistics: probability distributions, A/B testing validation, univariate/multivariate analysis, hypothesis tests for different purposes, data augmentation, etc. (a small example follows this list). 
  6. Familiarity with designing testing frameworks for different modelling practices/projects based on business needs. 
  7. Exposure to Gen AI tools; enthusiastic about experimenting and bringing new ideas on what can be done. 
  8. Experience improving an internal company process using an AI tool would be great (e.g. process simplification, manual task automation, auto emails). 
  9. Ideally 10+ years of experience, including independent business-facing roles. 
  10. Experience in CPG or retail as a data scientist would be nice, but is not the top priority, especially for those who have navigated multiple industries. 
  11. Being proactive and collaborative is essential. 
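
As a small illustration of the statistics and A/B-testing point above, a hedged example of a two-proportion z-test on made-up conversion counts:

    # Two-sample proportions z-test for an A/B test (counts are invented for illustration)
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [430, 475]        # converted users in control / variant
    exposures = [10_000, 10_000]    # users exposed to each arm

    stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
    print(f"z = {stat:.2f}, p-value = {p_value:.4f}")
    # Reject the null of equal conversion rates only if p_value is below the chosen alpha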

 

Some project examples within the program: 

  1. Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation, and building trust. 
  2. Building algorithms for predicting trends in categories and consumption to support dashboards. 
  3. Revenue Growth Management: create/understand the algorithms behind the tools we need to maintain or choose to improve (these can be built by third parties). Able to prioritize and build a product roadmap. Able to design new solutions and articulate/quantify the limitations of the solutions. 
  4. Demand forecasting: create localized forecasts to improve in-store availability, with proper model monitoring for early detection of potential issues in the forecast, focusing particularly on improving the end-user experience (see the sketch after this list). 
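
A hedged sketch of the forecast-monitoring idea in the last example: a seasonal-naive baseline and a rolling error check on made-up daily demand data. The data, baseline, and alert threshold are illustrative assumptions only.

    # Localized demand-forecast monitoring sketch (data and threshold are made up)
    import numpy as np
    import pandas as pd

    # Hypothetical daily sales history for one store/SKU combination
    history = pd.Series(
        np.random.poisson(lam=20, size=120),
        index=pd.date_range("2024-01-01", periods=120, freq="D"),
    )

    # Simple seasonal-naive forecast: repeat the value from the same weekday last week
    forecast = history.shift(7).dropna()
    actuals = history.loc[forecast.index]

    # Monitoring metric: rolling weekly MAPE; alert when accuracy degrades
    mape = ((actuals - forecast).abs() / actuals.clip(lower=1)).rolling("7D").mean()
    if mape.iloc[-1] > 0.30:  # illustrative threshold
        print("Forecast degradation detected - investigate before it hits store availability")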


Numantra Technologies
Posted by Vandana Saxena
Mumbai, Navi Mumbai
2 - 8 yrs
₹5L - ₹12L / yr
Microsoft Windows Azure
ADF
NumPy
PySpark
Databricks
+1 more
- Experience and expertise in using Azure cloud services. Azure certification will be a plus.

- Experience and expertise in Python development and its different libraries such as PySpark, pandas, and NumPy

- Expertise in ADF and Databricks.

- Creating and maintaining data interfaces across a number of different protocols (file, API, etc.).

- Creating and maintaining internal business process solutions to keep our corporate system data in sync and reduce manual processes where appropriate.

- Creating and maintaining monitoring and alerting workflows to improve system transparency.

- Facilitating the development of our Azure cloud infrastructure relative to data and application systems.

- Designing and leading development of our data infrastructure, including data warehouses, data marts, and operational data stores.

- Experience in using Azure services such as ADLS Gen 2, Azure Functions, Azure messaging services, Azure SQL Server, Azure Key Vault, Azure Cognitive Services, etc. (see the sketch below)
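
A hedged sketch of the ADLS/Databricks combination mentioned above: a small PySpark cell that reads from ADLS Gen 2 and writes a Delta table. It assumes a Databricks notebook context (where spark and dbutils already exist); the storage account, container, secret scope, and target schema are placeholders.

    # Databricks notebook sketch: read from ADLS Gen 2, write a Delta table
    # (storage account, container, secret scope, and schema names are placeholder assumptions)
    storage_account = "examplestorageacct"
    container = "raw"

    # Authenticate with an account key pulled from a Key Vault-backed secret scope
    spark.conf.set(
        f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
        dbutils.secrets.get(scope="example-kv-scope", key="adls-account-key"),
    )

    path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/sales/2024/"
    df = spark.read.format("parquet").load(path)

    # Light transformation, then persist as a Delta table for downstream consumers
    # (assumes the "curated" schema already exists)
    (df.dropDuplicates()
       .write.format("delta")
       .mode("overwrite")
       .saveAsTable("curated.sales_2024"))
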
Navi Mumbai
3 - 5 yrs
₹7L - ₹18L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more
  • Proficiency in shell scripting
  • Proficiency in automation of tasks
  • Proficiency in PySpark/Python
  • Proficiency in writing and understanding Sqoop jobs (see the sketch after this list)
  • Understanding of Cloudera Manager
  • Good understanding of RDBMS
  • Good understanding of Excel
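
A hedged sketch of the kind of task automation the list above points to: a small Python wrapper that shells out to a Sqoop import. The JDBC URL, credentials file, table, and target directory are placeholder assumptions.

    # Automation sketch: run a Sqoop import from Python (connection details are placeholders)
    import subprocess

    def sqoop_import(table: str, target_dir: str) -> None:
        cmd = [
            "sqoop", "import",
            "--connect", "jdbc:mysql://db-host:3306/sales",   # placeholder JDBC URL
            "--username", "etl_user",
            "--password-file", "/user/etl/.db_password",      # avoid passwords on the CLI
            "--table", table,
            "--target-dir", target_dir,
            "--num-mappers", "4",
        ]
        subprocess.run(cmd, check=True)  # raise if the import fails, so schedulers can alert

    if __name__ == "__main__":
        sqoop_import("orders", "/data/raw/orders")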

 

Virtusa
Agency job via Response Informatics, posted by Anupama Lavanya Uppala
Chennai, Bengaluru (Bangalore), Mumbai, Hyderabad, Pune
3 - 10 yrs
₹10L - ₹25L / yr
PySpark
Python
  • Minimum 1 year of relevant experience in PySpark (mandatory)
  • Hands-on experience in developing, testing, deploying, maintaining, and improving data integration pipelines in an AWS cloud environment is an added plus 
  • Ability to play a lead role and independently manage a 3-5 member PySpark development team 
  • EMR, Python, and PySpark are mandatory.
  • Knowledge of and awareness in working with AWS Cloud technologies such as Apache Spark, Glue, Kafka, Kinesis, and Lambda alongside S3, Redshift, and RDS (a Glue job sketch follows this list)
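
A hedged skeleton of a Glue PySpark job of the sort listed above; the Data Catalog database, table, and output path are placeholder assumptions.

    # AWS Glue PySpark job skeleton (catalog database/table and output path are placeholders)
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog, transform with plain Spark, write back to S3
    dyf = glue_context.create_dynamic_frame.from_catalog(database="sales_db", table_name="orders")
    df = dyf.toDF().filter("amount > 0")

    df.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
    job.commit()
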
Numantra Technologies
Posted by Nisha Mattas
Remote, Mumbai, Powai
2 - 12 yrs
₹8L - ₹18L / yr
ADF
PySpark
Jupyter Notebook
Big Data
Windows Azure
+3 more
  • Data pre-processing, data transformation, data analysis, and feature engineering
  • Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.)

Required skills:
  • Bachelor's degree in Computer Science, Data Science, Computer Engineering, IT, or equivalent
  • Fluency in Python (Pandas), PySpark, SQL, or similar
  • Azure Data Factory experience (min. 12 months)
  • Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
  • Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
  • Ability to work independently, with demonstrated experience in project or program management
  • Azure experience, with the ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment (see the sketch below)
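
A hedged example of the "translate data scientists' Python into efficient production code" point: the same aggregation written first as a pandas prototype and then in PySpark for cloud deployment. The file paths, storage account, and column names are placeholder assumptions.

    # Prototype-to-production sketch (paths and column names are placeholders)
    import pandas as pd
    from pyspark.sql import SparkSession, functions as F

    # 1) Data scientist's pandas prototype: average basket value per customer
    pdf = pd.read_csv("orders_sample.csv")
    prototype = pdf.groupby("customer_id")["amount"].mean().reset_index()

    # 2) Production translation: the same logic in PySpark, scalable on a cluster
    spark = SparkSession.builder.appName("basket-value").getOrCreate()
    sdf = spark.read.option("header", True).csv(
        "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
    )

    production = (
        sdf.withColumn("amount", F.col("amount").cast("double"))
           .groupBy("customer_id")
           .agg(F.avg("amount").alias("avg_basket_value"))
    )
    production.write.mode("overwrite").parquet(
        "abfss://curated@examplestorage.dfs.core.windows.net/basket_value/"
    )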
 
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad, Chennai, Mumbai, Pune
8 - 15 yrs
₹16L - ₹28L / yr
PySpark
SQL Azure
Azure Synapse
Windows Azure
Azure Data Engineer
+3 more
Technology Skills:
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub (a loading sketch follows this list).
  • Experience in migrating on-premises data warehouses to data platforms on the Azure cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
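
A hedged sketch of loading a curated Spark DataFrame into Azure Synapse (SQL DWH) from Databricks using the Databricks Synapse connector. It assumes a Databricks context; the JDBC URL, staging path, source table, and target table are placeholders.

    # Databricks -> Azure Synapse load sketch (URL, staging path, and tables are placeholders)
    df = spark.table("curated.sales_2024")

    (df.write
       .format("com.databricks.spark.sqldw")  # Azure Synapse connector
       .option("url", "jdbc:sqlserver://example-ws.sql.azuresynapse.net:1433;database=dw")
       .option("forwardSparkAzureStorageCredentials", "true")
       .option("dbTable", "dbo.sales_2024")
       .option("tempDir", "abfss://staging@examplestorage.dfs.core.windows.net/synapse-tmp/")
       .mode("overwrite")
       .save())
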
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.