
9+ Jupyter Notebook Jobs in India

Apply to 9+ Jupyter Notebook Jobs on CutShort.io. Find your next job, effortlessly. Browse Jupyter Notebook Jobs and apply today!

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
MLOps
AWS CloudFormation
DevOps
+19 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including in recent roles
  • Company background: product companies preferred; exceptions made for service-company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV attachment is mandatory
  • Please provide the CTC breakup (fixed + variable).
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.

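The ML-observability responsibility above (model drift, data drift, alerting) can be made concrete with a minimal sketch. The Population Stability Index (PSI) below is one common drift metric; the bucket edges and sample scores are hypothetical, and a production setup would typically publish a metric like this to CloudWatch or Prometheus rather than print it:

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between a baseline sample and a live sample.

    PSI = sum over buckets of (p_actual - p_expected) * ln(p_actual / p_expected).
    Rule of thumb: PSI < 0.1 is stable; PSI > 0.25 usually warrants an alert.
    """
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            i = sum(1 for e in edges if v > e)  # index of the bucket v falls into
            counts[i] += 1
        total = len(values)
        # Smooth empty buckets so the log term stays finite.
        return [max(c / total, 1e-6) for c in counts]

    p_exp = proportions(expected)
    p_act = proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p_exp, p_act))

# Hypothetical baseline (training-time) scores vs. live inference scores.
baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.6, 0.7]
live_ok = [0.12, 0.22, 0.28, 0.31, 0.33, 0.41, 0.44, 0.52, 0.58, 0.69]
live_shifted = [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.92, 0.95, 0.99]

edges = [0.25, 0.5, 0.75]  # three cut points -> four buckets
print(psi(baseline, live_ok, edges))       # small: distributions agree
print(psi(baseline, live_shifted, edges))  # large: drift alert
```

An Airflow task could compute this daily over fresh inference logs and page the team when the threshold is crossed.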
 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.


Leading digital testing boutique firm


Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Software Testing (QA)
Natural Language Processing (NLP)
Analytics
+11 more

Review Criteria

  • Strong AI/ML Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
  • Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
  • Must have strong experience to design AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
  • Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
  • Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
  • Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.


Job Specific Criteria

  • CV Attachment is mandatory
  • Have you worked with large datasets for AI/ML testing?
  • Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
  • Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5+ years of experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models
  • Proven expertise in validating AI models for accuracy, bias, explainability, and performance, to ensure decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent
  • Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules
  • Strong data science skills, equipped to design test strategies for AI systems including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficient in defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter), with the ability to integrate them into CI/CD pipelines
  • Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Education: Bachelor's/Master's in Engineering, CS/IT, Data Science, or equivalent
  • Preferred certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials

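The boundary-testing and adversarial-simulation requirements above can be illustrated with a small, self-contained example. The fraud scorer below is a hypothetical stand-in for a real trained model; the checks mirror what a PyTest suite wired into CI/CD would assert, written here with plain asserts so the sketch runs anywhere:

```python
def fraud_score(amount, seller_rating, account_age_days):
    """Hypothetical stand-in for a trained fraud model: returns a risk in [0, 1]."""
    if amount < 0 or not (0.0 <= seller_rating <= 5.0) or account_age_days < 0:
        raise ValueError("input out of domain")
    risk = 0.5 * min(amount / 10_000, 1.0)                 # large orders are riskier
    risk += 0.3 * (1.0 - seller_rating / 5.0)              # poorly rated sellers are riskier
    risk += 0.2 * (1.0 if account_age_days < 30 else 0.0)  # new accounts are riskier
    return min(risk, 1.0)

def test_boundaries():
    # Boundary testing: extreme but legal inputs must stay in [0, 1].
    for amount in (0, 10_000, 10**9):
        for rating in (0.0, 5.0):
            for age in (0, 29, 30, 10_000):
                assert 0.0 <= fraud_score(amount, rating, age) <= 1.0

def test_rejects_adversarial_inputs():
    # Adversarial input simulation: out-of-domain values a manipulative
    # seller might submit must be rejected, not silently scored.
    for bad in ((-1, 4.0, 100), (100, 9.9, 100), (100, 4.0, -5)):
        try:
            fraud_score(*bad)
            assert False, f"accepted out-of-domain input {bad}"
        except ValueError:
            pass

def test_monotonic_in_amount():
    # A seller splitting one large order into smaller ones should never
    # look *safer* per order: the score must be non-decreasing in amount.
    scores = [fraud_score(a, 3.0, 100) for a in (100, 1_000, 5_000, 20_000)]
    assert scores == sorted(scores)

test_boundaries()
test_rejects_adversarial_inputs()
test_monotonic_in_amount()
print("all checks passed")
```

The monotonicity check is an example of a metamorphic test: it validates a property of the model's behavior without needing labeled ground truth.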


VoltusWave Technologies India Private Limited
Hyderabad
1 - 4 yrs
₹2L - ₹5L / yr
Python
Scikit-Learn
TensorFlow
PyTorch
Keras
+7 more

Job Title: AI & ML Developer

Experience: 1+ Years

Location: Hyderabad

Company: VoltusWave Technologies India Private Limited


Job Summary:

We are looking for a passionate and skilled AI & Machine Learning Developer with over 1 year of experience to join our growing team. You will be responsible for developing, implementing, and maintaining ML models and AI-driven applications that solve real-world business problems.


Key Responsibilities:

  • Design, build, and deploy machine learning models and AI solutions.
  • Work with large datasets to extract meaningful insights and develop algorithms.
  • Preprocess, clean, and transform raw data for training and evaluation.
  • Collaborate with data scientists, software developers, and product teams to integrate models into applications.
  • Monitor and maintain the performance of deployed models.
  • Stay updated with the latest developments in AI, ML, and data science.

Required Skills:

  • Strong understanding of machine learning algorithms and principles.
  • Experience with Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
  • Familiarity with data processing tools like Pandas, NumPy, etc.
  • Basic knowledge of deep learning and neural networks.
  • Experience with data visualization tools (e.g., Matplotlib, Seaborn, Plotly).
  • Knowledge of model evaluation and optimization techniques.
  • Familiarity with version control (Git), Jupyter Notebooks, and cloud environments (AWS, GCP, or Azure) is a plus.

Educational Qualification:

  • Bachelor's or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.

Nice to Have:

  • Exposure to NLP, Computer Vision, or Time Series Analysis.
  • Experience with ML Ops or deployment pipelines.
  • Understanding of REST APIs and integration of ML models with web apps.

Why Join Us:

  • Work on real-time AI & ML projects.
  • Opportunity to learn and grow in a fast-paced, innovative environment.
  • Friendly and collaborative team culture.
  • Career development support and training.


Solar Secure
Posted by Saurabh Singh
Remote only
0 - 1 yrs
₹8000 - ₹10000 / mo
Data Science
Python
Jupyter Notebook

About the Company:

Nextgen Ai Technologies is at the forefront of innovation in artificial intelligence, specializing in developing cutting-edge AI solutions that transform industries. We are committed to pushing the boundaries of AI technology to solve complex challenges and drive business success.


Currently offering a "Data Science Internship" for 2 months.


Details of the data science projects the interns will work on:

Project 01 : Image Caption Generator Project in Python

Project 02 : Credit Card Fraud Detection Project

Project 03 : Movie Recommendation System

Project 04 : Customer Segmentation

Project 05 : Brain Tumor Detection with Data Science


Eligibility


A PC or Laptop with decent internet speed.

Good understanding of the English language.

Any graduate with a desire to become a web developer. Freshers are welcome.

Knowledge of HTML, CSS and JavaScript is a plus but NOT mandatory.

You will receive proper training, so don't hesitate to apply even if you don't have a coding background.


Please note that THIS IS AN INTERNSHIP, NOT A JOB.


We recruit permanent employees only from among our interns (if needed).


Duration : 02 Months 

MODE: Work From Home (Online)


Responsibilities


Manage reports and sales leads in Salesforce.com CRM.

Develop content, manage design, and user access to SharePoint sites for customers and employees.

Build data-driven reports, stored procedures, and query optimizations using SQL and PL/SQL knowledge.

Learn the essentials of C++ and Java to refine code and build the exterior layer of web pages.

Configure and load XML data for the BVT tests.

Set up a GitHub page.

Develop Spark scripts using the Scala shell as per requirements.

Develop and A/B test improvements to business survey questions on iOS.

Deploy statistical models to various company data streams using Linux shells.

Create monthly performance-based client billing reports using MySQL and NoSQL databases.

Utilize Hadoop and MapReduce to generate dynamic queries and extract data from HDFS.

Create source code utilizing JavaScript and PHP language to make web pages functional.

Excellent problem-solving skills and the ability to work independently or as part of a team.

Effective communication skills to convey complex technical concepts.


Benefits


Internship Certificate

Letter of recommendation

Performance-based stipend

Part-time work from home (2-3 hrs per day)

5 days a week, Fully Flexible Shift


[x]cube LABS

Posted by Krishna kandregula
Hyderabad
2 - 6 yrs
₹8L - ₹20L / yr
ETL
Informatica
Data Warehouse (DWH)
PowerBI
DAX
+12 more
  • Create and manage ETL/ELT pipelines based on requirements
  • Build PowerBI dashboards and manage the datasets they need
  • Work with stakeholders to identify data structures needed in the future and perform any transformations, including aggregations
  • Build data cubes for real-time visualisation needs and CXO dashboards


Required Tech Skills


  • Microsoft PowerBI & DAX
  • Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
  • Azure Synapse, Azure DataBricks, Azure HDInsight, Azure Data Factory

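As a small illustration of the first responsibility above (transformations including aggregations), here is a pure-Python sketch of the group-and-sum at the heart of a dashboard measure. The rows and column names are hypothetical; in the stack listed above this would typically be a Spark or Synapse job feeding a PowerBI dataset:

```python
from collections import defaultdict

# Hypothetical raw rows as they might land from an extract step.
rows = [
    {"region": "North", "product": "A", "revenue": 1200.0},
    {"region": "North", "product": "B", "revenue": 800.0},
    {"region": "South", "product": "A", "revenue": 950.0},
    {"region": "South", "product": "A", "revenue": 450.0},
]

def aggregate(rows, key):
    """Group rows by the given key and sum revenue - the core of a measure."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row["revenue"]
    return dict(totals)

by_region = aggregate(rows, "region")
print(by_region)  # {'North': 2000.0, 'South': 1400.0}
```

The same shape (group key, aggregation function) maps directly onto a Spark `groupBy().sum()` or a DAX `SUMMARIZE`.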


Vmultiply solutions

Agency job
via Vmultiply solutions by Maimuna fatima
Remote only
5 - 10 yrs
₹8L - ₹10L / yr
Elasticsearch
Apache Kafka
MongoDB
Jupyter Notebook
Databricks
+2 more

1. Need to have an understanding of Elasticsearch, Kafka, MongoDB, etc.

2. Should have experience with Jupyter Notebooks and Databricks

3. Java, Python

4. Senior level, 5-10 years of experience

5. It is important that they have those skills so that they can take over current work. There is code written in both Java and Python (Java is legacy, but that is the main search engine code), so it will be counter-productive if the engineers hired do not have experience in both.

6. Excellent communication, analytical, research, and grasping skills

Institutional-grade tools to understand digital assets

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore), Coimbatore
3 - 7 yrs
₹20L - ₹30L / yr
Business Analysis
Python
SQL
Web3js
Tableau
+1 more

Qualifications

● Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent practical experience; or a degree in an analytical field (e.g., Computer Science, Engineering, Mathematics, Statistics, Operations Research, Management Science)

● 3+ years of experience with data analysis and metrics development

● 3+ years of experience analyzing and interpreting data, drawing conclusions, defining recommended actions, and reporting results across stakeholders

● 2+ years of experience writing SQL queries

● 2+ years of experience scripting in Python

● Demonstrated curiosity in and excitement for Web3/blockchain technologies

● Interest in learning new technologies to solve customer needs, with lots of creative freedom

● Strong communication skills and business acumen

● Self-starter, motivated by an interest in developing the best possible solutions to problems

● Experience with Google Cloud (BigQuery), the Databricks stack, dbt, Tableau, and Jupyter is a plus

Angel One

Posted by Andleeb Mujeeb
Remote only
1 - 5 yrs
₹6L - ₹9L / yr
Python
pandas
NumPy
Jupyter Notebook
PyCharm

Position description:

  • Architect and design systems for predictive analysis, and write algorithms to deal with financial data
  • Must have experience with web services and APIs (REST, JSON, and similar), including the creation and consumption of RESTful APIs
  • Proficiency in writing algorithms with Python/Pandas/NumPy; Jupyter/PyCharm
  • Experience with relational and NoSQL databases (e.g., MSSQL, MongoDB, Redshift, PostgreSQL, Redis)
  • Implementing machine learning models using Python/R for best performance
  • Working with time series data & analyzing large data sets
  • Implementing financial strategies in Python and generating reports to analyze the strategy results

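As a sketch of the last two bullets (time-series work plus strategy implementation and reporting), here is a minimal moving-average crossover on hypothetical prices, in plain Python. A real implementation would use Pandas rolling windows over actual market data; the price series and window sizes here are illustrative only:

```python
def moving_average(prices, window):
    """Simple trailing moving average; None until the window fills."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window : i + 1]) / window)
    return out

def crossover_signals(prices, fast=2, slow=3):
    """'buy' when the fast MA crosses above the slow MA, 'sell' on the reverse."""
    f, s = moving_average(prices, fast), moving_average(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if None in (f[i - 1], s[i - 1], f[i], s[i]):
            continue  # not enough history yet for both averages
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, "buy"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, "sell"))
    return signals

prices = [100, 99, 98, 101, 105, 104, 100, 96]
for day, action in crossover_signals(prices):
    print(f"day {day}: {action}")
```

The "report" a strategy like this generates is just the signal list joined against realized prices; backtesting libraries automate that join, but the core loop is no more than the above.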
 

Primary Responsibilities:

  • Writing algorithms to deal with financial data and Implementing financial strategies in (Python, SQL) and generating reports to analyze the strategy results.

 

Preferred educational qualification: Bachelor's degree

Required Knowledge:

  • Highly skilled in SQL, Python, Pandas, NumPy, machine learning, predictive modelling, algorithm design, and OOP concepts
  • 2-7 years of full-time working experience in a core SQL/Python role (non-support)
  • Bachelor's degree in Engineering, equivalent, or higher education
Numantra Technologies

Posted by nisha mattas
Remote, Mumbai, Powai
2 - 12 yrs
₹8L - ₹18L / yr
ADF
PySpark
Jupyter Notebook
Big Data
Windows Azure
+3 more
  • Data pre-processing, data transformation, data analysis, and feature engineering
  • Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.)

Required skills:

  • Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
  • Fluency in Python (Pandas), PySpark, SQL, or similar
  • Azure Data Factory experience (min. 12 months)
  • Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
  • Experience in production optimization and end-to-end performance tracing (technical root-cause analysis)
  • Ability to work independently, with demonstrated experience in project or program management
  • Azure experience; ability to translate data scientists' Python code and make it efficient for cloud deployment (production)