Jupyter Notebook Jobs in Delhi, NCR and Gurgaon
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python experience for pipeline and automation development
- 4+ years of experience with AWS cloud, including in recent roles
- Company background: product companies preferred; exceptions considered for service-company candidates with strong MLOps and AWS depth
Preferred:
- Hands-on experience with Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (fixed + variable).
- Are you available for a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); see the illustrative DAG sketch after this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
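To make the orchestration responsibility above concrete, here is a minimal sketch of a daily training DAG, assuming Airflow 2.4+ (as available on MWAA). The DAG name, bucket paths, and task bodies are illustrative placeholders rather than an actual pipeline.

```python
# Minimal sketch of a daily ML training DAG (assumes Airflow 2.4+ / MWAA).
# DAG name, S3 paths, and task bodies are placeholders for illustration only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["ml"])
def daily_model_training():
    @task
    def extract_features() -> str:
        # Pull training data, e.g. from S3/Athena, and return a dataset URI.
        return "s3://example-bucket/features/latest.parquet"  # placeholder path

    @task
    def train_model(dataset_uri: str) -> str:
        # Launch training (e.g. on EMR/Spark or a container) and return a model URI.
        return f"s3://example-bucket/models/model-from-{dataset_uri.rsplit('/', 1)[-1]}"

    @task
    def validate_and_register(model_uri: str) -> None:
        # Gate deployment on validation metrics before registering the model.
        print(f"validated and registered {model_uri}")

    validate_and_register(train_model(extract_features()))


daily_model_training()
```

On MWAA, a file like this is simply placed in the environment's DAGs folder in S3 for the scheduler to pick up.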
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana (see the drift-metric sketch after this list)
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
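As a small illustration of the monitoring stack listed above, the sketch below publishes a data-drift statistic to CloudWatch with boto3, where CloudWatch alarms or Grafana could alert on it. The namespace, metric name, and the choice of a two-sample KS test are assumptions for the example, not a prescribed standard.

```python
# Sketch: push a per-feature drift statistic to CloudWatch (assumed namespace
# "MLObservability" and metric "FeatureDriftKS" are illustrative only).
import boto3
from scipy.stats import ks_2samp


def report_feature_drift(train_sample, live_sample, feature_name: str) -> float:
    """Compare a live feature sample against the training distribution and
    publish the drift statistic so dashboards/alarms can track it."""
    statistic, _p_value = ks_2samp(train_sample, live_sample)  # two-sample KS test

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="MLObservability",  # assumed namespace
        MetricData=[{
            "MetricName": "FeatureDriftKS",
            "Dimensions": [{"Name": "Feature", "Value": feature_name}],
            "Value": float(statistic),
        }],
    )
    return float(statistic)
```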
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Review Criteria
- Strong AI/ML Test Engineer
- 5+ years of overall experience in Testing/QA
- 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
- Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
- Must have strong experience designing AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
- Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
- Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
- Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
- Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
- Candidate must be based within Delhi NCR (100 km radius)
Preferred
- Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.
Job Specific Criteria
- CV Attachment is mandatory
- Have you worked with large datasets for AI/ML testing?
- Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
- Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
- If you are not based in Delhi, are you willing to relocate, and why?
- Are you available for a face-to-face round?
Role & Responsibilities
- 5+ years’ experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models.
- Proven expertise in validating AI models for accuracy, bias, explainability, and performance, ensuring decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent.
- Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules.
- Strong data science skills, equipped to design test strategies for AI systems, including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers).
- Proficient in data science for defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter), with the ability to integrate them into CI/CD pipelines (see the PyTest sketch after this list).
- Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring.
- Education & Certifications: Bachelor’s/Master’s in Engineering, CS/IT, Data Science, or equivalent.
- Preferred Certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials.
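As an illustration of the testing approach described above, here is a minimal PyTest sketch covering an accuracy floor, a simple group-bias gate, and a boundary/adversarial input check. The synthetic dataset, the stand-in LogisticRegression model, and the 0.80 / 0.10 thresholds are assumptions for the example only, not the real system under test.

```python
# Minimal PyTest sketch: accuracy floor, group-bias gate, and boundary input check.
# Synthetic data, stand-in model, and thresholds are illustrative assumptions.
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


@pytest.fixture(scope="module")
def model_and_data():
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    groups = np.random.RandomState(0).choice(["buyer", "seller"], size=len(y))
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model, X, y, groups


def test_accuracy_floor(model_and_data):
    model, X, y, _ = model_and_data
    accuracy = (model.predict(X) == y).mean()
    assert accuracy >= 0.80, f"accuracy {accuracy:.3f} below the agreed floor"


def test_group_bias_gap(model_and_data):
    # Positive-prediction rates should not diverge too much across user groups.
    model, X, _, groups = model_and_data
    preds = model.predict(X)
    rates = {g: preds[groups == g].mean() for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    assert gap <= 0.10, f"prediction-rate gap across groups is {gap:.3f}"


def test_boundary_and_adversarial_inputs(model_and_data):
    # Extreme, out-of-range feature values should still score without NaNs.
    model, X, _, _ = model_and_data
    extreme = np.full((1, X.shape[1]), 1e9)
    assert np.isfinite(model.predict_proba(extreme)).all()
```

Checks like these can run alongside regular unit tests in a CI/CD pipeline so model changes are gated the same way as code changes.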

