
50+ Machine Learning (ML) Jobs in India

Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
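For illustration only, a minimal sketch of how such a policy-as-code gate could be wired into a pipeline step, assuming the Checkov CLI is installed; the target directory, output handling, and exit policy are assumptions rather than a prescribed setup:

```python
# Hypothetical CI gate: fail the build when Checkov reports failed IaC checks.
# Assumes the `checkov` CLI is on PATH and `infra/` holds the Terraform code.
import json
import subprocess
import sys


def count_failed_checks(target_dir: str = "infra/") -> int:
    """Run Checkov against an IaC directory and count failed checks from its JSON report."""
    result = subprocess.run(
        ["checkov", "-d", target_dir, "--output", "json"],
        capture_output=True,
        text=True,
    )
    try:
        report = json.loads(result.stdout)
    except json.JSONDecodeError:
        print(result.stdout or result.stderr)
        sys.exit(2)  # unparsable output is treated as a hard failure
    # Checkov may emit one report dict or a list of them (one per framework).
    reports = report if isinstance(report, list) else [report]
    return sum(len(r.get("results", {}).get("failed_checks", [])) for r in reports)


if __name__ == "__main__":
    failures = count_failed_checks()
    print(f"Checkov failed checks: {failures}")
    sys.exit(1 if failures else 0)
```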

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
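To make the Airflow/MWAA secrets point concrete, a hedged sketch in Airflow 2.x TaskFlow style that reads credentials from AWS Secrets Manager inside the task that needs them, rather than hardcoding them in DAG code or pushing them through XCom; the secret name and schedule are hypothetical:

```python
# Illustrative DAG only: credentials are fetched at runtime from AWS Secrets Manager
# and never stored in DAG code, Airflow Variables, XCom, or logs.
import json
from datetime import datetime

import boto3
from airflow.decorators import dag, task


@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False, tags=["security"])
def secure_ingest():
    @task
    def ingest_with_secret() -> None:
        client = boto3.client("secretsmanager")
        secret = json.loads(
            client.get_secret_value(SecretId="prod/ml/db-credentials")["SecretString"]  # hypothetical secret name
        )
        # Use the credentials only for the lifetime of this task.
        print(f"Connecting as {secret['username']} (password withheld from logs)")

    ingest_with_secret()


secure_ingest()
```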

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
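As one hedged way to realize the drift-monitoring bullet above, a small sketch that computes a Population Stability Index (PSI) for a single feature and exposes it as a Prometheus gauge that Grafana can alert on; the bin count, port, and synthetic data are placeholders:

```python
# Sketch: export a per-feature PSI drift metric for Prometheus/Grafana alerting.
import time

import numpy as np
from prometheus_client import Gauge, start_http_server

DRIFT_GAUGE = Gauge("feature_psi", "Population Stability Index per feature", ["feature"])


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of one numeric feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / max(len(expected), 1)
    a_pct = np.histogram(actual, bins=edges)[0] / max(len(actual), 1)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


if __name__ == "__main__":
    start_http_server(9105)  # Prometheus scrapes this port
    baseline = np.random.normal(0, 1, 10_000)     # stand-in for the training distribution
    while True:
        live = np.random.normal(0.2, 1.1, 1_000)  # stand-in for recent inference traffic
        DRIFT_GAUGE.labels(feature="example_feature").set(psi(baseline, live))
        time.sleep(60)
```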


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Gurugram, Bengaluru (Bangalore), Hyderabad, Mumbai
5 - 12 yrs
₹35L - ₹52L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)

Strong Senior Data Scientist (AI/ML/GenAI) Profile

Mandatory (Experience 1) – Must have a minimum of 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production

Mandatory (Experience 2) – Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.

Mandatory (Experience 3) – Must have 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.

Mandatory (Experience 4) – Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

Mandatory (Note): Budget for up to 5 years of experience is ₹25L, up to 7 years is ₹35L, and up to 12 years is ₹45L. Also, the client can pay a maximum of 30-40% based on candidature.
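For context on the LoRA/QLoRA requirement in Experience 3 above, a minimal illustrative sketch of attaching a LoRA adapter with Hugging Face PEFT; the base model, rank, and target modules are assumptions, not a prescribed recipe:

```python
# Illustrative only: wrap a small causal LM with a LoRA adapter before fine-tuning.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"      # hypothetical choice of base model
tokenizer = AutoTokenizer.from_pretrained(base_id)  # needed later for the training loop
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (Llama-style names)
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base weights
# From here the wrapped model trains with a standard Trainer or custom loop.
```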

ClanX
Posted by Ariba Khan
Bengaluru (Bangalore)
5 - 7 yrs
Up to ₹45L / yr (varies)
Python
Natural Language Processing (NLP)
Machine Learning (ML)
OCR
Large Language Models (LLM)

This opportunity through ClanX is for Parspec (direct payroll with Parspec)


Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.


Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.


Company Details:

Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.


Requirements:

  • Bachelor’s or Master’s degree in Science or Engineering.
  • 5-7 years of experience in ML and data science.
  • Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
  • Strong understanding of foundational models and transformers.
  • Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
  • Recent experience working with large datasets.
  • Python experience with ML libraries like numpy, pandas, sklearn, matplotlib, nltk and others.
  • Experience with frameworks like Hugging Face, Spacy, BERT, TensorFlow, PyTorch, OpenRouter or Modal.



Good to haves

  • Experience building scalable AI pipelines for extracting structured data from unstructured sources.
  • Experience with cloud platforms, containerization, and managed AI services.
  • Knowledge of MLOps practices, CI/CD, monitoring, and governance.
  • Experience with AWS or Django.
  • Familiarity with databases and web application architecture.
  • Experience with OCR or PDF tools.


Responsibilities:

  • Design, develop, and deploy NLP, CV, and recommendation systems
  • Train and implement deep learning models
  • Research and explore novel ML architectures
  • Build and maintain end-to-end ML pipelines
  • Collaborate across product, design, and engineering teams
  • Work closely with business stakeholders to shape product features
  • Ensure high scalability and performance of AI solutions
  • Uphold best practices in engineering and contribute to a culture of excellence
  • Actively participate in R&D and innovation within the team


Interview Process

  1. Technical interview (coding, ML concepts, project walkthrough)
  2. System design and architecture round
  3. Culture fit and leadership interaction
  4. Final offer discussion
ClanX
Posted by Ariba Khan
Bengaluru (Bangalore)
3 - 4.5 yrs
Up to ₹25L / yr (varies)
Machine Learning (ML)
Python
Computer Vision
Natural Language Processing (NLP)
TensorFlow

This opportunity through ClanX is for Parspec (direct payroll with Parspec)


Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.


Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.


Company Details:

Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.


Requirements:

  • 3 to 4 years of relevant experience in ML and AI roles
  • Strong grasp of ML, deep learning, and model deployment
  • Proficient in Python and libraries like numpy, pandas, sklearn, etc.
  • Experience with TensorFlow/Keras or PyTorch
  • Familiar with AWS/GCP platforms
  • Strong coding skills and ability to ship production-ready solutions
  • Bachelor's/Master's in Engineering or related field
  • Curious, self-driven, and a fast learner
  • Passionate about NLP, LLMs, and state-of-the-art AI technologies
  • Comfortable with collaboration across globally distributed teams

Preferred (Not Mandatory):

  • Experience with Django, databases, and full-stack environments
  • Familiarity with OCR and PDF processing
  • Competitive programming or Kaggle participation
  • Prior work with distributed teams across time zones


Responsibilities:

  • Design, develop, and deploy NLP, CV, and recommendation systems
  • Train and implement deep learning models
  • Research and explore novel ML architectures
  • Build and maintain end-to-end ML pipelines
  • Collaborate across product, design, and engineering teams
  • Work closely with business stakeholders to shape product features
  • Ensure high scalability and performance of AI solutions
  • Uphold best practices in engineering and contribute to a culture of excellence
  • Actively participate in R&D and innovation within the team


Interview Process

  1. Technical interview (coding, ML concepts, project walkthrough)
  2. System design and architecture round
  3. Culture fit and leadership interaction
  4. Final offer discussion
AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹0.1L - ₹0.1L / yr
Python
MLOps
Apache Airflow
Apache Spark
AWS CloudFormation

Review Criteria

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay with a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue) (see the sketch after this list).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
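As flagged in the pipeline bullet above, a small PySpark sketch of the kind of distributed feature-preparation step an Airflow task might submit to Spark on EMR/Glue; the S3 paths, event schema, and aggregations are placeholders:

```python
# Sketch of a distributed feature-preparation job for a training pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical input path
features = (
    events
    .filter(F.col("event_type") == "click")
    .groupBy("user_id")
    .agg(
        F.count("*").alias("clicks_30d"),
        F.approx_count_distinct("campaign_id").alias("distinct_campaigns"),
    )
)
features.write.mode("overwrite").parquet("s3://example-bucket/features/clicks/")  # hypothetical output path
spark.stop()
```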

 

Ideal Candidate

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field.

 

Newpage Solutions
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8+ yrs
Up to ₹45L / yr (varies)
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Generative AI
Django

About Newpage Solutions

Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organizations—including pharmaceutical, biotech, and healthcare leaders—to build transformative AI and data-driven technologies addressing real-world health challenges.

From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.

We are proud to be Great Place to Work® certified for three consecutive years, hold a top Glassdoor rating, and were named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.

We foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.

Newpage looks for candidates who are invested in long-term impact. Applications with a pattern of frequent job changes may not align with the values we prioritize.


Your Mission

We’re seeking a highly experienced, technically exceptional AI Development Lead to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation—from model selection and orchestration design to scalable backend implementation—while collaborating with cross-functional teams to transform AI research into production-ready solutions.

This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.


What You’ll Do

  • Architect, build, and optimize production-grade Generative AI applications using modern frameworks such as LangChain, LlamaIndex, Semantic Kernel, or custom orchestration layers.
  • Lead the design of Agentic AI frameworks (Agno, AutoGen, CrewAI, etc.), enabling intelligent, goal-driven workflows with memory, reasoning, and contextual awareness.
  • Develop and deploy Retrieval-Augmented Generation (RAG) systems integrating LLMs, vector databases, and real-time data pipelines (see the sketch after this list).
  • Design robust prompt engineering and refinement frameworks to improve reasoning quality, adaptability, and user relevance.
  • Deliver high-performance backend systems using Python (FastAPI, Flask, or similar) aligned with SOLID principles, OOP, and clean architecture.
  • Own the complete SDLC, including design, implementation, code reviews, testing, CI/CD, observability, and post-deployment monitoring.
  • Use AI-assisted environments (e.g., Cursor, GitHub Copilot, Claude Code) to accelerate development while maintaining code quality and maintainability.
  • Collaborate closely with MLOps engineers to containerize, scale, and deploy models using Docker, Kubernetes, and modern CI/CD pipelines.
  • Integrate APIs from OpenAI, Anthropic, Cohere, Mistral, or open-source LLMs (Llama 3, Mixtral, etc.).
  • Leverage vector databases such as FAISS, Pinecone, Weaviate, or Chroma for semantic search, RAG, and context retrieval.
  • Develop custom tools, libraries, and frameworks that improve development velocity and reliability across AI teams.
  • Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
  • Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
  • Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.
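To illustrate the RAG responsibility flagged above, a deliberately framework-agnostic sketch of retrieval plus grounded prompting; embed() and generate() stand in for whichever embedding model and LLM API the stack actually uses, so they are assumptions rather than a specific vendor's interface:

```python
# Minimal RAG skeleton: cosine-similarity retrieval over precomputed chunk embeddings,
# followed by a grounded prompt to an LLM. Real systems would swap in a vector DB.
from typing import Callable, List, Tuple

import numpy as np


def retrieve(
    query: str,
    chunks: List[str],
    chunk_vectors: np.ndarray,               # shape (n_chunks, dim), precomputed offline
    embed: Callable[[str], np.ndarray],      # assumed embedding function
    k: int = 3,
) -> List[Tuple[str, float]]:
    """Return the top-k chunks by cosine similarity to the query embedding."""
    q = embed(query)
    sims = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-sims)[:k]
    return [(chunks[i], float(sims[i])) for i in top]


def answer(query: str, context: List[str], generate: Callable[[str], str]) -> str:
    """Compose a grounded prompt and delegate generation to the (assumed) LLM client."""
    prompt = (
        "Answer using only the context below.\n\n"
        + "\n---\n".join(context)
        + f"\n\nQuestion: {query}"
    )
    return generate(prompt)
```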

What You Bring

  • 8+ years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
  • Python experience with strong grasp of OOP, SOLID, and scalable microservice architecture.
  • Proven track record developing and deploying GenAI/LLM-based systems in production.
  • Hands-on work with LangChain, LlamaIndex, or custom orchestration frameworks.
  • Deep familiarity with OpenAI, Anthropic, Hugging Face, or open-source LLM APIs.
  • Advanced understanding of prompt construction, optimization, and evaluation techniques.
  • End-to-end implementation experience using vector databases and retrieval pipelines.
  • Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
  • Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
  • Are obsessed with clean code, system scalability, and performance optimization.
  • Can balance rapid prototyping with long-term maintainability.
  • Excel at working independently while collaborating effectively across teams.
  • Stay ahead of the curve on new AI models, frameworks, and best practices.
  • Have a founder’s mindset and love solving ambiguous, high-impact technical challenges.
  • Bachelor’s or Master’s in Computer Science, Machine Learning, or a related technical discipline.


What We Offer

At Newpage, we’re building a company that works smart and grows with agility—where driven individuals come together to do work that matters. We offer:

  • A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
  • Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
  • Balance that lasts – We respect your time and support a healthy integration of work and life.
  • Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
  • Meaningful rewards – Competitive compensation that recognizes both contribution and potential.
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹15L - ₹30L / yr
Machine Learning (ML)
Amazon Web Services (AWS)
Kubernetes
ECS
Amazon Redshift

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.

 

Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc., for troubleshooting; other tech touchpoints are ScyllaDB (similar to BigTable), OpenSearch, and Neo4j graph
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.

 

Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the Sagemaker pipeline with ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)

 

Skills: AWS, AWS Cloud, Amazon Redshift, EKS

 

Must-Haves

Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker

Notice period: 0 to 15 days only

Hybrid work mode: 3 days in office, 2 days at home

Tecblic Private Limited
Posted by Priya Khatri
Ahmedabad
4 - 7 yrs
₹5L - ₹19L / yr
Generative AI
Large Language Models (LLM)
Machine Learning (ML)
Deep Learning
Python

Job Description: Machine Learning Engineer – LLM and Agentic AI

Location: Ahmedabad

Experience: 4 to 7 years

Employment Type: Full-Time

________________________________________

About Us

Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, machine learning, and pushing the boundaries of Agentic AI, we’d love to have you on board.

________________________________________

Key Responsibilities

• Research and Development: Research, design, and fine-tune machine learning models, with a focus on Large Language Models (LLMs) and Agentic AI systems.

• Model Optimization: Fine-tune and optimize pre-trained LLMs for domain-specific use cases, ensuring scalability and performance.

• Integration: Collaborate with software engineers and product teams to integrate AI models into customer-facing applications and platforms.

• Data Engineering: Perform data preprocessing, pipeline creation, feature engineering, and exploratory data analysis (EDA) to prepare datasets for training and evaluation.

• Production Deployment: Design and implement robust model deployment pipelines, including monitoring and managing model performance in production.

• Experimentation: Prototype innovative solutions leveraging cutting-edge techniques like reinforcement learning, few-shot learning, and generative AI.

• Technical Mentorship: Mentor junior team members on best practices in machine learning and software engineering.

________________________________________

Requirements

Core Technical Skills:

• Proficiency in Python for machine learning and data science tasks.

• Expertise in ML frameworks and libraries like PyTorch, TensorFlow, Hugging Face, Scikit-learn, or similar.

• Solid understanding of Large Language Models (LLMs) such as GPT, T5, BERT, or Bloom, including fine-tuning techniques.

• Experience working on NLP tasks such as text classification, entity recognition, summarization, or question answering.

• Knowledge of deep learning architectures, such as transformers, RNNs, and CNNs.

• Strong skills in data manipulation using tools like Pandas, NumPy, and SQL.

• Familiarity with cloud services like AWS, GCP, or Azure, and experience deploying ML models using tools like Docker, Kubernetes, or serverless functions.

Additional Skills (Good to Have):

• Exposure to Agentic AI (e.g., autonomous agents, decision-making systems) and practical implementation.

• Understanding of MLOps tools (e.g., MLflow, Kubeflow) to streamline workflows and ensure production reliability.

• Experience with generative AI models (GANs, VAEs) and reinforcement learning techniques.

• Hands-on experience in prompt engineering and few-shot/fine-tuned approaches for LLMs.

• Familiarity with vector databases like Pinecone, Weaviate, or FAISS for efficient model retrieval.

• Version control (Git) and familiarity with collaborative development practices.

General Skills:

• Strong analytical and mathematical background, including proficiency in linear algebra, statistics, and probability.

• Solid understanding of algorithms and data structures to solve complex ML problems.

• Ability to handle and process large datasets using distributed frameworks like Apache Spark or Dask (optional but useful).

________________________________________

Soft Skills:

• Excellent problem-solving and critical-thinking abilities.

• Strong communication and collaboration skills to work with cross-functional teams.

• Self-motivated, with a continuous learning mindset to keep up with emerging technologies.


Pune
6 - 8 yrs
₹45L - ₹50L / yr
Python
Databricks
Machine Learning (ML)
Artificial Intelligence (AI)
CI/CD

We are looking for a Senior AI / ML Engineer to join our fast-growing team and help build AI-driven data platforms and intelligent solutions. If you are passionate about AI, data engineering, and building real-world GenAI systems, this role is for you!



🔧 Key Responsibilities

• Develop and deploy AI/ML models for real-world applications

• Build scalable pipelines for data processing, training, and evaluation

• Work on LLMs, RAG, embeddings, and agent workflows

• Collaborate with data engineers, product teams, and software developers

• Write clean, efficient Python code and ensure high-quality engineering practices

• Handle model monitoring, performance tuning, and documentation



Required Skills

• 2–5 years of experience in AI/ML engineering

• Strong knowledge of Python, TensorFlow/PyTorch

• Experience with LLMs, GenAI, RAG, or NLP

• Knowledge of Databricks, MLOps or cloud platforms (AWS/Azure/GCP)

• Good understanding of APIs, distributed systems, and data pipelines



🎯 Good to Have

• Experience in healthcare, SaaS, or big data

• Exposure to Databricks Mosaic AI

• Experience building AI agents

Clink
Posted by Hari Krishna
Hyderabad, Bengaluru (Bangalore)
0 - 2 yrs
₹4L - ₹8L / yr
Artificial Intelligence (AI)
Large Language Models (LLM)
Python
Machine Learning (ML)
FastAPI

Role Overview

Join our core tech team to build the intelligence layer of Clink's platform. You'll architect AI agents, design prompts, build ML models, and create systems powering personalized offers for thousands of restaurants. High-growth opportunity working directly with founders, owning critical features from day one.


Why Clink?

Clink revolutionizes restaurant loyalty using AI-powered offer generation and customer analytics:

  • ML-driven customer behavior analysis (Pattern detection)
  • Personalized offers via LLMs and custom AI agents
  • ROI prediction and forecasting models
  • Instagram marketing rewards integration


Tech Stack:

  • Python
  • FastAPI
  • PostgreSQL
  • Redis
  • Docker
  • LLMs


You Will Work On:

AI Agents: Design and optimize AI agents

ML Models: Build redemption prediction, customer segmentation, ROI forecasting

Data & Analytics: Analyze data, build behavior pattern pipelines, create product bundling matrices

System Design: Architect scalable async AI pipelines, design feedback loops, implement A/B testing

Experimentation: Test different LLM approaches, explore hybrid LLM+ML architectures, prototype new capabilities


Must-Have Skills

Technical: 0-2 years AI/ML experience (projects/internships count), strong Python, LLM API knowledge, ML fundamentals (supervised learning, clustering), Pandas/NumPy proficiency

Mindset: Extreme curiosity, logical problem-solving, builder mentality (side projects/hackathons), ownership mindset

Nice to Have: Pydantic, FastAPI, statistical forecasting, PostgreSQL/SQL, scikit-learn, food-tech/loyalty domain interest

Hiret Consulting
Posted by Sanikha M
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹15L / yr
Machine Learning (ML)
Android Development
Fullstack Developer
Kotlin
Docker

Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.

● Technical Skills:

○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.

○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.

○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.

○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).

○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.

● Soft Skills:

○ Experience in hiring team members.

○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.

○ Excellent communication and leadership skills, with the ability to mentor junior engineers.

○ A strong desire to use technology for social good.


Preferred Qualifications

● Experience working in a startup or smaller team environment.

● Familiarity with the healthcare or public health sector.

● Experience in developing applications for low-resource environments.

● Experience with data management in privacy and security-sensitive applications.

Ekloud INC
Posted by Kratika Agarwal
Bengaluru (Bangalore)
4 - 6 yrs
₹5L - ₹14L / yr
Machine Learning (ML)
Python
GPU frameworks
TensorFlow
Keras

We are looking for enthusiastic engineers passionate about building and maintaining solutioning platform components on cloud and Kubernetes infrastructure. The ideal candidate will go beyond traditional SRE responsibilities by collaborating with stakeholders, understanding the applications hosted on the platform, and designing automation solutions that enhance platform efficiency, reliability, and value.

[Technology and Sub-technology]

• ML Engineering / Modelling

• Python Programming

• GPU frameworks: TensorFlow, Keras, PyTorch, etc.

• Cloud-based ML development and deployment on AWS or Azure


[Qualifications]

• Bachelor’s Degree in Computer Science, Computer Engineering or equivalent technical degree

• Proficient programming knowledge in Python or Java and ability to read and explain open source codebase.

• Good foundation of Operating Systems, Networking and Security Principles

• Exposure to DevOps tools, with experience integrating platform components into Sagemaker/ECR and AWS Cloud environments.

• 4-6 years of relevant experience working on AI/ML projects


[Primary Skills]:

• Excellent analytical & problem solving skills.

• Exposure to Machine Learning and GenAI technologies.

• Understanding of and hands-on experience with AI/ML modeling, libraries, frameworks, and tools (TensorFlow, Keras, PyTorch, etc.)

• Strong knowledge of Python, SQL/NoSQL

• Cloud-based ML development and deployment on AWS or Azure

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
Data Science
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI
Deep Learning

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


BOSCH MOBILITY

Agency job
Bengaluru (Bangalore)
8 - 16 yrs
₹18L - ₹20L / yr
Camera
Sensors
Deep Learning
Artificial Intelligence (AI)

Job Title: Sensor Expert – MLFF (Multi-Lane Free Flow)

Engagement Type: Consultant / External Associate

Organization: Bosch - MPIN

Location: Bangalore, India


Purpose of the Role:


To provide technical expertise in sensing technologies for MLFF (Multi-Lane Free Flow) and ITMS (Intelligent Traffic Management System) solutions. The role focuses on camera systems, AI/ML based computer vision, and multi-sensor integration (camera, RFID, radar) to drive solution performance, optimization, and business success.


Key Responsibilities:

• Lead end-to-end sensor integration for MLFF and ITMS platforms.


• Manage camera systems, ANPR, and data packet processing.


• Apply AI/ML techniques for performance optimization in computer vision.


• Collaborate with System Integrators and internal teams on architecture and implementation.


• Support B2G proposals (smart city, mining, infrastructure projects) with domain expertise.


• Drive continuous improvement in deployed MLFF solutions. 


Key Competencies:


• Deep understanding of camera and sensor technologies, AI/ML for vision systems, and system integration.


• Experience in PoC development and solution optimization.


• Strong analytical, problem-solving, and collaboration skills.

• Familiarity with B2G environments and public infrastructure tenders preferred.


Qualification & Experience:


• Bachelor’s/Master’s in Electronics, Electrical, or Computer Science.


• 8–10 years of experience in camera technology, AI/ML, and sensor integration.


• Proven track record in system design, implementation, and field optimization.

GuppShupp
Posted by Nitesh Singh
Bengaluru (Bangalore)
1 - 2 yrs
₹8L - ₹12L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Python
PostgreSQL
Vector database

🚀 Join GuppShupp: Build Bharat's First AI Lifelong Friend

GuppShupp's mission is nothing short of building Bharat's First AI Lifelong Friend. This is more than just a chatbot—it's about creating a truly personalized, consistently available companion that understands and grows with the user over a lifetime. We are pioneering this deeply personal experience using cutting-edge Generative AI.

We're hiring a Founding AI Engineer (1+ Year Experience) to join our small team of A+ builders and craft the foundational LLM and infrastructure behind this mission.

If you are passionate about:

  • Deep personalization and managing complex user state/memory.
  • Building high-quality, high-throughput AI tools.
  • Next-level infrastructure at an incredible scale (millions of users).


What you'll do (responsibilities)

We're looking for an experienced individual contributor who enjoys working alongside other experienced engineers and iterating on AI.

Prompt Engineering & Testing

  • Write, test, and iterate numerous prompt variations.
  • Identify and fix failures, biases, or edge cases in AI responses.

Advanced LLM Development

  • Engineer solutions for long-term conversational memory and statefulness in LLMs.
  • Implement techniques (e.g., retrieval-augmented generation (RAG) or summarization) to effectively manage and extend the context window for complex tasks.
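A hedged sketch of the summarization approach mentioned above: once the running transcript exceeds a budget, the oldest turns are folded into a compact summary so the effective context stays bounded; summarize() stands in for an LLM call and the character budget is an arbitrary proxy for tokens:

```python
# Rolling-summary conversational memory (illustrative only).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ConversationMemory:
    summarize: Callable[[str], str]   # assumed LLM-backed summarizer
    max_turn_chars: int = 4000        # crude stand-in for a token budget
    summary: str = ""
    turns: List[str] = field(default_factory=list)

    def add_turn(self, speaker: str, text: str) -> None:
        """Append a turn, folding the oldest turns into the summary when over budget."""
        self.turns.append(f"{speaker}: {text}")
        while sum(len(t) for t in self.turns) > self.max_turn_chars and len(self.turns) > 2:
            oldest = self.turns.pop(0)
            self.summary = self.summarize(f"{self.summary}\n{oldest}".strip())

    def as_context(self) -> str:
        """The bounded context that gets prepended to the next prompt."""
        header = f"Summary of earlier conversation:\n{self.summary}\n\n" if self.summary else ""
        return header + "\n".join(self.turns)
```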

Collaboration & Optimization

  • Work with product and growth teams to turn feature goals into effective technical prompts.
  • Optimize prompts for diverse use cases (e.g., chat, content, personalization).

LLM Fine-Tuning & Management

  • Prepare, clean, and format datasets for training.
  • Run fine-tuning jobs on smaller, specialized language models.
  • Assist in deploying, monitoring, and maintaining these models



What we're looking for (qualifications)

You are an AI Engineer who has successfully shipped systems in this domain for over a year—you won't need ramp-up time. We prioritize continuous learning and hands-on skill development over formal qualifications. Crucially, we are looking for a teammate driven by a sense of duty to the user and a passion for taking full ownership of their contributions.

NA

Agency job
via eTalent Services by JaiPrakash Bharti
Remote only
3 - 8 yrs
₹5L - ₹14L / yr
Python
Machine Learning (ML)
Windows Azure
TensorFlow
MLflow

Role: Azure AI Tech Lead

Experience: 3.5-7 years

Location: Remote / Noida (NCR)

Notice Period: Immediate to 15 days

 

Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana

 

JOB DESCRIPTION

As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.

 

Key Responsibilities:

  • Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
  • Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
  • Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
  • Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
  • Collaborate cross-functionally to translate business goals into innovative AI solutions.
  • Enforce governance, responsible AI practices, and performance optimization standards.
  • Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.

 

Qualifications:

  • Bachelor’s or Master’s in Computer Science or related field.
  • 3.5–7 years of experience delivering end-to-end AI/ML solutions.
  • Strong expertise in Azure AI ecosystem and production-grade model deployment.
  • Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
  • Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.


Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
2 - 3 yrs
Best in industry
Python
Django
Flask
Data Structures
Algorithms

We're seeking an AI/ML Engineer to join our team-

As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop. Your role will involve researching cutting-edge algorithms, data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.


Responsibilities

  • Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI (see the sketch after this list)
  • AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
  • Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
  • Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
  • Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
  • Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
  • Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
  • Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
  • Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
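As a concrete reading of "engineering APIs around OpenAI" from the first responsibility above, a thin FastAPI wrapper around a chat-completion call; it assumes the openai>=1.0 Python client with OPENAI_API_KEY in the environment, and the model name and endpoint shape are placeholders:

```python
# Illustrative service: wrap an LLM call behind a typed HTTP endpoint.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


class SummariseRequest(BaseModel):
    text: str
    max_words: int = 100


@app.post("/summarise")
def summarise(req: SummariseRequest) -> dict:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": f"Summarise the user's text in at most {req.max_words} words."},
            {"role": "user", "content": req.text},
        ],
    )
    return {"summary": completion.choices[0].message.content}
```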

Requirements

  • Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
  • Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
  • Proficiency in programming languages commonly used for AI/ML. Preferably Python
  • Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
  • Familiarity with popular AI/ML Models such as GPT3, GPT4, Llama2, BERT etc.
  • Strong understanding of machine learning algorithms, statistics, and data structures
  • Experience with data preprocessing, data wrangling, and feature engineering
  • Knowledge of deep learning architectures, neural networks, and transfer learning
  • Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
  • Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
  • Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
  • Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders


Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹25L - ₹30L / yr
Machine Learning (ML)
AWS CloudFormation
Online machine learning
Amazon Web Services (AWS)
ECS

MUST-HAVES: 

  • Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
  • Notice period: 0 to 15 days only
  • Hybrid work mode: 3 days in office, 2 days at home


SKILLS: AWS, AWS Cloud, Amazon Redshift, EKS


ADDITIONAL GUIDELINES:

  • Interview process: - 2 Technical round + 1 Client round
  • 3 days in office, Hybrid model. 


CORE RESPONSIBILITIES:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


SKILLS:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc., for troubleshooting; other tech touchpoints are ScyllaDB (similar to BigTable), OpenSearch, and Neo4j graph
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


REQUIRED EXPERIENCE:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Inferigence Quotient
Posted by Neeta Trivedi
Bengaluru (Bangalore)
2 - 3 yrs
₹6L - ₹12L / yr
Python
Deep Learning
Machine Learning (ML)
C++
CUDA

AI-based systems design and development, covering the entire pipeline from image/video ingest, metadata ingest, processing, and encoding to transmission.


Implementation and testing of advanced computer vision algorithms.

Dataset search, preparation, annotation, training, testing, fine tuning of vision CNN models. Multimodal AI, LLMs, hardware deployment, explainability.


Detailed analysis of results. Documentation, version control, client support, upgrades.

Wissen Technology
Posted by Shivangi Bhattacharyya
Bengaluru (Bangalore)
6 - 8 yrs
Best in industry
Machine Learning (ML)
Artificial Intelligence (AI)
Python
PowerBI
Tableau

Experience- 6 to 8 years

Location- Bangalore


Job Description-


- Extensive experience with machine learning utilizing the latest analytical models in Python. (i.e., experience in generating data-driven insights that play a key role in rapid decision-making and driving business outcomes.)

- Extensive experience using Tableau, table design, PowerApps, Power BI, Power Automate, and cloud environments, or equivalent experience designing/implementing data analysis pipelines and visualization.

- Extensive experience using AI agent platforms. (AI = data analysis: a required skill for data analysts.)

- A statistics major or equivalent understanding of statistical analysis results interpretation.

iDreamCareer
Posted by Recruitment Team
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
10 - 20 yrs
₹40L - ₹65L / yr
MERN Stack
Artificial Intelligence (AI)
Machine Learning (ML)
Retrieval Augmented Generation (RAG)
Amazon Web Services (AWS)

At iDreamCareer, we’re on a mission to democratize career guidance for millions of young learners across India and beyond. Technology is at the heart of this mission — and we’re looking for an Engineering Manager who thrives in high-ownership environments, thinks with an enterprising mindset, and gets excited about solving problems that genuinely change lives.

This is not just a management role. It’s a chance to shape the product, scale the platform, influence the engineering culture, and lead a team that builds with heart and hustle.



As a Director-Engineering here, you will:


  • Lead a talented team of engineers while remaining hands-on with architecture and development.
  • Champion the use of AI/ML, LLM-driven features, and intelligent systems to elevate learner experience.
  • Inspire a culture of high performance, clear thinking, and thoughtful engineering.
  • Partner closely with product, design, and content teams to deliver delightful, meaningful user experiences.
  • Bring structure, clarity, and energy to complex problem-solving.
  • This role is ideal for someone who loves building, mentoring, scaling, and thinking several steps ahead.


Key Responsibilities

Technical Leadership & Ownership

  • Lead end-to-end development across backend, frontend, architecture, and infrastructure in partnership with product and design teams.
  • Stay hands-on with the MERN stack, Python, and AI/ML technologies, while guiding and coaching a high-performance engineering team.
  • Architect, develop, and maintain distributed microservices, event-driven systems, and robust APIs on AWS.


AI/ML Engineering

  • Build and deploy AI-powered features, leveraging LLMs, RAG pipelines, embeddings, vector databases, and model evaluation frameworks.
  • Drive prompt engineering, retrieval optimization, and continuous refinement of AI system performance.
  • Champion the adoption of modern AI coding tools and emerging AI platforms to boost team productivity.


Cloud, Data, DevOps & Scaling

  • Own deployments and auto-scaling on AWS (ECS, Lambda, CloudFront, SQS, SES, ELB, S3).
  • Build and optimize real-time and batch data pipelines using BigQuery and other analytics tools.
  • Implement CI/CD pipelines for Dockerized applications, ensuring strong observability through Prometheus, Loki, Grafana, CloudWatch.
  • Enforce best practices around security, code quality, testing, and system performance.

Collaboration & Delivery Excellence

  • Partner closely with product managers, designers, and QA to deliver features with clarity, speed, and reliability.
  • Drive agile rituals, ensure engineering predictability, and foster a culture of ownership, innovation, and continuous improvement


Required Skills & Experience

  • 8-15 years of experience in full-stack or backend engineering with at least 5+ years leading engineering teams.
  • Strong hands-on expertise in the MERN stack and modern JavaScript/TypeScript ecosystems.
  • 5+ years building and scaling production-grade applications and distributed systems.
  • 2+ years building and deploying AI/ML products — including training, tuning, integrating, and monitoring AI models in production.
  • Practical experience with SQL, NoSQL, vector databases, embeddings, and production-grade RAG systems.
  • Strong understanding of LLM prompt optimization, evaluation frameworks, and AI-driven system design.
  • Hands-on with AI developer tools, automation utilities, and emerging AI productivity platforms.

Preferred Skills

  • Familiarity with LLM orchestration frameworks (LangChain, LlamaIndex, etc.) and advanced tool-calling workflows.
  • Experience building async workflows, schedulers, background jobs, and offline processing systems.
  • Exposure to modern frontend testing frameworks, QA automation, and performance testing.
Read more
KGiSL MICROCOLLEGE
Hiring Recruitment
Posted by Hiring Recruitment
Kerala
1 - 6 yrs
₹1L - ₹6L / yr
Python
Data Science
Deep Learning
Machine Learning (ML)

Job description


Job Title: Python Trainer (Workshop Model Freelance / Part-time)


Location: Thrissur & Ernakulam


Program Duration: 30 or 60 Hours (Workshop Model)


Job Type: Freelance / Contract


About the Role:


We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.


Key Responsibilities:


Conduct offline workshop-style Python training sessions (30 or 60 hours total).


Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.


Customize the curriculum based on learners' skill levels and project needs.


Guide students through mini-projects, assignments, and coding challenges.


Ensure effective knowledge transfer through practical, real-world examples.


Requirements:


Experience: 1-5 years of training or industry experience in Python programming.


Technical Skills: Strong knowledge of Python, including OOP concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.


Prior experience in academic or corporate training preferred.


Excellent communication and presentation skills.


Mode: Offline Workshop (Thrissur / Ernakulam)


Duration: Flexible – 30 Hours or 60 Hours Total


Organization: KGiSL Microcollege


Role: Other


Industry Type: Education / Training


Department: Other


Employment Type: Full Time, Permanent


Role Category: Other


Education


UG: Any Graduate


Key Skills


Data Science, Artificial Intelligence



Read more
KGiSL MICROCOLLEGE
Hiring Recruitment
Posted by Hiring Recruitment
Thrissur
1 - 6 yrs
₹1L - ₹6L / yr
Data Science
Python
Prompt engineering
Machine Learning (ML)

Job description


Job Title: Python Trainer (Workshop Model Freelance / Part-time)


Location: Thrissur & Ernakulam


Program Duration: 30 or 60 Hours (Workshop Model)


Job Type: Freelance / Contract


About the Role:


We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.


Key Responsibilities:


Conduct offline workshop-style Python training sessions (30 or 60 hours total).


Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.


Customize the curriculum based on learners' skill levels and project needs.


Guide students through mini-projects, assignments, and coding challenges.


Ensure effective knowledge transfer through practical, real-world examples.


Requirements:


Experience: 1-5 years of training or industry experience in Python programming.


Technical Skills: Strong knowledge of Python, including OOP concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.


Prior experience in academic or corporate training preferred.


Excellent communication and presentation skills.


Mode: Offline Workshop (Thrissur / Ernakulam)


Duration: Flexible – 30 Hours or 60 Hours Total


Organization: KGiSL Microcollege


Role: Other


Industry Type: Education / Training


Department: Other


Employment Type: Full Time, Permanent


Role Category: Other


Education


UG: Any Graduate


Key Skills


Data Science, Artificial Intelligence



Read more
Mobileum

at Mobileum

1 recruiter
Eman Khan
Posted by Eman Khan
Bengaluru (Bangalore), Mumbai, Gurugram
7yrs+
₹30L - ₹62L / yr
Retrieval Augmented Generation (RAG)
Large Language Models (LLM)
Telecom
Artificial Intelligence (AI)
Machine Learning (ML)
+2 more

About Us

Mobileum is a leading provider of Telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence.

More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs.


Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore and the UK, with a global headcount of over 1,800.


Join Mobileum Team

At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!

Role: GenAI/LLM Engineer – Domain-Specific AI Solutions (Telecom)


About the Job

We are seeking a highly skilled GenAI/LLM Engineer to design, fine-tune, and operationalize Large Language Models (LLMs) for telecom business applications. This role will be instrumental in building domain-specific GenAI solutions, including the development of domain-specific LLMs, to transform telecom operational processes, customer interactions, and internal decision-making workflows.


Roles & Responsibility:

  • Build domain-specific LLMs by curating domain-relevant datasets and training/fine-tuning LLMs tailored for telecom use cases.
  • Fine-tune pre-trained LLMs (e.g., GPT, Llama, Mistral) using telecom-specific datasets to improve task accuracy and relevance.
  • Design and implement prompt engineering frameworks, optimize prompt construction and context strategies for telco-specific queries and processes.
  • Develop Retrieval-Augmented Generation (RAG) pipelines integrated with vector databases (e.g., FAISS, Pinecone) to enhance LLM performance on internal knowledge.
  • Build multi-agent LLM pipelines using orchestration tools (LangChain, LlamaIndex) to support complex telecom workflows.
  • Collaborate cross-functionally with data engineers, product teams, and domain experts to translate telecom business logic into GenAI workflows.
  • Conduct systematic model evaluation focused on minimizing hallucinations, improving domain-specific accuracy, and tracking performance improvements on business KPIs.
  • Contribute to the development of internal reusable GenAI modules, coding standards, and best practices documentation.


Desired Profile

  • Familiarity with multi-modal LLMs (text + tabular/time-series).
  • Experience with OpenAI function calling, LangGraph, or agent-based orchestration.
  • Exposure to telecom datasets (e.g., call records, customer tickets, network logs).
  • Experience with low-latency inference optimization (e.g., quantization, distillation).


Technical skills

  • Hands-on experience in fine-tuning transformer models, prompt engineering, and RAG architecture design.
  • Experience delivering production-ready AI solutions in enterprise environments; telecom exposure is a plus.
  • Advanced knowledge of transformer architectures, fine-tuning techniques (LoRA, PEFT, adapters), and transfer learning.
  • Proficiency in Python, with significant experience using PyTorch, Hugging Face Transformers, and related NLP libraries.
  • Practical expertise in prompt engineering, RAG pipelines, and LLM orchestration tools (LangChain, LlamaIndex).
  • Ability to build domain-adapted LLMs, from data preparation to final model deployment.
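
As a rough illustration of the LoRA/PEFT-style fine-tuning listed above (a sketch under assumptions, not Mobileum's actual setup), the snippet below attaches LoRA adapters to a small causal LM with Hugging Face transformers and peft; the base model and target modules are placeholders that vary by architecture.

# Sketch: attach LoRA adapters to a small causal LM for domain fine-tuning.
# Assumes `pip install transformers peft`; model name and target_modules are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "facebook/opt-125m"  # placeholder; a real telecom build would pick a larger base
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; architecture-dependent
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable

# Training would then run via transformers.Trainer (or trl's SFTTrainer) over a curated
# telecom corpus; only adapter checkpoints need to be stored per use case.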


Work Experience

7+ years of professional experience in AI/ML, with at least 2+ years of practical exposure to LLMs or GenAI deployments.


Educational Qualification

  • Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, or a related field.
  • Ph.D. preferred for foundational model work and advanced research focus.
Read more
Quanteon Solutions
DurgaPrasad Sannamuri
Posted by DurgaPrasad Sannamuri
Remote only
4 - 5 yrs
₹10L - ₹15L / yr
Artificial Intelligence (AI)
Generative AI
Large Language Models (LLM) tuning
Machine Learning (ML)
Python
+4 more

We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.

Key Responsibilities

  • Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
  • Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
  • Optimize model performance and ensure scalability in production environments.
  • Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
  • Conduct data analysis to identify trends and insights.
  • Implement MLOps practices for versioning, monitoring, and automating ML workflows.
  • Research and evaluate new AI/ML techniques, tools, and frameworks.
  • Document system architecture, model design, and development processes.

Required Skills

  • Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
  • Hands-on experience in building and deploying ML/DL models in production.
  • Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
  • Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
  • Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
  • Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
  • Strong analytical skills and ability to work with large datasets.
  • Excellent communication and problem-solving abilities.
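
To make the MLOps expectation concrete, here is a minimal, hypothetical sketch of experiment tracking with MLflow and scikit-learn; the toy dataset, parameters and metric names are placeholders rather than a prescribed setup.

# Sketch: track params, metrics and the fitted model with MLflow.
# Assumes `pip install mlflow scikit-learn`.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment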

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
  • Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
  • Experience in LLM fine-tuning or generative AI is an added advantage.


Read more
Gruve
Pune
10 - 15 yrs
Upto ₹70L / yr (Varies)
Machine Learning (ML)
Distributed Systems
Kubernetes
Docker
Microservices
+4 more

About the Company

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.


Why Gruve

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.


Position Summary

We are seeking a highly experienced and visionary Senior Engineering Manager – Inference Services to lead and scale our team responsible for building high-performance inference systems that power cutting-edge AI/ML products. This role requires a blend of strong technical expertise, leadership skills, and product-oriented thinking to drive innovation, scalability, and reliability of our inference infrastructure.


Key Responsibilities

Leadership & Strategy

  • Lead, mentor, and grow a team of engineers focused on inference platforms, services, and optimizations.
  • Define the long-term vision and roadmap for inference services in alignment with product and business goals.
  • Partner with cross-functional leaders in ML, Product, Data Science, and Infrastructure to deliver robust, low-latency, and scalable inference solutions.

Engineering Excellence

  • Architect and oversee development of distributed, production-grade inference systems ensuring scalability, efficiency, and reliability.
  • Drive adoption of best practices for model deployment, monitoring, and continuous improvement of inference pipelines.
  • Ensure high availability, cost optimization, and performance tuning of inference workloads across cloud and on-prem environments.

Innovation & Delivery

  • Evaluate emerging technologies, frameworks, and hardware accelerators (GPUs, TPUs, etc.) to continuously improve inference efficiency.
  • Champion automation and standardization of model deployment and lifecycle management.
  • Balance short-term delivery with long-term architectural evolution.

People & Culture

  • Build a strong engineering culture focused on collaboration, innovation, and accountability.
  • Provide coaching, feedback, and career development opportunities to team members.
  • Foster a growth mindset and data-driven decision-making.


Basic Qualifications

Experience

  • 12+ years of software engineering experience with at least 4–5 years in engineering leadership roles.
  • Proven track record of managing high-performing teams delivering large-scale distributed systems or ML platforms.
  • Experience in building and operating inference systems, ML serving platforms, or real-time data systems at scale.

Technical Expertise

  • Strong understanding of machine learning model deployment, serving, and optimization (batch & real-time).
  • Proficiency in cloud-native technologies (Kubernetes, Docker, microservices architecture).
  • Hands-on knowledge of inference frameworks (TensorFlow Serving, Triton Inference Server, TorchServe, etc.) and hardware accelerators.
  • Solid background in programming languages (Python, Java, C++ or Go) and performance optimization techniques.

Preferred Qualifications

  • Experience with MLOps platforms and end-to-end ML lifecycle management.
  • Prior work in high-throughput, low-latency systems (ad-tech, search, recommendations, etc.).
  • Knowledge of cost optimization strategies for large-scale inference workloads.
Read more
Big Rattle Technologies
Sreelakshmi Nair (Big Rattle Technologies)
Posted by Sreelakshmi Nair (Big Rattle Technologies)
Remote, Mumbai
5 - 7 yrs
₹8L - ₹12L / yr
Python
SQL
Machine Learning (ML)
Data profiling
E2E
+8 more

Position: QA Engineer – Machine Learning Systems (5 - 7 years)

Location: Remote (Company in Mumbai)

Company: Big Rattle Technologies Private Limited


Immediate Joiners only.


Summary:

The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.


Key Responsibilities:

Test Strategy & Governance

  • Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
  • Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.


Data Quality & Transformation

  • Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
  • Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
  • Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines.
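
For illustration, automated checks like these often reduce to a handful of pytest assertions over pandas frames. The sketch below is hypothetical (column names, thresholds and the SciPy KS test for drift are assumptions, not the project's actual harness):

# Sketch: data-quality and drift checks as plain pytest tests over pandas DataFrames.
# Assumes `pip install pytest pandas scipy numpy`; columns and thresholds are illustrative.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

REQUIRED_COLUMNS = ["site_id", "material_id", "price", "volume"]
NULL_THRESHOLD = 0.01   # at most 1% nulls per column
DRIFT_P_VALUE = 0.01    # flag drift below this p-value

def load_current_batch() -> pd.DataFrame:
    # Placeholder for the real extract (e.g. a parquet slice from the data lake).
    rng = np.random.default_rng(0)
    return pd.DataFrame({
        "site_id": rng.integers(1, 50, 1000),
        "material_id": rng.integers(1, 200, 1000),
        "price": rng.normal(100, 10, 1000),
        "volume": rng.poisson(20, 1000).astype(float),
    })

def test_schema_and_nulls():
    df = load_current_batch()
    assert set(REQUIRED_COLUMNS).issubset(df.columns), "schema contract violated"
    assert (df[REQUIRED_COLUMNS].isna().mean() <= NULL_THRESHOLD).all(), "null threshold exceeded"

def test_price_distribution_drift():
    current = load_current_batch()["price"]
    baseline = load_current_batch()["price"]  # stand-in; in practice a stored golden snapshot
    stat, p_value = ks_2samp(baseline, current)
    assert p_value > DRIFT_P_VALUE, f"price distribution drifted (KS={stat:.3f}, p={p_value:.4f})"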

Model Training & Evaluation

  • Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
  • Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic.
  • Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
  • Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.


Predictions, Optimization & Guardrails

  • Validate batch predictions: result shapes, coverage, latency, and failure handling.
  • Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
  • Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).


Pipelines & E2E

  • Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization), including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
  • Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, asserting freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).


Automation & Tooling

  • Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
  • Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
  • Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).


Reporting & Quality Ops

  • Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
  • Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.


Required Skills (hands-on experience in the following):

  • Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
  • Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
  • Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
  • API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) + idempotency and retry patterns.
  • Familiar with feature stores/feature engineering concepts and reproducibility.
  • Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.


Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
  • Certification in Azure Data or ML Engineer Associate is a plus.



Why should you join Big Rattle?

Big Rattle Technologies specializes in AI/ ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.

Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.

What We Offer:

  • Opportunity to work on diverse projects for Fortune 500 clients.
  • Competitive salary and performance-based growth.
  • Dynamic, collaborative, and growth-oriented work environment.
  • Direct impact on product quality and client satisfaction.
  • 5-day hybrid work week.
  • Certification reimbursement.
  • Healthcare coverage.

How to Apply:

Interested candidates are invited to submit their resume detailing their experience. Please detail out your work experience and the kind of projects you have worked on. Ensure you highlight your contributions and accomplishments to the projects.


Read more
shaadi.com

at shaadi.com

3 recruiters
Agency job
via hirezyai by Aardra Suresh
Mumbai
2 - 8 yrs
₹24L - ₹30L / yr
Machine Learning (ML)
Python
SQL
Neural networks

What We’re Looking For

  • 3-5 years of Data Science & ML experience in consumer internet / B2C products.
  • Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
  • Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
  • Statistical chops: finding meaningful insights in large data sets.
  • Programming ninja: R, Python, SQL + hands-on with Numpy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
  • Visualization skills: Redshift, Tableau, Looker, or similar.
  • A strong problem-solver with curiosity hardwired into your DNA.

Brownie Points:
  • Experience with big data platforms: Hadoop, Spark, Hive, Pig.
  • Extra love if you’ve played with BI tools like Tableau or Looker.


Read more
Vola Finance

at Vola Finance

1 video
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Upto ₹23L / yr (Varies)
Python
Logistic regression
SQL
Credit Risk
Amazon Web Services (AWS)
+3 more

Domain - Credit risk / Fintech 

    

Roles and Responsibilities: 

1. Development, validation and monitoring of Application and Behaviour scorecards for the Retail loan portfolio
2. Improvement of collection efficiency through advanced analytics
3. Development and deployment of fraud scorecards
4. Upsell / Cross-sell strategy implementation using analytics
5. Create modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)
6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.
7. Experience with API tools such as REST, Swagger, and Postman
8. Model deployment in AWS and management of the production environment
9. Team player who can work with cross-functional teams to gather data and derive insights


Mandatory Technical skill set:

1. Previous experience in scorecard development and credit risk strategy development
2. Python and Jenkins
3. Logistic regression, scorecards, ML and neural networks
4. Statistical analysis and A/B testing
5. AWS SageMaker, S3, EC2, Docker
6. REST API, Swagger and Postman
7. Excel
8. SQL
9. Visualisation tools such as Redash / Grafana
10. Bitbucket, GitHub and other versioning tools
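
As a toy illustration of the scorecard work above (synthetic data, not Vola Finance's actual models), an application-score baseline is often a regularised logistic regression whose default probabilities are mapped onto a score band:

# Sketch: a toy application-scorecard baseline with logistic regression.
# Assumes scikit-learn and NumPy; features, coefficients and score scaling are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([
    rng.normal(35, 10, n),        # age
    rng.normal(50000, 15000, n),  # monthly income
    rng.integers(0, 5, n),        # past delinquencies
])
# Synthetic default flag loosely driven by delinquencies and income.
logit = -2.0 + 0.8 * X[:, 2] - 0.00002 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]      # probability of default
print("Gini:", round(2 * roc_auc_score(y_test, proba) - 1, 3))

scores = 300 + (1 - proba) * 600               # map to a 300-900 style band (higher = lower risk)
print("example scores:", np.round(scores[:5]).astype(int))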

Read more
Cambridge Wealth (Baker Street Fintech)
Pune
2 - 6 yrs
₹9L - ₹16L / yr
Python
Wealth management
Fintech
Django
Flask
+13 more

   

About Us: The Next Generation of WealthTech  

We're Cambridge Wealth, an award-winning force in mutual fund distribution and Fintech. We're not just moving money; we're redefining wealth management for everyone from retail investors to ultra-HNIs (including the NRI segment). Our brand is synonymous with excellence, backed by accolades from the BSE and top Mutual Fund houses.


If you thrive on building high-performance, scalable systems that drive real-world financial impact, you'll feel right at home. Join us in Pune to build the future of finance.

[Learn more: www.cambridgewealth.in]


The Role: Engineering Meets Team Meets Customer

We're looking for an experienced, hands-on Tech Catalyst to accelerate our product innovation. This isn't just a coding job; it's a chance to blend deep backend expertise with product strategy. You will be the engine driving rapid, data-driven product experiments, leveraging AI and Machine Learning to create smart, personalized financial solutions. You'll lead by example, mentoring a small, dedicated team and ensuring technical excellence and rapid deployment in the high-stakes financial domain.


Key Impact Areas: Ship Fast, Break Ground  

1. Backend & AI/ML Innovation  

  • Rapid Prototyping: Design and execute quick, iterative experiments to validate new features and market hypotheses, moving from concept to production in days, not months.
  • AI-Powered Features: Build scalable Python-based backend services that integrate AI/ML models to enhance customer profiling, portfolio recommendation, and risk analysis.
  • System Architecture: Own the performance, stability, and scalability of our core fintech platform, implementing best practices in modern backend development.

2. Product Leadership & Execution  

  • Agile Catalyst: Drive and optimize Agile sprints, ensuring clear technical milestones, efficient resource allocation, and disciplined backlog grooming, while maintaining a laser focus on preventing scope creep.
  • Mentorship & Management: Provide technical guidance and mentorship to a team of developers, fostering a culture of high performance, code quality, and continuous learning.
  • Domain Alignment: Translate complex financial requirements and market insights into precise, actionable technical specifications and seamless user stories.
  • Problem Solver: Proactively identify and resolve technical and process bottlenecks, acting as the ultimate problem solver for the engineering and product teams.

3. Financial Domain Expertise  

  • High-Value Delivery: Apply deep knowledge of the mutual fund and broader fintech landscape to inform product decisions, ensuring our solutions are compliant, competitive, and truly valuable to our clients.
  • Risk & Security: Proactively architect solutions with security and financial risk management baked in from the ground up, protecting client data and assets.


Your Tech Stack & Experience  

The Must-Haves  

  • Mindset: A verifiable track record as a proactive First Principle Problem Solver with an intense Passion to Ship production-ready features frequently.
  • Customer Empathy: Keeps the customer's experience in mind at all times.
  • Team Leadership: Experience in leading, mentoring, or managing a small development team, driving technical excellence and project delivery.
  • Systems Thinker: Diagnoses and solves problems by viewing the organization as an interconnected system to anticipate broad impacts and develop holistic, strategic solutions.
  • Backend Powerhouse: 2+ years of professional experience with a strong focus on backend development.
  • Python Guru: Expert proficiency in Python and related frameworks (e.g., Django, Flask) for building robust, scalable APIs and services.
  • AI/ML Integration: Proven ability to leverage and integrate AI/ML models into production-level applications.
  • Data Driven: Expert in SQL for complex data querying, analysis, and ETL processes.
  • Financial Domain Acumen: Strong, demonstrable knowledge of financial products, especially mutual funds, wealth management, and key fintech metrics.

 

 

Nice-to-Haves  

  • Experience with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
  • Familiarity with Zoho Analytics, Zoho CRM and Zoho Deluge
  • Familiarity with modern data analysis tools and visualization platforms (e.g., Mixpanel, Tableau, or custom dashboard tools).
  • Understanding of Mutual Fund, AIF, PMS operations

 

Ready to Own the Backend and Shape Finance?  

This is where your code meets the capital market. If you’re a Fintech-savvy Python expert ready to lead a team and build a scalable platform in Pune, we want to talk.

Apply now to join our award-winning, forward-thinking team.

 

Our High-Velocity Hiring Process:  

  • You Apply & Engage: Quick application and a few insightful questions. (5 min)
  • Online Tech Challenge: Prove your tech mettle. (90 min)
  • People Sync: A focused call to understand if there is cultural and value alignment. (30 min)
  • Deep Dive Technical Interview: Discuss architecture and projects with our senior engineers. (1 hour)
  • Founder's Vision Interview: Meet the leadership and discuss your impact. (1 hour)
  • Offer & Onboarding: Reference and BGV check follow the successful offer.

 

What are you building right now that you're most proud of?

 

Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹22L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI
PyTorch
NumPy
+2 more

Job Title: AI Engineer

Location: Bengaluru 

Experience: 3 Years 

Working Days: 5 Days

About the Role

We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.

As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:

  • Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
  • Build and integrate GenAI agents.
  • Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.

Key Responsibilities

  • Build and optimize ML/DL models for document understanding, classification, and summarization.
  • Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
  • Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
  • Package and deploy models as REST APIs or microservices in production environments.
  • Collaborate with engineering teams to integrate models into existing products and workflows.
  • Continuously monitor and retrain models to ensure reliability and performance.
  • Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.

Required Skills & Experience

  • 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
  • Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
  • Solid understanding of transformers, embeddings, and NLP pipelines.
  • Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
  • Exposure to OCR, document parsing, and unstructured text analytics.
  • Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
  • Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
  • Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
  • Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.


Read more
Jugaad
Kunal Tadakaluri
Posted by Kunal Tadakaluri
Chennai
1 - 5 yrs
₹2.4L - ₹9.6L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Automation

IMP: Please read through before applying!


Nature of role: Full-time; On-site

Location: Thiruvanmiyur, Chennai


Responsibilities:


Build and manage automation workflows using n8n, Make (Integromat), Zapier, or custom APIs.


Integrate tools across JugaadX, WhatsApp, Shopify, Meta, Google Workspace, CRMs, and internal systems.


Develop and maintain scalable, modular automation systems with clear documentation.


Integrate and experiment with AI tools and APIs such as OpenAI, Gemini, Claude, HeyGen, Runway, etc.


Create intelligent workflows — from chatbots and lead scorers to content generators and auto-responders.


Manage cloud infrastructure (VPS, Docker, SSL, security) for automations and dashboards.


Identify repetitive tasks and convert them into reliable automated processes.


Build centralized dashboards and automated reports for teams and clients.


Stay up-to-date with the latest in AI, automation, and LLM technologies, and bring new ideas to life within Jugaad’s ecosystem.


Requirements:


Hands-on experience with n8n, Make, or Zapier (or similar tools).


Familiarity with OpenAI, Gemini, HuggingFace, ElevenLabs, HeyGen, and other AI platforms.


Working knowledge of JavaScript and basic Python for API scripting.


Strong understanding of REST APIs, webhooks, and authentication.


Experience with Docker, VPS (AWS/DigitalOcean), and server management.


Proficiency with Google Sheets, Airtable, JSON, and basic SQL.


Clear communication and documentation skills — able to explain technical systems simply.


Who You Are:


A self-starter who loves automation, optimization, and innovation.


Comfortable building end-to-end tech solutions independently.


Excited to collaborate across creative, marketing, and tech teams.


Always experimenting with new AI tools and smarter ways to work.


Obsessed with efficiency, scalability, and impact — you love saving time and getting more done with less.


What You Get:


A strategic and hands-on role at the intersection of AI, automation, and operations.


The chance to shape the tech backbone of Jugaad and influence how we work, scale, and innovate.


Freedom to experiment, build, and deploy your ideas fast.


A young, fast-moving team where your work directly drives impact and growth.

Read more
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Gurugram
8 - 15 yrs
₹20L - ₹40L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Artificial Intelligence (AI)
Generative AI
Azure Cognitive Services

Role Overview

We are looking for a highly skilled and intellectually curious Senior Data Scientist with 7+ years of experience in applying advanced machine learning and AI techniques to solve complex business problems. The ideal candidate will have deep expertise in Classical Machine Learning, Deep Learning, Natural Language Processing (NLP), and Generative AI (GenAI), along with strong hands-on coding skills and a proven track record of delivering impactful data science solutions. This role requires a blend of technical excellence, business acumen, and collaborative mindset.


Key Responsibilities

  • Design, develop, and deploy ML models using classical algorithms (e.g., regression, decision trees, ensemble methods) and deep learning architectures (CNNs, RNNs, Transformers).
  • Build NLP solutions for tasks such as text classification, entity recognition, summarization, and conversational AI.
  • Develop and fine-tune GenAI models for use cases like content generation, code synthesis, and personalization.
  • Architect and implement Retrieval-Augmented Generation (RAG) systems for enhanced contextual AI applications.
  • Collaborate with data engineers to build scalable data pipelines and feature stores.
  • Perform advanced feature engineering and selection to improve model accuracy and robustness.
  • Work with large-scale structured and unstructured datasets using distributed computing frameworks.
  • Translate business problems into data science solutions and communicate findings to stakeholders.
  • Present insights and recommendations through compelling storytelling and visualization.
  • Mentor junior data scientists and contribute to internal knowledge sharing and innovation.


Required Qualifications

  • 7+ years of experience in data science, machine learning, and AI.
  • Strong academic background in Computer Science, Statistics, Mathematics, or related field (Master’s or PhD preferred).
  • Proficiency in Python, SQL, and ML libraries (scikit-learn, TensorFlow, PyTorch, Hugging Face).
  • Experience with NLP and GenAI tools (e.g., Azure AI Foundry, Azure AI studio, GPT, LLaMA, LangChain).
  • Hands-on experience with Retrieval-Augmented Generation (RAG) systems and vector databases.
  • Familiarity with cloud platforms (Azure preferred, AWS/GCP acceptable) and MLOps tools (MLflow, Airflow, Kubeflow).
  • Solid understanding of data structures, algorithms, and software engineering principles.
  • Experience with Azure, Azure Copilot Studio, Azure Cognitive Services
  • Experience with Azure AI Foundry would be a strong added advantage


Preferred Skills

  • Exposure to LLM fine-tuning, prompt engineering, and GenAI safety frameworks.
  • Experience in domains such as finance, healthcare, retail, or enterprise SaaS.
  • Contributions to open-source projects, publications, or patents in AI/ML.


Soft Skills

  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder engagement abilities.
  • Ability to work independently and collaboratively in cross-functional teams.
  • Passion for continuous learning and innovation.


Read more
Remote only
5 - 20 yrs
₹12L - ₹25L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Generative AI
Large Language Models (LLM)

We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.

You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.

Responsibilities

• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)

• Develop backend services and APIs using Python (FastAPI/Flask)

• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma)

• Implement embeddings, prompt flows, and conversation logic

• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs

• Ensure system reliability, performance, and scalability

• Work directly with the founder in shaping the product and roadmap
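
As a very rough sketch of the chatbot core described above (illustrative only; the model name, endpoint shape and retrieval stub are assumptions, not the product's design), a thin FastAPI layer can wrap an LLM call plus retrieved context:

# Sketch: a minimal chat endpoint that injects retrieved context into an LLM call.
# Assumes `pip install fastapi uvicorn openai` and OPENAI_API_KEY in the environment;
# the retrieval stub and model id are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY

class ChatRequest(BaseModel):
    question: str

def retrieve_context(question: str) -> str:
    # Placeholder: a real build would query a vector store (Pinecone/FAISS/Chroma) here.
    return "Our support hours are 9am-6pm IST, Monday to Saturday."

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    context = retrieve_context(req.question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": req.question},
        ],
    )
    return {"answer": response.choices[0].message.content}

# Run locally with: uvicorn app:app --reload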

Requirements

• Strong experience with LLMs & Generative AI

• Excellent Python skills with FastAPI/Flask

• Hands-on experience with LangChain or RAG architectures

• Vector database experience (Pinecone/FAISS/Chroma)

• Strong understanding of REST APIs and backend development

• Ability to work independently, experiment fast, and deliver clean code

Nice to Have

• Experience with cloud (AWS/GCP)

• Node.js knowledge

• LangGraph, LlamaIndex

• MLOps or deployment experience


Read more
CGI Inc

at CGI Inc

3 recruiters
Shruthi BT
Posted by Shruthi BT
Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹20L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Python
Angular (2+)

Full Stack Engineer


Position Description

Responsibilities

• Take design mockups provided by UX/UI designers and translate them into web pages or applications using HTML and CSS. Ensure that the design is faithfully replicated in the final product.

• Develop enabling frameworks and applications end to end, and enhance them with data analytics and AI enablement.

• Ensure effective Design, Development, Validation and Support activities in line with the Customer needs, architectural requirements, and ABB Standards.

• Support ABB business units through consulting engagements.

• Develop and implement machine learning models to solve specific business problems, such as predictive analytics, classification, and recommendation systems

• Perform exploratory data analysis, clean and preprocess data, and identify trends and patterns.

• Evaluate the performance of machine learning models and fine-tune them for optimal results.

• Create informative and visually appealing data visualizations to communicate findings and insights to non-technical stakeholders.

• Conduct statistical analysis, hypothesis testing, and A/B testing to support decision-making processes.

• Define the solution, Project plan, identifying and allocation of team members, project tracking; Work with data engineers to integrate, transform, and store data from various sources.

• Collaborate with cross-functional teams, including business analysts, data engineers, and domain experts, to understand business objectives and develop data science solutions.

• Prepare clear and concise reports and documentation to communicate results and methodologies.

• Stay updated with the latest data science and machine learning trends and techniques.

• Familiarity with ML Model Deployment as REST APIs.
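
To illustrate the last point above (ML model deployment as a REST API), one common and minimal pattern is to load a serialized model at startup and expose a predict route; the file name, feature names and framework below are placeholder assumptions, not a prescribed stack.

# Sketch: serving a pickled scikit-learn model behind a REST endpoint with FastAPI.
# Assumes `pip install fastapi uvicorn scikit-learn joblib`; "model.joblib" and the
# feature names are placeholders for whatever the training pipeline produced.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ML inference service")
model = joblib.load("model.joblib")  # trained and exported elsewhere

class Features(BaseModel):
    temperature: float
    pressure: float
    vibration: float

@app.post("/predict")
def predict(features: Features) -> dict:
    row = [[features.temperature, features.pressure, features.vibration]]
    return {"prediction": float(model.predict(row)[0])}

# Run with: uvicorn service:app --host 0.0.0.0 --port 8000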


Background

• Engineering graduate / Master's degree with rich exposure to data science, from a reputed institution

• Create responsive web designs that adapt to different screen sizes and devices using media queries and responsive design techniques.

• Write and maintain JavaScript code to add interactivity and dynamic functionality to web pages. This may include user input handling, form validation, and basic animations.

• Familiarity with front-end JavaScript libraries and frameworks such as React, Angular, or Vue.js. Depending on the projects, you may be responsible for working within these frameworks

• At least 6+ years of experience in AI/ML concepts and Python (preferred); knowledge of deep learning frameworks like PyTorch and TensorFlow is preferred

• Domain knowledge of manufacturing / process industries, physics, and first-principles-based analysis

• Analytical thinking for translating data into meaningful insights that can be consumed by ML models for training and prediction.

• Should be able to deploy models using cloud services like Azure Databricks or Azure ML Studio. Familiarity with technologies like Docker, Kubernetes and MLflow is good to have.

• Agile development of customer centric prototypes or ‘Proof of Concepts’ for focused digital solutions

• Good communication skills; must be able to discuss the requirements effectively with the client teams and with internal teams.

Read more
Remote only
6 - 15 yrs
₹10L - ₹30L / yr
NextJs (Next.js)
Flutter
FastAPI
Amazon Web Services (AWS)
TypeScript
+8 more

Mission

Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.

Responsibilities

  • Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
  • Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
  • Integrate Stripe, Maps, and analytics; enforce accessibility and performance baselines.
  • Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
  • Partner with Mobile and AI engineers on API/tool schemas and developer experience.

Requirements

  • 6–10+ years; expert TypeScript, strong Python.
  • Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
  • Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
  • Practical CI/CD and observability (logs/metrics/traces).

Nice-to-haves

  • OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.

Key Outcomes (ongoing)

  • Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.


Read more
Agentic AI Platform

Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps /Cloud Engineer Profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.
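
As a small example of the Python automation expected here (purely illustrative; the tag policy and region are assumptions), a boto3 job can flag running EC2 instances that are missing a mandatory tag as part of security and cost hygiene:

# Sketch: flag running EC2 instances missing a mandatory "Owner" tag.
# Assumes `pip install boto3` with AWS credentials in the environment; the policy is illustrative.
import boto3

REQUIRED_TAG = "Owner"

def find_untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    offenders.append(instance["InstanceId"])
    return offenders

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")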


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in LINUX Administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Read more
 Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹10L - ₹30L / yr
Amazon Web Services (AWS)
AWS CloudFormation
Amazon Redshift
Elasticsearch
ECS
+11 more

Job Details

Job Title: ML Engineer II - AWS, AWS Cloud

Industry: Technology

Domain - Information technology (IT)

Experience Required: 6-12 years

Employment Type: Full Time

Job Location: Pune

CTC Range: Best in Industry


Job Description:

Core Responsibilities:

• The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.

• Model Development: Algorithms and architectures span traditional statistical methods to deep learning, along with employing LLMs in modern frameworks.

• Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.

• Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.

• System Integration: Integrate models into existing systems and workflows.

• Model Deployment: Deploy models to production environments and monitor performance.

• Collaboration: Work closely with data scientists, software engineers, and other stakeholders.

• Continuous Improvement: Identify areas for improvement in model performance and systems.


Skills:

• Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).

• Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.

• Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.

• Knowledge of model monitoring and performance evaluation.


Required experience:

• Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements.

• AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and using these services in ML workflows.

• AWS data: Redshift, Glue.

• Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS).


Skills: AWS, AWS Cloud, Amazon Redshift, EKS


Must-Haves

AWS, AWS Cloud, Amazon Redshift, EKS

NP: Immediate – 30 Days

 

Read more
Semiconductor Manufacturing Industry

Semiconductor Manufacturing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
5 - 8 yrs
₹40L - ₹48L / yr
Python
Machine Learning (ML)
Image Processing
Deep Learning
Algorithms
+28 more

🎯 Ideal Candidate Profile:

This role requires a seasoned engineer/scientist with a strong academic background from a premier institution and significant hands-on experience in deep learning (specifically image processing) within a hardware or product manufacturing environment.


📋 Must-Have Requirements:

Experience & Education Combinations:

Candidates must meet one of the following criteria:

  • Doctorate (PhD) + 2 years of related work experience
  • Master's Degree + 5 years of related work experience
  • Bachelor's Degree + 7 years of related work experience


Technical Skills:

  • Minimum 5 years of hands-on experience in all of the following:
  • Python
  • Deep Learning (DL)
  • Machine Learning (ML)
  • Algorithm Development
  • Image Processing
  • 3.5 to 4 years of strong proficiency with PyTorch OR TensorFlow / Keras.


Industry & Institute:

  • Education: Must be from a premier institute (IIT, IISC, IIIT, NIT, BITS) or a recognized regional tier 1 college.
  • Industry: Current or past experience in a Product, Semiconductor, or Hardware Manufacturing company is mandatory.
  • Preference: Candidates from engineering product companies are strongly preferred.


ℹ️ Additional Role Details:

  • Interview Process: 3 technical rounds followed by 1 HR round.
  • Work Model: Hybrid (requiring 3 days per week in the office).



📝 Required Skills and Competencies:

💻 Programming & ML Prototyping:

  • Strong Proficiency: Python, Data Structures, and Algorithms.
  • Hands-on Experience: NumPy, Pandas, Scikit-learn (for ML prototyping).


🤖 Machine Learning Frameworks:

  • Core Concepts: Solid understanding of:
  • Supervised/Unsupervised Learning
  • Regularization
  • Feature Engineering
  • Model Selection
  • Cross-Validation
  • Ensemble Methods: Experience with models like XGBoost and LightGBM.
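
For concreteness, the ensemble-methods line above usually means gradient-boosted trees along these lines (a generic sketch on a toy dataset, with illustrative hyperparameters):

# Sketch: training and evaluating a gradient-boosted classifier with XGBoost.
# Assumes `pip install xgboost scikit-learn`; hyperparameters are illustrative, not tuned.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.05,
    subsample=0.8,
    eval_metric="logloss",
)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, proba), 4))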


🧠 Deep Learning Techniques:

  • Frameworks: Proficiency with PyTorch OR TensorFlow / Keras.
  • Architectures: Knowledge of:
  • Convolutional Neural Networks (CNNs)
  • Recurrent Neural Networks (RNNs)
  • Long Short-Term Memory networks (LSTMs)
  • Transformers
  • Attention Mechanisms
  • Optimization: Familiarity with optimization techniques (e.g., Adam, SGD), Dropout, and Batch Normalization.


💬 LLMs & RAG (Retrieval-Augmented Generation):

  • Hugging Face: Experience with the Transformers library (tokenizers, embeddings, model fine-tuning).
  • Vector Databases: Familiarity with Milvus, FAISS, Pinecone, or ElasticSearch.
  • Advanced Techniques: Proficiency in:
  • Prompt Engineering
  • Function/Tool Calling
  • JSON Schema Outputs


🛠️ Data & Tools:

  • Data Management: SQL fundamentals; exposure to data wrangling and pipelines.
  • Tools: Experience with Git/GitHub, Jupyter, and basic Docker.


🎓 Minimum Qualifications (Experience & Education Combinations):

Candidates must have experience building AI systems/solutions with Machine Learning, Deep Learning, and LLMs, meeting one of the following criteria:

  • Doctorate (Academic) Degree + 2 years of related work experience.
  • Master's Level Degree + 5 years of related work experience.
  • Bachelor's Level Degree + 7 years of related work experience.


⭐ Preferred Traits and Mindset:

  • Academic Foundation: Solid academic background with strong applied ML/DL exposure.
  • Curiosity: Eagerness to learn cutting-edge AI and willingness to experiment.
  • Communication: Clear communicator who can explain ML/LLM trade-offs simply.
  • Ownership: Strong problem-solving and ownership mindset.
Read more
Versatile Commerce LLP

at Versatile Commerce LLP

2 candid answers
Burugupally Shailaja
Posted by Burugupally Shailaja
Hyderabad
3 - 9 yrs
₹3L - ₹8L / yr
Retrieval Augmented Generation (RAG)
Machine Learning (ML)
Generative AI
Open-source LLMs
Python
+2 more

📍Company: Versatile Commerce

 📍 Position: Data Scientists

 📍 Experience: 3-9 yrs

 📍 Location: Hyderabad (WFO)

 📅 Notice Period: 0- 15 Days

Read more
Remote only
10 - 15 yrs
₹25L - ₹40L / yr
Data Engineer
Apache Spark
Scala
Big Data
Python
+5 more

What You’ll Be Doing:

● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines and platforms.

● Lead and mentor a team of data engineers while establishing engineering best practices, coding standards, and governance models.

● Design and implement high-performance ETL/ELT pipelines using modern Big Data technologies for diverse internal and external data sources.

● Drive modernization initiatives including re-architecting legacy systems to support next-generation data products, ML workloads, and analytics use cases.

● Partner with Product, Engineering, and Business teams to translate requirements into robust technical solutions that align with organizational priorities.

● Champion data quality, monitoring, metadata management, and observability across the ecosystem.

● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and infrastructure scalability.

● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows, and cloud-based architecture improvements.
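
As a schematic of the ETL/ELT work described above (a sketch only; the paths, columns and Delta sink are placeholder choices consistent with the Spark/Databricks/Delta Lake stack named in the qualifications), a small PySpark batch job might look like this:

# Sketch: read raw JSON, clean it, and write a partitioned Delta table.
# Assumes a Spark runtime with Delta Lake available (e.g. Databricks); paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # landed source data

cleaned = (
    raw
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
    .select("order_id", "customer_id", "amount", "order_date")
)

(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders/")
)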


Qualifications:

● Bachelor's degree in Engineering, Computer Science, or a relevant field.

● 8+ years of relevant and recent experience in a Data Engineer role.

● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.

● Deep understanding of Big Data concepts and distributed systems.

● Demonstrated ability to design, review, and optimize scalable data architectures across ingestion.

● Strong coding skills with Scala and Python, and the ability to quickly switch between them with ease.

● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.

● Cloud experience with Databricks.

● Strong understanding of Delta Lake architecture and working with Parquet, JSON, CSV, and similar formats.

● Experience establishing and enforcing data engineering best practices, including CI/CD for data, orchestration and automation, and metadata management.

● Comfortable working in an Agile environment.

● Machine Learning knowledge is a plus.

● Demonstrated ability to operate independently, take ownership of deliverables, and lead technical decisions.

● Excellent written and verbal communication skills in English.

● Experience supporting and working with cross-functional teams in a dynamic environment.

REPORTING: This position will report to the Sr. Technical Manager or Director of Engineering, as assigned by Management.

EMPLOYMENT TYPE: Full-Time, Permanent


SHIFT TIMINGS: 10:00 AM - 07:00 PM IST

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Robin Silverster
Posted by Robin Silverster
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹18L / yr
Retrieval Augmented Generation (RAG)
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
+1 more

We have one open requirement for an AI role. The right skill set is experience with RAG pipelines, plus performance monitoring and tuning of prompts; multi-modal RAG and agentic AI will follow soon, but not right away. Needless to say, Python and NLP library experience is a must, and database knowledge is also essential. The developer needs to be in the Morgan Stanley offices 3 days per week; the Bengaluru location is preferred, and if not, Mumbai is also fine. The engagement is long term, since we are looking to expand the use cases.

Read more
Inncircles
Gangadhar M
Posted by Gangadhar M
Hyderabad
4 - 8 yrs
Best in industry
NumPy
Python
pandas
Machine Learning (ML)
Deep Learning
+6 more

Job Title: Senior AI/ML/DL Engineer

Location: Hyderabad

Department: Artificial Intelligence/Machine Learning


Job Summary:

We are seeking a highly skilled and motivated Senior AI/ML/DL Engineer to contribute to the development and implementation of advanced artificial intelligence, machine learning, and deep learning solutions. The ideal candidate will have a strong technical background in AI/ML/DL, hands-on experience in building scalable models, and a passion for solving complex problems using data-driven approaches. This role involves working closely with cross-functional teams to deliver innovative AI/ML solutions aligned with business objectives.

Key Responsibilities:

Technical Execution:

● Design, develop, and deploy AI/ML/DL models and algorithms to solve business challenges.
● Stay up-to-date with the latest advancements in AI/ML/DL technologies and integrate them into solutions.
● Implement best practices for model development, validation, and deployment.

Project Development:

● Collaborate with stakeholders to identify business opportunities and translate them into AI/ML projects.
● Work on the end-to-end lifecycle of AI/ML projects, including data collection, preprocessing, model training, evaluation, and deployment.
● Ensure the scalability, reliability, and performance of AI/ML solutions in production environments.

Cross-Functional Collaboration:

● Work closely with product managers, software engineers, and domain experts to integrate AI/ML capabilities into products and services.
● Communicate complex technical concepts to non-technical stakeholders effectively.

Research and Innovation:

● Explore new AI/ML techniques and methodologies to enhance solution capabilities.
● Prototype and experiment with novel approaches to solve challenging problems.
● Contribute to internal knowledge-sharing initiatives and documentation.

Quality Assurance & MLOps:

● Ensure the accuracy, robustness, and ethical use of AI/ML models.
● Implement monitoring and maintenance processes for deployed models to ensure long-term performance (see the monitoring sketch after this section).
● Follow MLOps practices for efficient deployment and monitoring of AI/ML solutions.
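
As a rough illustration of the model-monitoring responsibility above, the sketch below scores a recent labelled batch and raises an alert if accuracy drops too far below a baseline. All names, thresholds, and data are hypothetical stand-ins, not a prescribed MLOps stack.

# Simplified monitoring sketch: score a recent labelled batch and alert if the
# deployed model's accuracy drops too far below its recorded baseline.
# All names, thresholds, and data here are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.90      # accuracy recorded at deployment time (assumed)
MAX_DROP = 0.05               # tolerated degradation before alerting

# Stand-ins for the deployed model and a freshly labelled production batch.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])
X_recent, y_recent = X[400:], y[400:]

recent_accuracy = accuracy_score(y_recent, model.predict(X_recent))
if recent_accuracy < BASELINE_ACCURACY - MAX_DROP:
    print(f"ALERT: accuracy degraded to {recent_accuracy:.3f}")   # hook to paging/alerting
else:
    print(f"OK: accuracy {recent_accuracy:.3f} within tolerance")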


Qualifications:

Education:

● Bachelor's/Master's or Ph.D. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field.

Experience:

● 5+ years of experience in AI/ML/DL, with a proven track record of delivering AI/ML solutions in production environments.
● Strong experience with programming languages such as Python, R, or Java.
● Proficiency in AI/ML frameworks and tools (e.g., TensorFlow, PyTorch, Scikit-learn, Keras).
● Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark).
● Familiarity with MLOps practices and tools for model deployment and monitoring.

Skills:

● Strong understanding of machine learning algorithms, deep learning architectures, and statistical modeling.
● Excellent problem-solving and analytical skills.
● Strong communication and interpersonal skills.
● Ability to manage multiple projects and prioritize effectively.


Preferred Qualifications:

● Experience in natural language processing (NLP), computer vision, or reinforcement learning.
● Knowledge of ethical AI practices and regulatory compliance.
● Publications or contributions to the AI/ML community (e.g., research papers, open-source projects).


What We Offer:

● Competitive salary and benefits package.

● Opportunities for professional development and career growth.

● A collaborative and innovative work environment.

● The chance to work on impactful projects that leverage cutting-edge AI/ML technologies.

Read more
Pune, Bengaluru (Bangalore), Hyderabad
8 - 12 yrs
₹14L - ₹15L / yr
R Programming
Python
Scikit-Learn
TensorFlow
PyTorch
+8 more

Role: Data Scientist (Python + R Expertise)

Exp: 8 -12 Years

CTC: up to 30 LPA


Required Skills & Qualifications:

  • 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
  • Strong expertise in Python and R for data analysis, modeling, and visualization.
  • Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
  • Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques (a brief modeling sketch follows this list).
  • Experience with SQL and working with large-scale structured and unstructured data.
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
  • Excellent analytical, problem-solving, and communication skills.
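
As a brief illustration of the statistical-modeling and hypothesis-testing requirement above, the sketch below fits an ordinary least squares regression and inspects coefficient p-values. It assumes numpy and statsmodels are installed; the data is synthetic and purely illustrative.

# Minimal statistical-modelling sketch: fit an OLS regression on synthetic data
# and read off coefficient estimates and p-values (hypothesis test: coef == 0).
# Assumes numpy and statsmodels are installed; the data is purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + 0.0 * x2 + rng.normal(scale=0.5, size=n)    # x2 has no true effect

X = sm.add_constant(np.column_stack([x1, x2]))             # intercept + predictors
model = sm.OLS(y, X).fit()

print(model.params)    # estimated coefficients
print(model.pvalues)   # p-values; x2's should be large (fail to reject coef == 0)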


Preferred Skills:

  • Experience with NLP, time series forecasting, or deep learning projects.
  • Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
  • Experience working in product or data-driven organizations.
  • Knowledge of MLOps and model lifecycle management is a plus.


If interested, kindly share your updated resume on 82008 31681.


Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
10 - 15 yrs
₹105L - ₹140L / yr
Data engineering
Apache Spark
Apache
Apache Kafka
Java
+25 more

MANDATORY:

  • Super Quality Data Architect, Data Engineering Manager / Director Profile
  • Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role
  • Must have 7+ YOE in hands-on Tech development with Java (Highly preferred) or Python, Node.JS, GoLang
  • Must have strong experience in large-scale data technologies and tools such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc. (a minimal Airflow DAG sketch follows this list)
  • Strong expertise in HLD and LLD, to design scalable, maintainable data architectures.
  • Must have managed a team of at least 5+ Data Engineers (Read Leadership role in CV)
  • Product Companies (Prefers high-scale, data-heavy companies)
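
For a concrete sense of the orchestration tooling listed above, here is a minimal Airflow DAG sketch. It assumes Apache Airflow 2.x is installed; the DAG id, tasks, and schedule are illustrative only.

# Minimal Airflow DAG sketch: two Python tasks chained as extract -> load.
# Assumes Apache Airflow 2.x; dag_id, task logic, and schedule are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling rows from an upstream source")   # placeholder for real extraction

def load():
    print("writing rows to the warehouse")          # placeholder for real loading

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task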


PREFERRED:

  • Must be from Tier - 1 Colleges, preferred IIT
  • Candidates must have spent a minimum 3 yrs in each company.
  • Must have recent 4+ YOE with high-growth Product startups, and should have implemented Data Engineering systems from an early stage in the Company


ROLES & RESPONSIBILITIES:

  • Lead and mentor a team of data engineers, ensuring high performance and career growth.
  • Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
  • Drive the development and implementation of data governance frameworks and best practices.
  • Work closely with cross-functional teams to define and execute a data roadmap.
  • Optimize data processing workflows for performance and cost efficiency.
  • Ensure data security, compliance, and quality across all data platforms.
  • Foster a culture of innovation and technical excellence within the data team.


IDEAL CANDIDATE:

  • 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
  • Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
  • Proficiency in SQL, Python, and Scala for data processing and analytics.
  • Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
  • Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
  • Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
  • Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
  • Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
  • Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
  • Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
  • Proven ability to drive technical strategy and align it with business objectives.
  • Strong leadership, communication, and stakeholder management skills.


PREFERRED QUALIFICATIONS:

  • Experience in machine learning infrastructure or MLOps is a plus.
  • Exposure to real-time data processing and analytics.
  • Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
  • Prior experience in a SaaS or high-growth tech company.
Read more
Global Leader in Diversified Electronics

Global Leader in Diversified Electronics

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
5 - 10 yrs
₹20L - ₹48L / yr
Python
Deep Learning
Machine Learning (ML)
Algorithm Development
Image Processing
+3 more


JOB DESCRIPTION/PREFERRED QUALIFICATIONS:

REQUIRED SKILLS/COMPETENCIES:


Programming Languages:

  • Strong in Python, data structures, and algorithms.
  • Hands-on with NumPy, Pandas, Scikit-learn for ML prototyping.


Machine Learning Frameworks:

  • Understanding of supervised/unsupervised learning, regularization, feature engineering, model selection, cross-validation, ensemble methods (XGBoost, LightGBM).


Deep Learning Techniques:

  • Proficiency with PyTorch or TensorFlow/Keras
  • Knowledge of CNNs, RNNs, LSTMs, Transformers, Attention mechanisms.
  • Familiarity with optimization (Adam, SGD), dropout, batch norm.


LLMs & RAG:

  • Hugging Face Transformers (tokenizers, embeddings, model fine-tuning).
  • Vector databases (Milvus, FAISS, Pinecone, Elasticsearch); a minimal FAISS indexing sketch follows this list.
  • Prompt engineering, function/tool calling, JSON schema outputs.
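
As a concrete illustration of the vector-database bullet above, the sketch below builds a FAISS index and runs a nearest-neighbour query. It assumes the faiss-cpu and numpy packages are installed; the random vectors stand in for embeddings that would normally come from a Hugging Face model.

# Minimal FAISS sketch: index some embedding vectors and run a nearest-neighbour
# query. Assumes faiss-cpu and numpy are installed. The random vectors stand in
# for real embeddings (e.g. produced by a Hugging Face model).
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)

doc_vectors = rng.standard_normal((1000, dim)).astype("float32")
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)   # unit-normalise

index = faiss.IndexFlatIP(dim)        # inner product == cosine on unit vectors
index.add(doc_vectors)

query = rng.standard_normal((1, dim)).astype("float32")
query /= np.linalg.norm(query)

scores, ids = index.search(query, 5)  # top-5 nearest documents
print("top document ids:", ids[0], "scores:", scores[0])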


Data & Tools:

  • SQL fundamentals; exposure to data wrangling and pipelines.
  • Git/GitHub, Jupyter, basic Docker.


WHAT ARE WE LOOKING FOR?

  • Solid academic foundation with strong applied ML/DL exposure.
  • Curiosity to learn cutting-edge AI and willingness to experiment.
  • Clear communicator who can explain ML/LLM trade-offs simply.
  • Strong problem-solving and ownership mindset.


MINIMUM QUALIFICATIONS:

  • Doctorate (Academic) Degree and 2 years related work experience; Master's Level Degree and related work experience of 5 years; Bachelor's Level Degree and related work experience of 7 years in building AI systems/solutions with Machine Learning, Deep Learning, and LLMs.


MUST-HAVES:

  • Education/qualification: Preferably from premier institutes such as IIT, IISc, IIIT, NIT, and BITS, or regional tier-1 colleges.


  • Doctorate (Academic) Degree and 2 years related work experience; or Master's Level Degree and related work experience of 5 years; or Bachelor's Level Degree and related work experience of 7 years


  • Min 5 yrs experience in the Mandatory Skills: Python, Deep Learning, Machine Learning, Algorithm Development and Image Processing


  • 3.5 to 4 yrs proficiency with PyTorch or TensorFlow/Keras


  • Candidates from engineering product companies have higher chances of getting shortlisted (current company or past experience)


QUESTIONNAIRE: 

Do you have at least 5 years of experience with Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing? Please mention the skills and years of experience:


Do you have experience with PyTorch or TensorFlow / Keras?

  • PyTorch
  • TensorFlow / Keras
  • Both


How many years of experience do you have with PyTorch or TensorFlow / Keras?

  • Less than 3 years
  • 3 to 3.5 years
  • 3.5 to 4 years
  • More than 4 years


Is the candidate willing to relocate to Chennai?

  • Ready to relocate
  • Based in Chennai


What type of company have you worked for in your career?

  • Service-based IT company
  • Product company
  • Semiconductor company
  • Hardware manufacturing company
  • None of the above
Read more
Gyansys Infotech
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
Machine Learning (ML)
Deep Learning
TensorFlow
Keras
PyTorch
+5 more

Role: Sr. Data Scientist

Exp: 4 -8 Years

CTC: up to 28 LPA


Technical Skills:

● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.

● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.

● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.

● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.

● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT); a minimal transfer-learning sketch follows this list.

● Exposure to natural language processing (NLP) techniques is a plus.
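
To ground the transfer-learning requirement above, here is a minimal PyTorch sketch that adapts a ResNet-18 backbone to a new classification head. It assumes torch and torchvision are installed; the number of classes, the frozen backbone, and the random batch are illustrative choices only.

# Minimal transfer-learning sketch: reuse a ResNet-18 backbone and train only a
# new classification head. Assumes torch/torchvision; num_classes, the frozen
# backbone, and the random batch are illustrative.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5                                   # hypothetical target classes

model = models.resnet18(weights=None)             # real use would load pretrained ImageNet weights
for param in model.parameters():                  # freeze the backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (real code loops over a DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")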


Cloud & Infrastructure:

● Strong expertise in the Azure cloud ecosystem.

● Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.


If interested kindly share your updated resume at 82008 31681

Read more
GyanSys Inc.
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
Machine Learning (ML)
Data Science
Python
PyTorch
TensorFlow
+5 more

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 25 LPA



Technical Skills:

● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.

● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.

● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.

● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.

● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).

● Exposure to natural language processing (NLP) techniques is a plus.



• Educational Qualifications:

  • B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
  • A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.



If interested share your resume on 82008 31681

Read more
GyanSys Inc.
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹15L / yr
Data Science
CI/CD
Natural Language Processing (NLP)
Machine Learning (ML)
TensorFlow
+5 more

Role: Sr. Data Scientist

Exp: 4-8 Years

CTC: up to 25 LPA



Technical Skills:

● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.

● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.

● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.

● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.

● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).

● Exposure to natural language processing (NLP) techniques is a plus.



• Educational Qualifications:

  • B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
  • A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.


Read more
 Global Leader in Diversified Electronics

Global Leader in Diversified Electronics

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
7 - 16 yrs
₹30L - ₹65L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Algorithms
Python
C++
+10 more

JOB DESCRIPTION/PREFERRED QUALIFICATIONS:

KEY RESPONSIBILITIES: 

  • Lead and mentor a team of algorithm engineers, providing guidance and support to ensure their professional growth and success. 
  • Develop and maintain the infrastructure required for the deployment and execution of algorithms at scale. 
  • Collaborate with data scientists, software engineers, and product managers to design and implement robust and scalable algorithmic solutions. 
  • Optimize algorithm performance and resource utilization to meet business objectives.
  • Stay up to date with the latest advancements in algorithm engineering and infrastructure technologies and apply them to improve our systems.
  • Drive continuous improvement in development processes, tools, and methodologies. 


QUALIFICATIONS: 

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 
  • Proven experience in developing computer vision and image processing algorithms and ML/DL algorithms. 
  • Familiar with high performance computing, parallel programming and distributed systems.
  • Strong leadership and team management skills, with a track record of successfully leading engineering teams. 
  • Proficiency in programming languages such as Python, C++ and CUDA. 
  • Excellent problem-solving and analytical skills. 
  • Strong communication and collaboration abilities. 


PREFERRED QUALIFICATIONS: 

  • Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn). 
  • Experience with GPU architecture and algo development toolkits like Docker, Apptainer. 


MINIMUM QUALIFICATIONS: 

  • Bachelor's degree plus 8 + years of experience 
  • Master's degree plus 8 + years of experience 
  • Familiar with high performance computing, parallel programming and distributed systems.


MUST-HAVE SKILLS: 

  • PhD with 6 yrs of industry experience, or M.Tech + 8 yrs of experience, or B.Tech + 10 yrs of experience.
  • 14 yrs of experience if applying for an IC role.
  • Minimum 1 year of experience working as a Manager/Lead.
  • 8 years' experience in any of the programming languages such as Python/C++/CUDA.
  • 8 years' experience in Machine Learning, Artificial Intelligence, and Deep Learning.
  • 2 to 3 years of experience in Image Processing & Computer Vision is a MUST.
  • Product / Semiconductor / Hardware Manufacturing company experience is a MUST. Candidates should be from engineering product companies.
  • Candidates from Tier-1 colleges (IIT, IIIT, VIT, NIT) preferred.
  • Relocation to Chennai is mandatory.


NICE TO HAVE SKILLS: 

  • Candidates from semiconductor or manufacturing companies.
  • Candidates with a CGPA above 8.



Read more