
50+ Machine Learning (ML) Jobs in India

Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
MLOps
AWS CloudFormation
DevOps

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience with AWS cloud, including in recent roles
  • Company background: product companies preferred; exceptions for service-company candidates with strong MLOps and AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV attachment is mandatory
  • Please provide your CTC breakup (fixed + variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
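
The observability bullet above (data drift, model drift, alerting) can be sketched in plain Python with a population stability index (PSI) check. The bucket count, the 0.2 alert threshold, and the toy distributions below are illustrative assumptions, not part of the role description; in practice the resulting metric would be pushed to CloudWatch or Prometheus.

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a baseline (training-time)
    feature distribution and a live (production) one. PSI > 0.2 is a
    common rule of thumb for 'significant drift'."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))

    def fractions(values):
        counts = [0] * buckets
        for v in values:
            i = min(int((v - lo) / (hi - lo) * buckets), buckets - 1) if hi > lo else 0
            counts[i] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]            # training-time feature values
live_ok = [0.1 * i for i in range(100)]             # identical distribution
live_shifted = [0.1 * i + 5.0 for i in range(100)]  # shifted distribution

print(f"PSI, stable traffic:  {psi(baseline, live_ok):.3f}")
print(f"PSI, shifted traffic: {psi(baseline, live_shifted):.3f}")
```

A production monitor would run a check like this per feature on a schedule and raise an alert when the index crosses the chosen threshold.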

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.


Read more
Remote only
0 - 0 yrs
₹1.8L - ₹2.4L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
PHP
Amazon Web Services (AWS)
WordPress

Job Title: Technology Intern

Location: Remote (India)

Shift Timings:

  • 5:00 PM – 2:00 AM
  • 6:00 PM – 3:00 AM

Compensation: Stipend


Job Summary

ARDEM is looking for enthusiastic Technology Interns from Tier 1 colleges who are eager to build hands-on experience across web technologies, cloud platforms, and emerging technologies such as AI/ML. This role is ideal for final-year students (2026 pass-outs) or fresh graduates seeking real-world exposure in a fast-growing, technology-driven organization.


Eligibility & Qualifications

  • Education:
  • B.Tech (Computer Science) / M.Tech (Computer Science)
  • Tier 1 colleges preferred
  • Final-semester students pursuing graduation (2026 pass-outs) or recent graduates
  • Experience Level: Fresher
  • Communication: Excellent English communication skills (verbal & written)

Skills Required

Technical & Development Skills

  • Basic understanding of AI / Machine Learning concepts
  • Exposure to AWS (deployment or cloud fundamentals)
  • PHP development
  • WordPress development and customization
  • JavaScript (ES5 / ES6+)
  • jQuery
  • AJAX calls and asynchronous handling
  • Event handling
  • HTML5 & CSS3
  • Client-side form validation


Work Environment & Tools

  • Comfortable working in a remote setup
  • Familiarity with collaboration and remote access tools


Additional Requirements (Work-from-Home Setup)

This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:

  • System: Laptop or Desktop (Windows-based)
  • Operating System: Windows
  • Screen Size: Minimum 14 inches
  • Screen Resolution: Full HD (1920 × 1080)
  • Processor: Intel i5 or higher
  • RAM: Minimum 8 GB (Mandatory)
  • Software: AnyDesk
  • Internet Speed: 100 Mbps or higher


About ARDEM


ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.

Read more
Lower Parel
2 - 4 yrs
₹6L - ₹7.2L / yr
React.js
Next.js
Node.js
GraphQL
RESTful APIs

Senior Full Stack Developer – Analytics Dashboard

Job Summary

We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.

The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.

Key Responsibilities

  • Design and develop a full-stack web application using modern technologies.
  • Build scalable backend APIs to handle data ingestion, processing, and storage.
  • Develop interactive dashboards and data visualisations for business reporting.
  • Implement secure user authentication and role-based access.
  • Integrate with third-party APIs using OAuth and REST protocols.
  • Design efficient database schemas for analytical workloads.
  • Implement background jobs and scheduled tasks for data syncing.
  • Ensure performance, scalability, and reliability of the system.
  • Write clean, maintainable, and well-documented code.
  • Collaborate with product and design teams to translate requirements into features.

Required Technical Skills

Frontend

  • Strong experience with React.js
  • Experience with Next.js
  • Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
  • Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)

Backend

  • Strong experience with Node.js (Express or NestJS)
  • REST and/or GraphQL API development
  • Background job systems (cron, queues, schedulers)
  • Experience with OAuth-based integrations

Database

  • Strong experience with PostgreSQL
  • Data modelling and performance optimisation
  • Writing complex analytical SQL queries

DevOps / Infrastructure

  • Cloud platforms (AWS)
  • Docker and basic containerisation
  • CI/CD pipelines
  • Git-based workflows

Experience & Qualifications

  • 5+ years of professional full stack development experience.
  • Proven experience building production-grade web applications.
  • Prior experience with analytics, dashboards, or data platforms is highly preferred.
  • Strong problem-solving and system design skills.
  • Comfortable working in a fast-paced, product-oriented environment.

Nice to Have (Bonus Skills)

  • Experience with data pipelines or ETL systems.
  • Knowledge of Redis or caching systems.
  • Experience with SaaS products or B2B platforms.
  • Basic understanding of data science or machine learning concepts.
  • Familiarity with time-series data and reporting systems.
  • Familiarity with meta ads/Google ads API

Soft Skills

  • Strong communication skills.
  • Ability to work independently and take ownership.
  • Attention to detail and focus on code quality.
  • Comfortable working with ambiguous requirements.

Ideal Candidate Profile (Summary)

A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.

Read more
Voiceoc

Posted by Bisman Gill
Noida
5 - 7 yrs
Up to ₹30L / yr (varies)
Machine Learning (ML)
Python
Large Language Models (LLM) tuning
SaaS
Team Management

About Voiceoc

Voiceoc is a Delhi-based health tech startup started with a vision to help healthcare companies around the globe by leveraging Voice & Text AI. We began operations in August 2020, and today the leading healthcare companies of the US, India, the Middle East & Africa use Voiceoc as a channel to communicate with thousands of patients on a daily basis.


Website: https://www.voiceoc.com/


Responsibilities Include (but not limited to):

We’re looking for a hands-on Chief Technology Officer (CTO) to lead all technology initiatives for Voiceoc’s US business.


This role is ideal for someone who combines strong engineering leadership with deep AI product-building experience — someone who can code, lead, and innovate at the same time.


The CTO will manage the engineering team, guide AI development, interface with clients for technical requirements, and ensure scalable, reliable delivery of all Voiceoc platforms.

Technical Leadership

  • Own end-to-end architecture, development, and deployment of Voiceoc’s AI-driven Voice & Text platforms.
  • Work closely with the Founder to define the technology roadmap, ensuring alignment with business priorities and client needs.
  • Oversee AI/ML feature development — including LLM integrations, automation workflows, and backend systems.
  • Ensure system scalability, data security, uptime, and performance across all active deployments (US Projects).
  • Collaborate with the AI/ML engineers to guide RAG pipelines, voicebot logic, and LLM prompt optimization.

Hands-On Contribution

  • Actively contribute to the core codebase (preferably Python/FastAPI/Node).
  • Lead by example in code reviews, architecture design, and debugging.
  • Experiment with LLM frameworks (OpenAI, Gemini, Mistral, etc.) and explore their applications in healthcare automation.

Product & Delivery Management

  • Translate client requirements into clear technical specifications and deliverables.
  • Oversee product versioning, release management, QA, and DevOps pipelines.
  • Collaborate with client success and operations teams to handle technical escalations, performance issues, and integration requests.
  • Drive AI feature innovation — identify opportunities for automation, personalization, and predictive insights.

Team Management

  • Manage and mentor an 8–10 member engineering team.
  • Conduct weekly sprint reviews, define coding standards, and ensure timely, high-quality delivery.
  • Hire and train new engineers to expand Voiceoc’s technical capability.
  • Foster a culture of accountability, speed, and innovation.

Client-Facing & Operational Ownership

  • Join client calls (US-based hospitals) to understand technical requirements or resolve issues directly.
  • Collaborate with the founder on technical presentations and proof-of-concept discussions.
  • Handle A–Z of tech operations for the US business — infrastructure, integrations, uptime, and client satisfaction.

Technical Requirements

Must-Have:

  • 5-7 years of experience in software engineering with at least 2+ years in a leadership capacity.
  • Strong proficiency in Python (FastAPI, Flask, or Django).
  • Experience integrating OpenAI / Gemini / Mistral / Whisper / LangChain.
  • Solid experience with AI/ML model integration, LLMs, and RAG pipelines.
  • Proven expertise in cloud deployment (AWS / GCP), Docker, and CI/CD.
  • Strong understanding of backend architecture, API integrations, and system design.
  • Experience building scalable, production-grade SaaS or conversational AI systems.
  • Excellent communication and leadership skills — capable of interfacing with both engineers and clients.

Good to Have (Optional):

  • Familiarity with telephony & voice tech stacks (Twilio, Exotel, Asterisk etc.).

What We Offer

  • Opportunity to lead the entire technology vertical for a growing global healthtech startup.
  • Direct collaboration with the Founder/CEO on strategy and innovation.
  • Competitive compensation — salary + meaningful equity stake.
  • Dynamic and fast-paced work culture with tangible impact on global healthcare.

Other Details

  • Work Mode: Hybrid - Noida (Office) + Home
  • Work Timing: US Hours
Read more
Unilog

Posted by Bisman Gill
Remote, BLR, Mysore
8+ yrs
Up to ₹52L / yr (varies)
Machine Learning (ML)
Artificial Intelligence (AI)
Google Vertex AI
Agentic AI
PyTorch

About Unilog

Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.

With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.

Unilog’s Mission Statement

At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.


Designation:- AI Architect

Location: Bangalore/Mysore/Remote  

Job Type: Full-time  

Department: Software R&D  


About the Role  

We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.  

As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.


Key Responsibilities  

Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation. 


Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.


AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques.


Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.  
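
As an illustration of the retrieval half of the RAG workflows mentioned above: a minimal cosine-similarity search over pre-computed embedding vectors, in plain Python. The toy 3-dimensional vectors and document snippets are invented for the sketch; a real system would use model-generated embeddings stored in a vector database such as Pinecone, FAISS, or PGVector.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: (snippet, embedding) pairs. In a real RAG pipeline these
# vectors come from an embedding model and live in a vector database.
corpus = [
    ("Reset a forgotten password from the login page.", [0.9, 0.1, 0.0]),
    ("Export product content to the eCommerce storefront.", [0.1, 0.8, 0.3]),
    ("Bulk-update SKUs through the PIM import tool.", [0.0, 0.3, 0.9]),
]

def retrieve(query_vec, k=2):
    """Top-k retrieval: rank snippets by cosine similarity to the query,
    then hand the winners to the LLM as grounding context."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, doc[1]), reverse=True)
    return [snippet for snippet, _ in ranked[:k]]

query = [0.05, 0.75, 0.4]   # pretend embedding of "how do I publish my catalog?"
print(retrieve(query))
```

The generation half of the pipeline then prepends the retrieved snippets to the LLM prompt; optimizing which and how many snippets to include is the tuning work the role describes.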


Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.  


Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.


Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.  


Required Qualifications  


  1. 8-14 years of experience in AI/ML, software engineering, or a related field.  
  2. Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
  3. Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.  
  4. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.  
  5. Strong problem-solving skills and a passion for innovation.  
  6. Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.  


Preferred Qualifications  

  • Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.  
  • Knowledge of data pipelines, MLOps, and AI governance.  
  • Contributions to open-source AI/ML projects or published research papers.  


Why Join Us?  

  • Work on cutting-edge AI/ML innovations with the CTO Office.  
  • Influence the company’s future AI strategy and shape emerging technologies.  
  • Competitive compensation, growth opportunities, and a culture of continuous learning.    


About our Benefits:

Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.


Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class. 

Read more
Financial Services

Agency job
via WITS Innovation Lab by kanchan Tigga
The Capital, Bandra (East), Mumbai
2.5 - 10 yrs
₹5L - ₹22L / yr
Jenkins
Python
Shell Scripting
MLOps
DevOps

Responsibilities


  • Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
  • Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
  • Automate the training, testing and deployment processes for machine learning models
  • Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability
  • Implement best practices for version control, model reproducibility and governance
  • Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
  • Troubleshoot and resolve issues related to model deployment and performance
  • Ensure compliance with security and data privacy standards in all MLOps activities
  • Keep up to date with the latest MLOps tools, technologies and trends
  • Provide support and guidance to other team members on MLOps practices
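
One concrete take on the "version control, model reproducibility and governance" bullet above: fingerprint each training run by hashing its code version, hyperparameters, and data snapshot identifier, so any model in production can be traced back to exactly what produced it. The field names and values here are illustrative assumptions, not a prescribed schema.

```python
import hashlib
import json

def run_fingerprint(git_commit, hyperparams, data_snapshot):
    """Deterministic id for a training run: identical inputs yield the
    same id, so a deployed model artifact can be traced to its lineage."""
    payload = json.dumps(
        {"commit": git_commit, "params": hyperparams, "data": data_snapshot},
        sort_keys=True,  # key order must not change the hash
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

fp1 = run_fingerprint("3fa2c1d", {"lr": 0.01, "depth": 6}, "snapshot-2024-06-01")
fp2 = run_fingerprint("3fa2c1d", {"depth": 6, "lr": 0.01}, "snapshot-2024-06-01")
fp3 = run_fingerprint("3fa2c1d", {"lr": 0.02, "depth": 6}, "snapshot-2024-06-01")

print(fp1 == fp2)  # key order does not matter
print(fp1 != fp3)  # changed hyperparameter -> new lineage id
```

Stamping this id onto the model artifact and the CI/CD deployment record gives the audit trail that governance and compliance reviews typically ask for.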


Required Skills And Experience


  • 3-10 years of experience in MLOps, DevOps or a related field
  • Bachelor’s degree in Computer Science, Data Science, or a related field
  • Strong understanding of machine learning principles and model lifecycle management
  • Experience in Jenkins pipeline development
  • Experience in automation scripting


Read more
Client is at the cutting-edge of AI, Psychology and large-scale data. We believe that we have an opportunity (and even a responsibility) to personalize and humanize how people interact over the internet; and an opportunity to inspire far more trustworthy relationships online than it has ever been possible before.


Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore)
10 - 15 yrs
₹60L - ₹90L / yr
Python
JavaScript
Node.js
Artificial Intelligence (AI)
MySQL

10+ years of experience in successfully building, deploying, and running complex, large-scale web or data products.

● Proven Management Experience: Demonstrated success managing a team of 5+ engineers for at least 2 years (managing timelines, performance, and hiring). You know how to transition a team from 'startup chaos' to 'structured agility'.

● Full-stack Authority: Deep expertise with JavaScript, Node.js, MySQL, and Python. You must have world-class expertise in at least one area but possess a solid understanding of the entire stack in a multi-tier environment.

● Architectural Track Record: Has built at least two professional-grade products as the tech owner/architect and led the delivery of complex products from conception to release.

● Experience in working with REST APIs, Machine Learning, Algorithms & AWS.

● Familiar with visualization libraries and database technologies.

● Your reputation in the technology community within your domain.

● Your participation and success in competitive programming.

● Work on unusual/extraordinary hobby projects during school/college that were not part of the curriculum.

● The school that you come from and organizations where you have worked earlier.

Read more
KGiSL MICROCOLLEGE
Posted by Hiring Recruitment
Pollachi
1 - 5 yrs
₹2L - ₹3L / yr
HTML/CSS
Artificial Intelligence (AI)
Python
Machine Learning (ML)
JavaScript


Job Description: Technical Trainer - Pollachi

Willing to travel within a 30 km radius of Pollachi.

Expertise: HTML, CSS, JavaScript, Python, Artificial Intelligence (AI), and Machine Learning (ML); IoT and Robotics optional.

Work Location: Flexible (work from home & office available)

Target Audience: School students and teachers

Employment Type: Full-time


Key Responsibilities:

  • Develop and deliver content in an easy-to-understand format suitable for varying audience levels.
  • Prepare training materials, exercises, and assessments to evaluate participant progress and measure their learning outcomes. Adapt teaching methods to suit both in-person (office) and virtual (work-from-home) formats.
  • Stay updated with the latest trends and tools in technology to ensure high-quality training delivery.

Read more
Matchmaking platform


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹21L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python, with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, and gradient boosting
  • Hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs, CI/CD, and Docker, and working in AWS or GCP environments
  • Preferred (company): must be from product companies
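
A minimal plain-Python sketch of one of the classical algorithms named in the criteria above: logistic regression trained by gradient descent on an invented, linearly separable toy dataset. Production work would of course use scikit-learn or similar; this only shows the mechanics.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Single-feature logistic regression: fit weight w and bias b by
    gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: label is 1 when the feature exceeds ~2.
xs = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
ys = [0, 0, 0, 1, 1, 1]

w, b = train_logistic(xs, ys)
predict = lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0
print([predict(x) for x in [1.0, 3.0]])
```

The same gradient-descent skeleton generalizes to the other listed algorithms once the loss and model are swapped out.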

 

Job Specific Criteria

  • CV attachment is mandatory
  • What is your current company?
  • Which use cases do you have hands-on experience with?
  • Are you open to the Mumbai location (if the candidate is from outside Mumbai)?
  • Reason for change (if the candidate has been in the current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs
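
The A/B testing responsibility above can be illustrated with a standard two-proportion z-test in plain Python. The conversion counts are made up for the example, and the 1.96 cutoff corresponds to a two-sided 5% significance level.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for comparing conversion rates of variants A and B
    under a pooled-proportion null hypothesis."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 10,000 users per arm, 10% vs 11% conversion.
z = two_proportion_z(conv_a=1000, n_a=10_000, conv_b=1100, n_b=10_000)
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

Guardrail metrics (e.g. latency, complaint rate) would get the same test in the opposite direction: the experiment ships only if the primary metric wins and no guardrail significantly regresses.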


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.


Read more
SimplyFI Softech

Posted by Shivangi Ahuja
Mumbai
2 - 4 yrs
Up to ₹9L / yr (varies)
Python
Machine Learning (ML)
NumPy
pandas
scikit-learn

About the Company

SimplyFI Softech India Pvt. Ltd. is a product-led company working across AI, Blockchain, and Cloud. The team builds intelligent platforms for fintech, SaaS, and enterprise use cases, focused on solving real business problems with production-grade systems.


Role Overview

This role is for someone who enjoys working hands-on with data and machine learning models. You’ll support real-world AI use cases end to end, from data prep to model integration, while learning how AI systems are built and deployed in production.


Key Responsibilities

  • Design, develop, and deploy machine learning models with guidance from senior engineers
  • Work with structured and unstructured datasets for cleaning, preprocessing, and feature engineering
  • Implement ML algorithms using Python and standard ML libraries
  • Train, test, and evaluate models and track performance metrics
  • Assist in integrating AI/ML models into applications and APIs
  • Perform basic data analysis and visualization to extract insights
  • Participate in code reviews, documentation, and team discussions
  • Stay updated on ML, AI, and Generative AI trends
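
The "train, test, and evaluate models and track performance metrics" responsibility above usually reduces to a holdout split plus a few standard metrics. A minimal plain-Python sketch; the toy dataset and the trivial threshold "model" are invented purely for illustration:

```python
import random

def train_test_split(data, test_frac=0.25, seed=42):
    """Hold out a fraction of the data; a fixed seed keeps the split
    reproducible across runs."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy dataset: (feature, label), label 1 when feature > 5.
data = [(x, int(x > 5)) for x in range(20)]
train, test = train_test_split(data)

# Trivial 'model': pick a separating threshold from the training data only.
threshold = max(x for x, y in train if y == 0)
y_true = [y for _, y in test]
y_pred = [int(x > threshold) for x, _ in test]

p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")
```

Fitting the threshold on the training split and scoring on the holdout is the core discipline; the same loop carries over directly to scikit-learn estimators.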


Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, AI, Data Science, or a related field
  • Strong foundation in Python
  • Clear understanding of core ML concepts: supervised and unsupervised learning
  • Hands-on exposure to NumPy, Pandas, and Scikit-learn
  • Basic familiarity with TensorFlow or PyTorch
  • Understanding of data structures, algorithms, and statistics
  • Good analytical thinking and problem-solving skills
  • Comfortable working in a fast-moving product environment


Good to Have

  • Exposure to NLP, Computer Vision, or Generative AI
  • Experience with Jupyter Notebook or Google Colab
  • Basic knowledge of SQL or NoSQL databases
  • Understanding of REST APIs and model deployment concepts
  • Familiarity with Git/GitHub
  • AI/ML internships or academic projects
Read more
Rokkun Systems Private Limited

Rokkun Systems Private Limited

Agency job
via Thomasmount Consulting by Shirin Shahana
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹18L / yr
Angular
PHP
Artificial Intelligence (AI)
Machine Learning (ML)

Role Overview:

We are looking for a PHP & Angular Developer to build and maintain scalable full-stack web applications. The role requires strong backend expertise in PHP, solid frontend development using Angular, and exposure or interest in AI/ML-powered features.

Key Responsibilities:

  • Develop and maintain applications using PHP and Angular
  • Build and consume RESTful APIs
  • Create reusable Angular components using TypeScript
  • Work with MySQL/PostgreSQL databases
  • Collaborate with Product, QA, and AI/ML teams
  • Integrate AI/ML APIs where applicable
  • Ensure performance, security, and scalability
  • Debug and resolve production issues

Required Skills:

  • 5–7 years experience in PHP development
  • Strong hands-on with Laravel / CodeIgniter
  • Experience with Angular (v10+)
  • HTML, CSS, JavaScript, TypeScript
  • REST APIs, JSON
  • MySQL / PostgreSQL
  • Git, MVC architecture

Good to Have:

  • Exposure to AI/ML concepts or API integrations
  • Python-based ML services (basic)
  • Cloud platforms (AWS / Azure / GCP)
  • Docker, CI/CD
  • Agile/Scrum experience
  • Product/start-up background



Read more
Planview

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8+ yrs
Up to ₹72L / yr (varies)
skill iconPython
SQL
skill iconAmazon Web Services (AWS)
skill iconMachine Learning (ML)
Large Language Models (LLM) tuning
+2 more

The Opportunity

Planview is looking for a passionate Sr Data Scientist to join our team tasked with developing innovative tools for connected work. You are an expert in supporting enterprise applications using Data Analytics, Machine Learning, and Generative AI.

You will use this experience to lead other data scientists and data engineers. You will also engage effectively with product teams to specify, validate, prototype, scale, and deploy features with a consistent customer experience across the Planview product suite.

Responsibilities (What you'll do)

  • Enable Data Science features within Planview applications by working with a fast-paced start-up mindset.
  • Collaborate closely with product management to enable Data Science features that deliver significant value to customers, ensuring that these features are optimized for operational efficiency.
  • Manage every stage of the AI/ML development lifecycle, from initial concept through deployment in a production environment.
  • Provide leadership to other Data Scientists by exemplifying exceptional quality in work, nurturing a culture of continuous learning, and offering daily guidance in their research endeavors.
  • Effectively communicate ideas drawn from complex data with clarity and insight.


Qualifications (What you'll bring)

  • Master’s in operations research, Statistics, Computer Science, Data Science, or related field.
  • 8+ years of experience as a data scientist, data engineer, or ML engineer.
  • Demonstrable history of bringing Data Science features to Enterprise applications.
  • Exceptional Python and SQL coding skills.
  • Experience with Optimization, Machine Learning, Generative AI, NLP, Statistics, and Simulation.
  • Experience with AWS data and ML technologies (SageMaker, Glue, Athena, Redshift).


Preferred qualifications:

  • Experience working with datasets in the domains of project management, software development, and resource planning.
  • Experience with common libraries and frameworks in data science (Scikit Learn, TensorFlow, PyTorch).
  • Experience with ML platform tools (AWS SageMaker).
  • Skilled at working as part of a global, diverse workforce of high-performing individuals.
  • AWS Certification is a plus
Studymitr
Posted by Aditi Shinde
Jaipur | 2 - 4 yrs | ₹3L - ₹4.6L / yr
Skills: Artificial Intelligence (AI), Machine Learning (ML), AWS Lambda, Python, RESTful APIs

About Role

We are looking for a hands-on Python Engineer with strong experience in backend development, AI-driven systems, and cloud infrastructure. The ideal candidate should be comfortable working across Python services, AI/ML pipelines, and cloud-native environments, and capable of building production-grade, scalable systems.

This role offers high ownership, exposure to real-world AI systems, and long-term growth, making it ideal for engineers who want to build meaningful products rather than just features.


Key Responsibilities

  • Design, develop, and maintain scalable backend services using Python
  • Build APIs and services using FastAPI, Flask, or Django
  • Ensure performance, reliability, and scalability of backend systems
  • Integrate AI/ML models into production systems (model inference, automation)
  • Build and maintain AI pipelines for data processing and inference
  • Deploy and manage applications on AWS, with exposure to GCP and Azure
  • Implement CI/CD pipelines, containerization, and cloud deployments
  • Collaborate with product, frontend, and AI teams on end-to-end delivery
  • Optimize cloud infrastructure for cost, performance, and reliability
  • Follow best practices for security, monitoring, and logging


Required Qualifications

  • 2–4 years of professional experience in Python development
  • Strong understanding of backend frameworks: FastAPI, Flask, Django
  • Hands-on experience integrating AI/ML systems into applications
  • Solid experience with AWS (EC2, S3, Lambda, RDS, IAM)
  • Exposure to Google Cloud Platform (GCP) and Microsoft Azure
  • Experience with Docker and CI/CD workflows
  • Understanding of scalable system design principles
  • Strong problem-solving and debugging skills
  • Ability to work collaboratively in a product-driven environment


Perks and Benefits

  • Work in a Nikhil Kamath-funded startup
  • ₹3 – ₹4.6 LPA with ESOPs linked to performance and tenure
  • Opportunity to build long-term wealth through ESOP participation
  • Work on production-scale AI systems used in real-world applications
  • Hands-on experience with AWS, GCP, and Azure architectures
  • Work with a team that values clean engineering, experimentation, and execution
  • Exposure to modern backend frameworks, AI pipelines, and DevOps practices
  • High autonomy, fast decision-making, and real ownership of features and systems



Impacto Digifin Technologies
Posted by Navitha Reddy
Bengaluru (Bangalore) | 2 - 3 yrs | ₹6L - ₹8L / yr
Skills: Machine Learning (ML), Deep Learning, Natural Language Processing (NLP), Voice over IP (VoIP), Artificial Intelligence (AI)

Job Title: AI/ML Engineer – Voice (2–3 Years)

Location: Bengaluru (On-site)

Employment Type: Full-time


About Impacto Digifin Technologies

Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.


We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.


Role Overview

We are looking for an AI Engineer – Voice with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.


This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.


You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.


This role is collaborative, cross-functional, and hands-on.


Key Responsibilities

Voice AI Engineering

  • Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
  • Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents.
  • Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
  • Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
  • Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization).
  • Implement voice understanding modules, speech intent extraction, and context tracking.
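One turn of such a voice pipeline can be sketched in a few lines. The STT/LLM/TTS functions below are hypothetical stand-ins (no vendor API is implied); the point is only how the stages compose:

```python
from dataclasses import dataclass

# Hypothetical stubs standing in for real STT / LLM / TTS services.
def speech_to_text(audio: bytes) -> str:
    return audio.decode("utf-8")  # pretend the audio is already transcribed

def llm_reply(prompt: str) -> str:
    return f"You said: {prompt}"

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")

@dataclass
class VoiceTurn:
    transcript: str
    reply_text: str
    reply_audio: bytes

def handle_voice_turn(audio: bytes) -> VoiceTurn:
    """One conversational turn: STT -> LLM -> TTS."""
    transcript = speech_to_text(audio)
    reply_text = llm_reply(transcript)
    return VoiceTurn(transcript, reply_text, text_to_speech(reply_text))

turn = handle_voice_turn(b"hello")
print(turn.reply_text)
```

In a real system each stub would be a streaming call (chunked audio in, tokens out), with VAD deciding when a turn starts and ends.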

Machine Learning & Deep Learning

  • Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
  • Train and fine-tune neural networks for text, speech, and multimodal tasks.
  • Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
  • Perform feature engineering, model evaluation, retraining, and continuous learning cycles.

NLP, LLMs & GenAI

  • Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
  • Work with LLM architectures for text + voice workflows
  • Build GenAI-based workflows and integrate models into production systems.
  • Implement RAG pipelines and agent-based systems for complex automation.

Fintech & Banking AI

  • Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
  • Understand fintech data structures and constraints while designing AI models.

Engineering, Deployment & Collaboration

  • Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
  • Build robust APIs and services for voice and ML-based functionalities.
  • Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
  • Document systems and contribute to internal knowledge bases and R&D.

Security & Compliance

  • Follow fundamental best practices for AI security, access control, and safe data handling.
  • Awareness of financial compliance standards (a plus, not mandatory).
  • Follow internal guidelines on PII, audio data, and model privacy.

Primary Skills (Must-Have)

Core AI

  • Machine Learning fundamentals
  • Deep Learning architectures
  • NLP pipelines and transformers
  • LLM usage and integration
  • GenAI development
  • Voice AI (STT, TTS, VAD, real-time pipelines)
  • Audio processing fundamentals
  • Model building, tuning, and retraining
  • RAG systems
  • AI Agents (orchestration, multi-step reasoning)

Voice Engineering

  • End-to-end voice application development
  • Voice calling & telephony integration (framework-agnostic)
  • Realtime STT ↔ LLM ↔ TTS interactive flows
  • Voice chat system development
  • Voice-to-AI model integration for automation

Fintech/Banking Awareness

  • High-level understanding of fintech and banking AI use cases
  • Data patterns in core banking analytics (advantageous)

Programming & Engineering

  • Python (strong competency)
  • Cloud deployment understanding (AWS/Azure/GCP)
  • API development
  • Data processing & pipeline creation

Secondary Skills (Good to Have)

  • MLOps & CI/CD for ML systems
  • Vector databases
  • Prompt engineering
  • Model monitoring & evaluation frameworks
  • Microservices experience
  • Basic UI integration understanding for voice/chat
  • Research reading & benchmarking ability

Qualifications

  • 2–3 years of practical experience in AI/ML/DL engineering.
  • Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
  • Proven hands-on experience building ML/DL/voice pipelines.
  • Experience in fintech or data-intensive domains preferred.

Soft Skills

  • Clear communication and requirement understanding
  • Curiosity and research mindset
  • Self-driven problem solving
  • Ability to collaborate cross-functionally
  • Strong ownership and delivery discipline
  • Ability to explain complex AI concepts simply



Talent Pro
Posted by Mayank choudhary
Mumbai | 2 - 5 yrs | ₹25L - ₹31L / yr
Skills: Data Science, Machine Learning (ML)

Strong Data Scientist / Machine Learning / AI Engineer Profile

Mandatory (Experience 1) – Must have 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models

Mandatory (Experience 2) – Must have strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.

Mandatory (Experience 3) – Must have hands-on experience in at least two use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models

Mandatory (Experience 4) – Must have strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text

Mandatory (Experience 5) – Must have experience productionizing ML models through APIs, CI/CD, and Docker, and working in AWS or GCP environments

Mandatory (Company) – Must be from a product company; avoid candidates from financial domains (e.g., JPMorgan, banks, fintech)
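As a rough illustration of the classical-ML fluency asked for in Experience 2, logistic regression can be fit from scratch with plain gradient descent; the data below is a toy, invented for the example:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b for 1-D logistic regression by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of the mean log-loss w.r.t. w and b.
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, x) -> int:
    return int(sigmoid(w * x + b) >= 0.5)

# Toy, linearly separable data: label is 1 when x > 2.
xs, ys = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0], [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
print([predict(w, b, x) for x in xs])
```

In practice this is one `sklearn.linear_model.LogisticRegression` call; writing it out shows the gradient mechanics interviewers for such roles tend to probe.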

Proximity Works
Posted by Nikita Sinha
Mumbai | 5 - 8 yrs | Up to ₹45L / yr (varies)
Skills: Java, Python, Google Analytics, Vector database, Machine Learning (ML)

We are looking for a Senior Backend Engineer to build and operate the core AI/ML-backed systems that power large-scale, consumer-facing products. You will work on production-grade AI runtimes, retrieval systems, and ML-adjacent backend infrastructure, making pragmatic tradeoffs across quality, latency, reliability, and cost.

This role is not an entry point into AI/ML. You are expected to already have hands-on experience shipping ML-backed backend systems in production.


At Proximity, you won’t just build APIs - you’ll own critical backend systems end-to-end, collaborate closely with Applied ML and Product teams, and help define the foundations that power intelligent experiences at scale.


Responsibilities -

  • Own and deliver end-to-end backend systems for AI product runtime, including orchestration, request lifecycle management, state/session handling, and policy enforcement.
  • Design and implement retrieval and memory primitives end-to-end — document ingestion, chunking strategies, embeddings generation, indexing, vector/hybrid search, re-ranking, caching, freshness, and deletion semantics.
  • Productionize ML workflows and interfaces, including feature and metadata services, online/offline parity, model integration contracts, and evaluation instrumentation.
  • Drive performance, reliability, and cost optimization, owning P50/P95 latency, throughput, cache hit rates, token and inference costs, and infrastructure efficiency.
  • Build observability by default, including structured logs, metrics, distributed tracing, guardrail signals, failure taxonomies, and reliable fallback paths.
  • Collaborate closely with Applied ML teams on model routing, prompt and tool schemas, evaluation datasets, and release safety gates.
  • Write clean, testable, and maintainable backend code, contributing to design reviews, code reviews, and operational best practices.
  • Take systems from design → build → deploy → operate, including on-call ownership and incident response.
  • Continuously identify bottlenecks and failure modes in AI-backed systems and proactively improve system robustness.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
  • 6–10 years of experience building backend systems in production, with 2–3+ years working on ML/AI-backed products such as search, recommendations, ranking, RAG pipelines, or AI assistants.
  • Strong practical understanding of ML system fundamentals, including embeddings, vector similarity, reranking, retrieval quality, and evaluation metrics (precision/recall, nDCG, MRR).
  • Proven experience implementing or operating RAG pipelines, covering ingestion, chunking, indexing, query understanding, hybrid retrieval, and rerankers.
  • Solid distributed systems fundamentals, including API design, idempotency, concurrency, retries, circuit breakers, rate limiting, and multi-tenant reliability.
  • Experience with common ML/AI platform components, such as feature stores, metadata systems, streaming or batch pipelines, offline evaluation jobs, and A/B measurement hooks.
  • Strong proficiency in backend programming languages and frameworks (e.g., Go, Java, Python, or similar) and API development.
  • Ability to work independently, take ownership of complex systems, and collaborate effectively with cross-functional teams.
  • Strong problem-solving, communication, and system-design skills.
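The retrieval metrics named above (nDCG, MRR) are small, self-contained computations; a minimal sketch over binary relevance judgments, with toy inputs:

```python
import math

def mrr(ranked_relevance: list[list[int]]) -> float:
    """Mean Reciprocal Rank: each inner list is 0/1 relevance by rank, per query."""
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def dcg(rels: list[int]) -> float:
    # Discounted cumulative gain with log2(rank + 1) discount.
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels, start=1))

def ndcg(rels: list[int]) -> float:
    # Normalize by the DCG of the ideal (relevance-sorted) ranking.
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal else 0.0

print(mrr([[0, 1, 0], [1, 0, 0]]))   # first relevant hit at rank 2, then rank 1
print(round(ndcg([0, 1, 1]), 4))
```

Graded (non-binary) relevance drops into the same `dcg` formula unchanged; only the input lists carry larger gain values.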


Desired Skills -

  • Experience with agentic runtimes, including tool-calling or function-calling patterns, structured outputs, and production guardrails.
  • Hands-on exposure to vector and hybrid retrieval stacks such as FAISS, Milvus, Pinecone, or Elasticsearch.
  • Experience running systems on Kubernetes, with strong knowledge of observability stacks like OpenTelemetry, Prometheus, Grafana, and distributed tracing.
  • Familiarity with privacy, security, and data governance considerations for user and model data.

Benefits

  • Best in class compensation: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet engineers, designers, and product leaders — and learn from experts across domains.

  • Keep on learning with a world-class team: Work on real, production AI systems at scale, challenge yourself daily, and grow alongside some of the best minds in the industry.



Leading digital testing boutique firm
Agency job via Peak Hire Solutions by Dhara Thakkar
Delhi | 5 - 8 yrs | ₹11L - ₹15L / yr
Skills: Artificial Intelligence (AI), Machine Learning (ML), Software Testing (QA), Natural Language Processing (NLP), Analytics

Review Criteria

  • Strong AI/ML Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
  • Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
  • Must have strong experience to design AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
  • Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
  • Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
  • Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.


Job Specific Criteria

  • CV Attachment is mandatory
  • Have you worked with large datasets for AI/ML testing?
  • Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
  • Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5+ years’ experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models
  • Proven expertise in validating AI models for accuracy, bias, explainability, and performance, ensuring decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent
  • Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules
  • Strong data science skills, equipped to design test strategies for AI systems, including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficient in defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter), with the ability to integrate them into CI/CD pipelines
  • Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Education: Bachelor’s/Master’s in Engineering, CS/IT, Data Science, or equivalent
  • Preferred certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials
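The boundary-testing and adversarial-input ideas above reduce to ordinary assertions; a toy sketch against a hypothetical supplier-scoring function (the model and its rules are invented for illustration, and the functions run under PyTest or standalone):

```python
# Hypothetical supplier-scoring "model": score in [0, 1], higher is better.
def score_supplier(price: float, rating: float) -> float:
    rating = max(0.0, min(5.0, rating))          # clamp to the valid 0-5 range
    price_term = 1.0 / (1.0 + max(price, 0.0))   # cheaper -> higher score
    return 0.5 * price_term + 0.5 * (rating / 5.0)

# Boundary testing: extreme-but-valid inputs must stay in range.
def test_score_bounds():
    for price, rating in [(0.0, 0.0), (0.0, 5.0), (1e9, 2.5)]:
        assert 0.0 <= score_supplier(price, rating) <= 1.0

# Adversarial input simulation: out-of-range ratings must not inflate scores.
def test_rating_clamped():
    assert score_supplier(10.0, 999.0) == score_supplier(10.0, 5.0)

# Metamorphic property: raising the price must never raise the score.
def test_price_monotone():
    assert score_supplier(20.0, 4.0) <= score_supplier(10.0, 4.0)

test_score_bounds()
test_rating_clamped()
test_price_monotone()
print("all checks passed")
```

Real marketplace models are tested the same way, with the invariants (clamping, monotonicity, score range) derived from procurement rules instead of invented here.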




Techgenzi Private Limited
Coimbatore | 2 - 5 yrs | ₹6L - ₹12L / yr
Skills: Python, Machine Learning (ML), RESTful APIs, Artificial Intelligence (AI), Large Language Models (LLM) tuning

Role: Senior AI Engineer

Work Location: TechGenzi Coimbatore Office (ODC for Tiramai.ai)

Employment Type: Full-time

Experience: 2–5 years (Full-stack development with AI exposure)


About the Role & Work Location

The selected candidate will be employed by Tiramai.ai and will work exclusively on Tiramai.ai projects. The role will be based out of TechGenzi’s Coimbatore office, which functions as an Offshore Development Center (ODC) supporting Tiramai.ai’s product and engineering initiatives.

Primary Focus

As an AI Engineer at our enterprise SaaS and AI-native organization, you will play a pivotal role in building secure, scalable, and intelligent digital solutions. This role combines full-stack development expertise with applied AI skills to create next-generation platforms that empower enterprises to modernize and act smarter with AI. You will work on AI-driven features, APIs, and cloud-native applications that are production-ready, compliance-conscious, and aligned with our mission of delivering responsible AI innovation.



Key Responsibilities

  • Design, develop, and maintain full-stack applications using Python (backend) and React/Angular (frontend).
  • Build and integrate AI-driven modules, leveraging GenAI, ML models, and AI-native tools into enterprise-grade SaaS products.
  • Develop scalable REST APIs and microservices with security, compliance, and performance in mind.
  • Collaborate with architects, product managers, and cross-functional teams to translate requirements into production-ready features.
  • Ensure adherence to secure coding standards, data privacy regulations, and human-in-the-loop AI principles.
  • Participate in code reviews, system design discussions, and continuous integration/continuous deployment (CI/CD) practices.
  • Contribute to reusable libraries, frameworks, and best practices to accelerate AI platform development.


Skills Required

  • Strong proficiency in Python for backend development.
  • Frontend expertise in React.js or Angular with 2+ years of experience.
  • Hands-on experience in full SDLC development (design, build, test, deploy, maintain).
  • Familiarity with AI/ML frameworks (e.g., TensorFlow, PyTorch) or GenAI tools (LangChain, vector DBs, OpenAI APIs).
  • Knowledge of cloud-native development (AWS/Azure/GCP), Docker, Kubernetes, and CI/CD pipelines.
  • Strong understanding of REST APIs, microservices, and enterprise-grade security standards.
  • Ability to work collaboratively in fast-paced, cross-functional teams with strong problem-solving and analytical skills.
  • Exposure to responsible AI principles (explainability, bias mitigation, compliance) is a plus.


Growth Path

  • AI Engineer (2–4 years): focus on full-stack + AI integration, delivering production-ready features.
  • Senior AI Engineer (4–6 years): lead modules, mentor juniors, and drive AI feature development at scale.
  • Lead AI Engineer (6–8 years): own solution architecture for AI features, ensure security/compliance, and collaborate closely with product/tech leaders.
  • AI Architect / Engineering Manager (8+ years): shape AI platform strategy, guide large-scale deployments, and influence the product/technology roadmap.
Bengaluru (Bangalore) | 5 - 10 yrs | ₹25L - ₹50L / yr
Skills: NodeJS (Node.js), React.js, Python, Java, Data engineering

Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)

Experience : 5 to 10 Years

Location : Bengaluru, India

Employment Type : Full-Time | Onsite


Role Overview :

We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.

In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.


Mandatory Skills :

Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).


Key Responsibilities :

  • Architect, design, and develop scalable full-stack applications for data and AI-driven products.
  • Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
  • Deploy, integrate, and scale ML/AI models in production environments.
  • Drive system design, architecture discussions, and API/interface standards.
  • Ensure engineering best practices across code quality, testing, performance, and security.
  • Mentor and guide junior developers through reviews and technical decision-making.
  • Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
  • Monitor, diagnose, and optimize performance issues across the application stack.
  • Maintain comprehensive technical documentation for scalability and knowledge-sharing.

Required Skills & Experience :

  • Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
  • Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
  • Full Stack Proficiency : Front-end (React / Angular / Vue.js); Back-end (Node.js / Python / Java)
  • Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
  • AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
  • Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
  • Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
  • Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).

Soft Skills :

  • Excellent communication and cross-functional collaboration skills.
  • Strong analytical mindset with structured problem-solving ability.
  • Self-driven with ownership mentality and adaptability in fast-paced environments.

Preferred Qualifications (Bonus) :

  • Experience deploying distributed, large-scale ML or data-driven platforms.
  • Understanding of data governance, privacy, and security compliance.
  • Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
  • Experience working in Agile environments (Scrum/Kanban).
  • Active open-source contributions or a strong GitHub technical portfolio.
Newpage Solutions
Posted by Reshika Mendiratta
Bengaluru (Bangalore) | 7+ yrs | Up to ₹45L / yr (varies)
Skills: Python, Large Language Models (LLM), Machine Learning (ML), Artificial Intelligence (AI), Generative AI

Lead AI Engineer

Location: Bengaluru, Hybrid | Type: Full-time


About Newpage Solutions

Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organisations, including pharmaceutical, biotech, and healthcare leaders, to build transformative AI and data-driven technologies addressing real-world health challenges.

From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.

We are proud to be a ‘Great Place to Work®’ certified company for the last three consecutive years. We also hold a top Glassdoor rating and are named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.

As an organisation, we foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.


Your Mission

We’re seeking a highly experienced, technically exceptional Lead AI Engineer to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation, from model selection and orchestration design to scalable backend implementation, all while collaborating with cross-functional teams to transform AI research into production-ready solutions.

This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.


What You’ll Do

Develop AI Applications & Agentic Systems

  • Architect, build, and optimise production-grade Generative AI and agentic applications using frameworks such as LangChain, LangGraph, LlamaIndex, Semantic Kernel, n8n, Pydantic AI or custom orchestration layers integrating with LLMs such as GPT, Claude, Gemini as well as self-hosted LLMs along with MCP integrations.
  • Implement Retrieval-Augmented Generation (RAG) techniques leveraging vector databases (Pinecone, ChromaDB, Weaviate, pgvector, etc.) and search engines such as ElasticSearch / Solr, using both TF-IDF/BM25-based full-text search and similarity search techniques.
  • Implement guardrails, observability, fine-tune and train models for industry or domain-specific use cases.
  • Build multi-modal workflows using text, image, voice, and video.
  • Design robust prompt & context engineering frameworks to improve accuracy, repeatability, quality, cost, and latency.
  • Build supporting microservices and modular backends using Python, JavaScript, or Java aligned with domain-driven design, SOLID principles, OOP, and clean architecture, using various databases including relational, document, Key-Value, Graph, and event-driven systems using Kafka / MSK, SQS, etc.
  • Deploy cloud-native applications in hyper-scalers such as AWS / GCP / Azure using containerisation and orchestration with Docker / Kubernetes or serverless architecture.
  • Apply industry best engineering practices: TDD, well-structured and clean code with linting, domain-driven design, security-first design (secrets management, rotation, SAST, DAST), comprehensive observability (structured logging, metrics, tracing), containerisation & orchestration (Docker, Kubernetes), automated CI/CD pipelines (GitHub Actions, Jenkins).
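The RAG responsibilities above can be illustrated with a minimal retrieve-then-prompt loop. This is a sketch only: the toy in-memory `VectorStore` and the hash-seeded stub `embed()` stand in for a real embedding model and a managed vector database such as Pinecone or pgvector, and the helper names are illustrative, not from any framework.

```python
import numpy as np

# Stub embedding: hash-seeded random unit vectors standing in for a real model.
def embed(text: str, dim: int = 8) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class VectorStore:
    """Toy in-memory store standing in for Pinecone / ChromaDB / pgvector."""
    def __init__(self):
        self.docs, self.vecs = [], []

    def add(self, text: str):
        self.docs.append(text)
        self.vecs.append(embed(text))

    def search(self, query: str, k: int = 2):
        q = embed(query)
        sims = np.array(self.vecs) @ q          # cosine similarity (unit vectors)
        top = np.argsort(sims)[::-1][:k]        # k most similar documents
        return [self.docs[i] for i in top]

def answer(query: str, store: VectorStore) -> str:
    # Retrieval-augmented prompt: retrieved chunks become the LLM's context.
    context = "\n".join(store.search(query))
    return f"CONTEXT:\n{context}\nQUESTION: {query}"

store = VectorStore()
for doc in ["Roaming analytics overview", "Refund policy details", "GPU cluster runbook"]:
    store.add(doc)
prompt = answer("How do refunds work?", store)
```

In a production system the prompt would then be sent to the LLM; here the point is only the retrieve-augment-generate shape of the pipeline.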

AI-Assisted Development, Context Engineering & Innovation

  • Use AI-assisted development tools such as Claude Code, GitHub Copilot, Codex, Roo Code, Cursor to accelerate development while maintaining code quality and maintainability.
  • Utilise coding assistant tools with native instructions, templates, guides, workflows, sub-agents, and more to create developer workflows that improve development velocity, standardisation, and reliability across AI teams.
  • Ensure industry best practices to develop well-structured code that is testable, maintainable, performant, scalable, and secure.
  • Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
  • Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
  • Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.

What You Bring

  • 7–12 years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
  • Experience with cloud-native deployments and services in AWS / GCP / Azure, with the ability to architect distributed systems.
  • A ‘no-compromise’ attitude with engineering best practices such as clean code, TDD, containerisation, security, CI/CD, scalability, performance, and cost optimisation.
  • Active user of AI-assisted development tools (Claude Code, GitHub Copilot, Cursor) with demonstrable experience using structured workflows and sub-agents.
  • A deep understanding of LLMs, context engineering approaches, and best practices, with the ability to optimise accuracy, latency, and cost.
  • Python or JavaScript experience with strong grasp of OOP, SOLID principles, 12-factor application development, and scalable microservice architecture.
  • Proven track record developing and deploying GenAI/LLM-based systems in production.
  • Advanced understanding of context engineering, prompt construction, optimisation, and evaluation techniques.
  • End-to-end implementation experience using vector databases and retrieval pipelines.
  • Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
  • Obsession with clean code, system scalability, and performance optimisation.
  • Ability to balance rapid prototyping with long-term maintainability.
  • Excel at working independently while collaborating effectively across teams.
  • Stay ahead of the curve on new AI models, frameworks, and best practices.
  • Have a founder’s mindset and love solving ambiguous, high-impact technical challenges.
  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, or a related technical discipline.

Bonus Skills / Experience

  • Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
  • Experience building streaming + batch data ingestion and transformation pipelines (Spark / Airflow / Beam).
  • Mobile and front-end web application development experience.

What We Offer

  • A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
  • Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
  • Balance that lasts – We respect your time and support a healthy integration of work and life.
  • Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
  • Meaningful rewards – Competitive compensation that recognises both contribution and potential.
Read more
ICloudEMS
Remote only
4 - 6 yrs
₹6L - ₹10L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Deep Learning
Image Processing
Cloud Computing

We are looking for an AI Engineer (Computer Vision) to design and deploy intelligent video analytics solutions using CCTV feeds. The role focuses on analyzing real-time and recorded video to extract insights such as attention levels, engagement, movement patterns, posture, and overall group behavior. You will work closely with data scientists, backend teams, and product managers to build scalable, privacy-aware AI systems.

Key Responsibilities

  • Develop and deploy computer vision models for CCTV-based video analytics
  • Analyze gaze, posture, facial expressions, movement, and crowd behavior
  • Build real-time and batch video processing pipelines
  • Train, fine-tune, and optimize deep learning models for production
  • Convert visual signals into actionable insights & dashboards
  • Ensure privacy, security, and ethical AI compliance
  • Improve model accuracy, latency, and scalability
  • Collaborate with engineering teams for end-to-end deployment

Required Skills

  • Strong experience in Computer Vision & Deep Learning
  • Proficiency in Python
  • Hands-on experience with OpenCV, TensorFlow, PyTorch
  • Knowledge of CNNs, object detection, tracking, pose estimation
  • Experience with video analytics & CCTV data
  • Understanding of model optimization and deployment

Good to Have

  • Experience with real-time video streaming (RTSP, CCTV feeds)
  • Familiarity with edge AI or GPU optimization
  • Exposure to education analytics or surveillance systems
  • Knowledge of cloud deployment (AWS/GCP/Azure)


Read more
A global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
12 - 14 yrs
₹28L - ₹32L / yr
Data engineering
Machine Learning (ML)
Generative AI
Architecture
Python
+1 more

Skills: GenAI, machine learning models, AWS/Azure, Redshift, Python, Apache Airflow, DevOps. Minimum 4-5 years of experience as an Architect; should come from a data engineering background.

• 8+ years of experience in data engineering, data science, or architecture roles.

• Experience designing enterprise-grade AI platforms.

• Certification in major cloud platforms (AWS/Azure/GCP).

• Experience with governance tooling (Collibra, Alation) and lineage systems

• Strong hands-on background in data engineering, analytics, or data science.

• Expertise in building data platforms using:

o Cloud: AWS (Glue, S3, Redshift), Azure (Data Factory, Synapse), GCP (BigQuery,

Dataflow).

o Compute: Spark, Databricks, Flink.

o Data modelling: dimensional, relational, NoSQL, graph.

• Proficiency with Python, SQL, and data pipeline orchestration tools.

• Understanding of ML frameworks and tools: TensorFlow, PyTorch, Scikit-learn, MLflow, etc.

• Experience implementing MLOps, model deployment, monitoring, logging, and versioning.


Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
Apache Spark
Apache Airflow
Machine Learning (ML)
Pipeline management
+13 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Have candidate filled the google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
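The pipeline work above boils down to declaring task dependencies and executing them in order. A minimal sketch of that structure, with hypothetical task names; a real Airflow/MWAA DAG would declare the same ordering with operators and `>>` dependencies, while here the stdlib `graphlib` resolves a valid execution order.

```python
from graphlib import TopologicalSorter

# Hypothetical ML retraining pipeline: task -> set of upstream dependencies.
PIPELINE = {
    "extract": set(),
    "validate_data": {"extract"},
    "train": {"validate_data"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

def run_pipeline(pipeline):
    executed = []
    for task in TopologicalSorter(pipeline).static_order():
        executed.append(task)  # a real task would launch a Spark job, training run, etc.
    return executed

order = run_pipeline(PIPELINE)
```

The same dependency graph is what an Airflow scheduler walks when deciding which tasks are runnable, which is why DAG-level design (retries, idempotent tasks, backfills) matters as much as the task code itself.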

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Read more
Product company

Product company

Agency job
via Trinity consulting by Priyanka G
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹4L - ₹8L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Retrieval Augmented Generation (RAG)
Docker
Kubernetes
+1 more

Experience: 3+ years


Responsibilities:


  • Build, train, and fine-tune ML models
  • Develop features to improve model accuracy and outcomes
  • Deploy models into production using Docker, Kubernetes, and cloud services
  • Proficiency in Python and MLOps; expertise in data processing and large-scale datasets
  • Hands-on experience with cloud AI/ML services
  • Exposure to RAG architecture
Read more
TrumetricAI
Yashika Tiwari
Posted by Yashika Tiwari
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹30L / yr
Natural Language Processing (NLP)
Deep Learning
Artificial Intelligence (AI)
Generative AI
Machine Learning (ML)
+1 more

Senior Machine Learning Engineer

About the Role

We are looking for a Senior Machine Learning Engineer who can take business problems, design appropriate machine learning solutions, and make them work reliably in production environments.

This role is ideal for someone who not only understands machine learning models, but also knows when and how ML should be applied, what trade-offs to make, and how to take ownership from problem understanding to production deployment.

Beyond technical skills, we need someone who can lead a team of ML Engineers, design end-to-end ML solutions, and clearly communicate decisions and outcomes to both engineering teams and business stakeholders. If you enjoy solving real problems, making pragmatic decisions, and owning outcomes from idea to deployment, this role is for you.

What You’ll Be Doing

Building and Deploying ML Models

  • Design, build, evaluate, deploy, and monitor machine learning models for real production use cases.
  • Take ownership of how a problem is approached, including deciding whether ML is the right solution and what type of ML approach fits the problem.
  • Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
  • Work with data engineers to design and validate data pipelines that feed ML systems.
  • Optimize solutions for accuracy, performance, cost, and maintainability, not just model metrics.

Leading and Architecting ML Solutions

  • Lead a team of ML Engineers, providing technical direction, mentorship, and review of ML approaches.
  • Architect ML solutions that integrate seamlessly with business applications and existing systems.
  • Ensure models and solutions are explainable, auditable, and aligned with business goals.
  • Drive best practices in MLOps, including CI/CD, model monitoring, retraining strategies, and operational readiness.
  • Set clear standards for how ML problems are framed, solved, and delivered within the team.

Collaborating and Communicating

  • Work closely with business stakeholders to understand problem statements, constraints, and success criteria.
  • Translate business problems into clear ML objectives, inputs, and expected outputs.
  • Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML solutions into production systems.
  • Present ML decisions, trade-offs, and outcomes to non-technical stakeholders in a simple and understandable way.

What We’re Looking For

Machine Learning Expertise

  • Strong understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
  • Experience choosing appropriate modeling approaches based on the problem, available data, and business constraints.
  • Experience training, fine-tuning, and deploying ML and LLM models for real-world use cases.
  • Proficiency in common ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.

Production and Cloud Deployment

  • Hands-on experience deploying and running ML systems in production environments on AWS, GCP, or Azure.
  • Good understanding of MLOps practices, including CI/CD for ML models, monitoring, and retraining workflows.
  • Experience with Docker, Kubernetes, or serverless architectures is a plus.
  • Ability to think beyond deployment and consider operational reliability and long-term maintenance.

Data Handling

  • Strong programming skills in Python.
  • Proficiency in SQL and working with large-scale datasets.
  • Ability to reason about data quality, data limitations, and how they impact ML outcomes.
  • Familiarity with distributed computing frameworks like Spark or Dask is a plus.

Leadership and Communication

  • Ability to lead and mentor ML Engineers and work effectively across teams.
  • Strong communication skills to explain ML concepts, decisions, and limitations to business teams.
  • Comfortable taking ownership and making decisions in ambiguous problem spaces.
  • Passion for staying updated with advancements in ML and AI, with a practical mindset toward adoption.

Experience Needed

  • 6+ years of experience in machine learning engineering or related roles.
  • Proven experience designing, selecting, and deploying ML solutions used in production.
  • Experience managing ML systems after deployment, including monitoring and iteration.
  • Proven track record of working in cross-functional teams and leading ML initiatives.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Shivangi Bhattacharyya
Posted by Shivangi Bhattacharyya
Bengaluru (Bangalore)
6 - 10 yrs
Best in industry
Python
Generative AI
Machine Learning (ML)
SQL
Business Intelligence (BI)
+1 more

Job Description: 


Exp Range - [6y to 10y]


Qualifications:


  • Minimum Bachelor's degree in Engineering, Computer Applications, or AI/Data Science
  • Experience working in product companies/startups developing, validating, and productionizing AI models in recent projects within the last 3 years
  • Prior experience with Python, NumPy, scikit-learn, Pandas, ETL/SQL, and BI tools in previous roles preferred


Require Skills: 

  • Must Have – Direct hands-on experience working in Python for scripting, automation, analysis, and orchestration
  • Must Have – Experience working with ML libraries such as Scikit-learn, TensorFlow, PyTorch, Pandas, NumPy, etc.
  • Must Have – Experience working with models such as Random Forest, k-means clustering, BERT, etc.
  • Should Have – Exposure to querying warehouses and APIs
  • Should Have – Experience writing moderate to complex SQL queries
  • Should Have – Experience analyzing and presenting data with BI tools or Excel
  • Must Have – Very strong communication skills to work with technical and non-technical stakeholders in a global environment

 

Roles and Responsibilities:

  • Work with Business stakeholders, Business Analysts, Data Analysts to understand various data flows and usage.
  • Analyse and present insights about the data and processes to Business Stakeholders
  • Validate and test appropriate AI/ML models based on the prioritization and insights developed while working with the Business Stakeholders
  • Develop and deploy customized models on Production data sets to generate analytical insights and predictions
  • Participate in cross-functional team meetings and provide estimates of work as well as progress in assigned tasks.
  • Highlight risks and challenges to the relevant stakeholders so that work is delivered in a timely manner.
  • Share knowledge and best practices with broader teams to make everyone aware and more productive.


Read more
VMax eSolutions India Pvt Ltd
Hyderabad
10 - 15 yrs
₹35L - ₹45L / yr
Generative AI
PEFT (Parameter-Efficient Fine-Tuning)
Voice processing
Artificial Intelligence (AI)
GPU computing
+3 more

We are seeking an experienced AI Architect to design, build, and scale production-ready AI voice conversation agents deployed locally (on-prem / edge / private cloud) and optimized for GPU-accelerated, high-throughput environments.

You will own the end-to-end architecture of real-time voice systems, including speech recognition, LLM orchestration, dialog management, speech synthesis, and low-latency streaming pipelines—designed for reliability, scalability, and cost efficiency.

This role is highly hands-on and strategic, bridging research, engineering, and production infrastructure.


Key Responsibilities

Architecture & System Design

  • Design low-latency, real-time voice agent architectures for local/on-prem deployment
  • Define scalable architectures for ASR → LLM → TTS pipelines
  • Optimize systems for GPU utilization, concurrency, and throughput
  • Architect fault-tolerant, production-grade voice systems (HA, monitoring, recovery)

Voice & Conversational AI

  • Design and integrate:
  • Automatic Speech Recognition (ASR)
  • Natural Language Understanding / LLMs
  • Dialogue management & conversation state
  • Text-to-Speech (TTS)
  • Build streaming voice pipelines with sub-second response times
  • Enable multi-turn, interruptible, natural conversations
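The turn-taking and interruption requirements above can be sketched as a skeleton turn loop with barge-in handling. All components here are stubs with illustrative names, not real APIs; a production system would stream audio over WebRTC/telephony and cancel synthesis asynchronously rather than polling a queue.

```python
import queue

# Stub pipeline stages standing in for real ASR, LLM, and TTS services.
def asr(audio):
    return audio.upper()                 # stub transcription

def llm(text):
    return f"reply to: {text}"           # stub reasoning step

def tts(text):
    return list(text)                    # stub: one "audio chunk" per character

def run_turn(incoming):
    """Consume one utterance, speak the reply, stop early on barge-in."""
    utterance = incoming.get()
    reply = llm(asr(utterance))
    spoken, interrupted = [], False
    for chunk in tts(reply):
        if not incoming.empty():         # caller started talking: cut playback
            interrupted = True
            break
        spoken.append(chunk)
    return "".join(spoken), interrupted

q = queue.Queue()
q.put("hello")
out, cut = run_turn(q)                   # uninterrupted turn
```

The essential property is that synthesis is chunked and checks for new caller audio between chunks; that is what makes sub-second barge-in possible regardless of which ASR/TTS vendors sit behind the stubs.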

Model & Inference Engineering

  • Deploy and optimize local LLMs and speech models (quantization, batching, caching)
  • Select and fine-tune open-source models for voice use cases
  • Implement efficient inference using TensorRT, ONNX, CUDA, vLLM, Triton, or similar

Infrastructure & Production

  • Design GPU-based inference clusters (bare metal or Kubernetes)
  • Implement autoscaling, load balancing, and GPU scheduling
  • Establish monitoring, logging, and performance metrics for voice agents
  • Ensure security, privacy, and data isolation for local deployments

Leadership & Collaboration

  • Set architectural standards and best practices
  • Mentor ML and platform engineers
  • Collaborate with product, infra, and applied research teams
  • Drive decisions from prototype → production → scale

Required Qualifications

Technical Skills

  • 7+ years in software / ML systems engineering
  • 3+ years designing production AI systems
  • Strong experience with real-time voice or conversational AI systems
  • Deep understanding of LLMs, ASR, and TTS pipelines
  • Hands-on experience with GPU inference optimization
  • Strong Python and/or C++ background
  • Experience with Linux, Docker, Kubernetes

AI & ML Expertise

  • Experience deploying open-source LLMs locally
  • Knowledge of model optimization:
  • Quantization
  • Batching
  • Streaming inference
  • Familiarity with voice models (e.g., Whisper-like ASR, neural TTS)

Systems & Scaling

  • Experience with high-QPS, low-latency systems
  • Knowledge of distributed systems and microservices
  • Understanding of edge or on-prem AI deployments

Preferred Qualifications

  • Experience building AI voice agents or call automation systems
  • Background in speech processing or audio ML
  • Experience with telephony, WebRTC, SIP, or streaming audio
  • Familiarity with Triton Inference Server / vLLM
  • Prior experience as Tech Lead or Principal Engineer

What We Offer

  • Opportunity to architect state-of-the-art AI voice systems
  • Work on real-world, high-scale production deployments
  • Competitive compensation and equity (if applicable)
  • High ownership and technical influence
  • Collaboration with top-tier AI and infrastructure talent
Read more
Hyderabad
6 - 10 yrs
₹30L - ₹35L / yr
Artificial Intelligence (AI)
AI Agents
Voice recognition
Generative AI
Machine Learning (ML)

Company Description


VMax e-Solutions India Private Limited, based in Hyderabad, is a dynamic organization specializing in Open Source ERP Product Development and Mobility Solutions. As an ISO 9001:2015 and ISO 27001:2013 certified company, VMax is dedicated to delivering tailor-made and scalable products, with a strong focus on e-Governance projects across multiple states in India. The company's innovative technologies aim to solve real-life problems and enhance the daily services accessed by millions of citizens. With a culture of continuous learning and growth, VMax provides its team members opportunities to develop expertise, take ownership, and grow their careers through challenging and impactful work.


About the Role


We’re hiring a Senior Data Scientist with deep real-time voice AI experience and strong backend engineering skills.


You’ll own and scale our end-to-end voice agent pipeline that powers AI SDRs, customer support agents, and internal automation agents on calls. This is a hands-on, highly technical role where you’ll design and optimize low-latency, high-reliability voice systems.


You’ll work closely with our founders, product, and platform teams, with significant ownership over architecture and benchmarks.


What You’ll Do


1. Own the voice stack end-to-end – from telephony / WebRTC entrypoints to STT, turn-taking, LLM reasoning, and TTS back to the caller.


2. Design for real-time – architect and optimize streaming pipelines for sub-second latency, barge-in, interruptions, and graceful recovery on bad networks.


3. Integrate and tune models – evaluate, select, and integrate STT/TTS/LLM/VAD providers (and self-hosted models) for different use-cases, balancing quality, speed, and cost.


4. Build orchestration & tooling – implement agent orchestration logic, evaluation frameworks, call simulators, and dashboards for latency, quality, and reliability.


5. Harden for production – ensure high availability, observability, and robust fault-tolerance for thousands of concurrent calls in customer VPCs.


6. Shape the voice roadmap – influence how voice fits into our broader Agentic OS vision (simulation, analytics, multi-agent collaboration, etc.).


You’re a Great Fit If You Have


1. 6+ years of software engineering experience (backend or full-stack) in production systems.


2. Strong experience building real-time voice agents or similar systems using:


  • STT / ASR (e.g., Whisper, Deepgram, AssemblyAI, AWS Transcribe, GCP Speech)
  • TTS (e.g., ElevenLabs, PlayHT, AWS Polly, Azure Neural TTS)
  • VAD / turn-taking and streaming audio pipelines
  • LLMs (e.g., OpenAI, Anthropic, Gemini, local models)


3. Proven track record designing and operating low-latency, high-throughput streaming systems (WebRTC, gRPC, websockets, Kafka, etc.).


4. Hands-on experience integrating ML models into live, user-facing applications with real-time inference & monitoring.


5. Solid backend skills with Python and TypeScript/Node.js; strong fundamentals in distributed systems, concurrency, and performance optimization.


6. Experience with cloud infrastructure – especially AWS (EKS, ECS, Lambda, SQS/Kafka, API Gateway, load balancers).


7. Comfortable working in Kubernetes / Docker environments, including logging, metrics, and alerting.


8. Startup DNA – at least 2 years in an early or mid-stage startup where you shipped fast, owned outcomes, and worked close to the customer.


Nice to Have


1. Experience self-hosting AI models (ASR / TTS / LLMs) and optimizing them for latency, cost, and reliability.


2. Telephony integration experience (e.g. Twilio, Vonage, Aircall, SignalWire, or similar).


3. Experience with evaluation frameworks for conversational agents (call quality scoring, hallucination checks, compliance rules, etc.).


4. Background in speech processing, signal processing, or dialog systems.


5. Experience deploying into enterprise VPC / on-prem environments and working with security/compliance constraints.

Read more
Capital Squared
Hiring Team
Posted by Hiring Team
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
Python
Scikit-Learn
pandas
PostgreSQL
Data engineering
+3 more

Full-Stack Machine Learning Engineer

Role: Full-Time, Long-Term
Required: Python
Preferred: C++


OVERVIEW

We are seeking a versatile ML engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build sophisticated production systems and grow with a small, focused team. You will work across the entire stack—from data ingestion and feature engineering through model training, validation, and deployment.


The ideal candidate combines strong software engineering fundamentals with deep ML expertise, particularly in time series forecasting and quantitative applications. You should be comfortable operating independently, making architectural decisions, and owning systems end-to-end.


CORE TECHNICAL REQUIREMENTS

Python (Required): Professional-level proficiency writing clean, production-grade code—not just notebooks. Deep understanding of NumPy, Pandas, and their performance characteristics. You know when to use vectorized operations, understand memory management for large datasets, and can profile and optimize bottlenecks. Experience with async programming and multiprocessing is valuable.
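As a concrete instance of the vectorization point, here is the same z-score standardization written as a per-element loop and as a NumPy expression. The helper names are illustrative, not from any library; on large arrays the vectorized form avoids per-element interpreter overhead and is typically orders of magnitude faster.

```python
import numpy as np

def zscore_loop(a):
    # Pure-Python loop: one interpreter round-trip per element.
    m = sum(a) / len(a)
    s = (sum((v - m) ** 2 for v in a) / len(a)) ** 0.5
    return [(v - m) / s for v in a]

def zscore_vec(a):
    # Vectorized: whole-array operations executed in compiled code.
    return (a - a.mean()) / a.std()

x = np.arange(10_000, dtype=np.float64)
z = zscore_vec(x)
```

Profiling both variants (e.g. with `timeit`) on a realistic array size is the quickest way to build intuition for when a loop is a bottleneck worth rewriting.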


Machine Learning (Required): Hands-on experience building and deploying ML systems in production. This goes beyond training models—you understand the full lifecycle: data validation, feature engineering, model selection, hyperparameter optimization, validation strategies, monitoring, and maintenance.


Specific experience we value: gradient boosting frameworks (LightGBM, XGBoost, CatBoost), time series forecasting, probabilistic prediction and uncertainty quantification, feature selection and dimensionality reduction, cross-validation strategies for non-IID data, model calibration.


You should understand overfitting deeply—not just as a concept but as something you actively defend against through proper validation, regularization, and architectural choices.

Data Pipelines (Required): Design and implement robust pipelines handling real-world messiness: missing data, late arrivals, schema changes, upstream failures. You understand idempotency, exactly-once semantics, and backfill strategies. Experience with workflow orchestration (Airflow, Prefect, Dagster) expected. Comfortable with ETL/ELT patterns, incremental vs full recomputation, data quality monitoring, database design and query optimization (PostgreSQL preferred), time series data at scale.
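One common pattern behind the idempotency requirement is a keyed upsert, so replaying a batch (on retry or backfill) cannot duplicate rows. A sketch using sqlite3 as a stand-in for PostgreSQL; the table and column names are hypothetical, and the `ON CONFLICT ... DO UPDATE` syntax mirrors the Postgres form (it needs SQLite >= 3.24).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (ts TEXT PRIMARY KEY, price REAL)")

def load_batch(rows):
    # Idempotent load: the primary key makes replays overwrite, not duplicate.
    conn.executemany(
        "INSERT INTO prices (ts, price) VALUES (?, ?) "
        "ON CONFLICT(ts) DO UPDATE SET price = excluded.price",
        rows,
    )
    conn.commit()

batch = [("2024-01-01T00:00", 101.5), ("2024-01-01T00:01", 101.7)]
load_batch(batch)
load_batch(batch)  # replay (retry / backfill): row count unchanged
count = conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0]
```

Because the load step is a pure function of its input batch, late arrivals and backfills reduce to "run the batch again", which is exactly the property an orchestrator like Airflow relies on for safe retries.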


C++ (Preferred): Experience valuable for performance-critical components. Writing efficient C++ and interfacing with Python (pybind11, Cython) is a significant advantage.


HIGHLY DESIRABLE: MULTI-AGENT ORCHESTRATION

We are building systems leveraging LLM-based automation. Experience with multi-agent frameworks highly desirable: LangChain, LangGraph, or similar agent frameworks; designing reliable AI pipelines with error handling and fallbacks; prompt engineering and output parsing; managing context and state across agent interactions. You do not need to be an expert, but genuine interest and hands-on experience will set you apart.


DOMAIN EXPERIENCE: FINANCIAL DATA AND CRYPTO

Preference for candidates with experience in quantitative finance, algorithmic trading, or fintech; cryptocurrency markets and their unique characteristics; financial time series data and forecasting systems; market microstructure, volatility, and regime dynamics. This helps you understand why reproducibility is non-negotiable, why validation must account for temporal structure, and why production reliability cannot be an afterthought.


ENGINEERING STANDARDS

Code Quality: Readable, maintainable code others can modify. Proper version control (meaningful commits, branches, code review). Testing where appropriate. Documentation: docstrings, READMEs, decision records.


Production Mindset: Think about failure modes before they happen. Build in observability: logging, metrics, alerting. Design for reproducibility—same inputs produce same outputs.

Systems Thinking: Consider component interactions, not just isolated behavior. Understand tradeoffs: speed vs accuracy, flexibility vs simplicity. Zoom between architecture and implementation.


WHAT WE ARE LOOKING FOR

Self-Direction: Given a problem and context, you break it down, identify the path forward, and execute. You ask questions when genuinely blocked, not when you could find the answer yourself.


Long-Term Orientation: You think in years, not months. You make decisions considering future maintainability.


Intellectual Honesty: You acknowledge uncertainty and distinguish between what you know versus guess. When something fails, you dig into why.


Communication: You explain complex concepts clearly and document your reasoning.


EDUCATION

University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Physics, Engineering. Equivalent demonstrated expertise through work also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of a production ML system you built, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Mobileum

at Mobileum

1 recruiter
Eman Khan
Posted by Eman Khan
Bengaluru (Bangalore), Mumbai, Gurugram
7yrs+
₹30L - ₹62L / yr
Retrieval Augmented Generation (RAG)
Large Language Models (LLM)
Telecom
Artificial Intelligence (AI)
Machine Learning (ML)
+2 more

About Us

Mobileum is a leading provider of Telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence.

More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs.


Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore and the UK, with a global headcount of over 1,800.


Join Mobileum Team

At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!

Role: GenAI/LLM Engineer – Domain-Specific AI Solutions (Telecom)


About the Job

We are seeking a highly skilled GenAI/LLM Engineer to design, fine-tune, and operationalize Large Language Models (LLMs) for telecom business applications. This role will be instrumental in building domain-specific GenAI solutions, including the development of domain-specific LLMs, to transform telecom operational processes, customer interactions, and internal decision-making workflows.


Roles & Responsibility:

  • Build domain-specific LLMs by curating domain-relevant datasets and training/fine-tuning LLMs tailored for telecom use cases.
  • Fine-tune pre-trained LLMs (e.g., GPT, Llama, Mistral) using telecom-specific datasets to improve task accuracy and relevance.
  • Design and implement prompt engineering frameworks, optimize prompt construction and context strategies for telco-specific queries and processes.
  • Develop Retrieval-Augmented Generation (RAG) pipelines integrated with vector databases (e.g., FAISS, Pinecone) to enhance LLM performance on internal knowledge.
  • Build multi-agent LLM pipelines using orchestration tools (LangChain, LlamaIndex) to support complex telecom workflows.
  • Collaborate cross-functionally with data engineers, product teams, and domain experts to translate telecom business logic into GenAI workflows.
  • Conduct systematic model evaluation focused on minimizing hallucinations, improving domain-specific accuracy, and tracking performance improvements on business KPIs.
  • Contribute to the development of internal reusable GenAI modules, coding standards, and best practices documentation.
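To make the RAG responsibility above concrete, here is a minimal retrieval-and-grounding sketch. It is illustrative only: the toy documents are invented, and hand-rolled bag-of-words vectors stand in for a real embedding model and a vector database such as FAISS or Pinecone.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Rank documents by similarity to the query; a vector DB does this at scale.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list) -> str:
    # Ground the LLM on retrieved context to reduce hallucinations.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Roaming charges apply when a subscriber uses a visited network.",
    "The billing cycle closes on the last day of each month.",
    "Network logs capture signalling events for fraud detection.",
]
print(build_prompt("roaming charges abroad", docs))
```

A production pipeline replaces `embed` with a trained encoder and `retrieve` with an index lookup, but the retrieve-then-ground shape is the same.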


Desired Profile

  • Familiarity with multi-modal LLMs (text + tabular/time-series).
  • Experience with OpenAI function calling, LangGraph, or agent-based orchestration.
  • Exposure to telecom datasets (e.g., call records, customer tickets, network logs).
  • Experience with low-latency inference optimization (e.g., quantization, distillation).


Technical skills

  • Hands-on experience in fine-tuning transformer models, prompt engineering, and RAG architecture design.
  • Experience delivering production-ready AI solutions in enterprise environments; telecom exposure is a plus.
  • Advanced knowledge of transformer architectures, fine-tuning techniques (LoRA, PEFT, adapters), and transfer learning.
  • Proficiency in Python, with significant experience using PyTorch, Hugging Face Transformers, and related NLP libraries.
  • Practical expertise in prompt engineering, RAG pipelines, and LLM orchestration tools (LangChain, LlamaIndex).
  • Ability to build domain-adapted LLMs, from data preparation to final model deployment.


Work Experience

7+ years of professional experience in AI/ML, with at least 2+ years of practical exposure to LLMs or GenAI deployments.


Educational Qualification

  • Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, or a related field.
  • Ph.D. preferred for foundational model work and advanced research focus.
Read more
Matchmaking platform

Matchmaking platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in a minimum of 2+ use cases out of: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases you have hands on experience?
  • Are you ok for Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs
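The drift monitoring mentioned above is commonly built on a simple statistic such as the Population Stability Index (PSI). A stdlib-only sketch follows; the bin count, thresholds, and simulated data are illustrative assumptions rather than a prescribed setup.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.
    Common rule of thumb (assumed here): < 0.1 stable, 0.1-0.25 moderate, > 0.25 drifted."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # which bin x falls into
            counts[idx] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]   # training-time feature
same = [random.gauss(0, 1) for _ in range(5000)]       # live data, no drift
shifted = [random.gauss(1.0, 1) for _ in range(5000)]  # simulated feature drift

print(f"PSI same-distribution: {psi(baseline, same):.3f}")
print(f"PSI shifted: {psi(baseline, shifted):.3f}")
```

In practice the same computation runs per feature on a schedule, with alerts wired to the threshold.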


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.




Read more
Stupa Sports Analytics

at Stupa Sports Analytics

1 recruiter
Divya Pahil
Posted by Divya Pahil
Gurugram
2 - 7 yrs
₹10L - ₹40L / yr
Artificial Intelligence (AI)
Machine Learning (ML)

🎯 About Us

Stupa builds cutting-edge AI for real-time sports intelligence: automated commentary, player tracking, non-contact biomechanics, ball trajectory, LED graphics, and broadcast-grade stats. Your models will be seen live by millions across global events.


🌍 Global Travel

Work that literally travels the world. You’ll deploy systems at international tournaments across Asia, Europe, and the Middle East, working inside world-class stadiums, courts, and TV production rooms.


✨ What You’ll Build

  • AI Language Products: automated live commentary (LLM + ASR + OCR), real-time subtitles, AI storytelling.
  • Non-Contact Measurement (CV + Tracking + Pose Estimation): player velocity, footwork, acceleration, shot recognition, 2D/3D reconstruction, real-time edge inference.
  • End-to-End Streaming Pipelines: temporal segmentation, multi-modal fusion, low-latency edge + cloud deployment.


🧠 What You’ll Do

Train and optimise ML/CV/NLP models for live sports, build tracking & pose pipelines, create LLM/ASR-based commentary systems, deploy on edge/cloud, ship rapid POCs→production, manage datasets & accuracy, and collaborate with product, engineering, and broadcast teams.


🧩 Requirements

Core Skills:

  • Strong ML fundamentals (NLP/CV/multimodal)
  • PyTorch/TensorFlow, transformers, ASR or pose estimation
  • Data pipelines, optimisation, evaluation
  • Deployment (Docker, ONNX, TensorRT, FastAPI, K8s, edge GPU)
  • Strong Python engineering


Bonus: Sports analytics, LLM fine-tuning, low-latency optimisation, prior production ML systems.

🌟 Why Join Us

  • Your models go LIVE in global sports broadcasts
  • International travel for tournaments
  • High ownership, zero bureaucracy
  • Build India’s most advanced AI × Sports product
  • Cool, futuristic problems + freedom to innovate
  • Up to ₹40LPA for exceptional talent

🔥 You Belong Here If You…

Build what the world hasn’t seen • Want impact on live sports • Thrive in fast-paced ownership-driven environments.



Read more
Hashone Careers
Madhavan I
Posted by Madhavan I
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹30L / yr
Python
Machine Learning (ML)
Scikit-Learn
XGBoost
PyTorch
+1 more

Job Description: Applied Scientist

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.

About the Role

We are seeking a highly motivated Applied Scientist to join our Data Science team. This individual will play a key role in enhancing and scaling our existing forecasting and pricing systems and developing new capabilities that support our intelligent decision-making platform.

We are looking for team members who:

  • Are deeply curious and passionate about applying machine learning to real-world problems.
  • Demonstrate strong ownership and the ability to work independently.
  • Excel in both technical execution and collaborative teamwork.
  • Have a track record of shipping products in complex environments.

What You’ll Do

  • Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization.
  • Work with large-scale, noisy, and temporally complex datasets.
  • Collaborate cross-functionally with engineering and product teams to move models from research to production.
  • Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
  • Contribute to the development of an AI-first platform that redefines hospitality revenue management.

Required Qualifications

  • Master’s degree or PhD in Operations Research, Industrial/Systems Engineering, Computer Science, or Applied Mathematics.
  • 3–5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.


  • Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
  • Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
  • Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
  • Understanding of machine learning and deep learning foundations.
  • Ability to translate research into commercial solutions.
  • Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
  • Ability to work independently and manage projects end-to-end.

Preferred Experience

  • Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
  • Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
  • Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
  • Proven experience in collaborative product development environments, working closely with engineering and product teams.

Why LodgIQ?

  • Join a fast-growing, mission-driven company transforming the future of hospitality.
  • Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
  • Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
  • Competitive salary and performance bonuses.
  • For more information, visit https://www.lodgiq.com

Read more
Meraki Labs
Agency job
via ENTER by Rajkishor Mishra
Bengaluru (Bangalore)
8 - 12 yrs
₹60L - ₹70L / yr
Machine Learning (ML)
Generative AI
Python
Artificial Intelligence (AI)
Large Language Models (LLM) tuning
+9 more

Job Overview:


As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use-case development, and product creation, along with strong expertise in cloud-based architectures.


Key Responsibilities:


AI Tutor & Simulation Intelligence

  • Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
  • Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
  • Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
  • Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.


Platform & System Architecture

  • Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
  • Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
  • Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.


Reliability, Security & Analytics

  • Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
  • Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
  • Set up real-time learning analytics to measure comprehension and identify concept gaps.


Leadership & Collaboration

  • Mentor and elevate engineers across backend, ML, and front-end teams.
  • Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
  • Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.


Qualifications & Skills:


  • 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
  • Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
  • Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
  • Experience designing microservices and API ecosystems for high-concurrency platforms.
  • Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
  • Demonstrated ability to work with educational data, content pipelines, and real-time systems.


Bonus Skills (Nice to Have):

  • Experience with multi-modal AI models (text, image, audio, video).
  • Knowledge of AI safety, ethical AI, and explainability techniques.
  • Prior work in AI-powered automation tools or AI-driven SaaS products.
Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8 hours window between the 7:30 PM IST - 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for analysts who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 

******

Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8 hours window between the 7:30 PM IST - 4:30 AM IST

Read more
Daten  Wissen Pvt Ltd

at Daten Wissen Pvt Ltd

1 recruiter
Ashwini poojari
Posted by Ashwini poojari
Bhayander, Thane
0 - 0 yrs
₹4500 - ₹5000 / mo
Machine Learning (ML)
Deep Learning
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Python
+8 more

Artificial Intelligence Research Intern 


We are looking for a passionate and skilled AI Intern to join our dynamic team for a 6-month full-time internship. This is an excellent opportunity to work on cutting-edge technologies in Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing (NLP), contributing to real-world projects that create a tangible impact. 


Key Responsibilities: 

• Research, design, develop, and implement AI and Deep Learning algorithms. 

• Work on NLP systems and models for tasks such as text classification, sentiment analysis, and data extraction.

• Evaluate and optimize machine learning and deep learning models. 

• Collect, process, and analyze large-scale datasets. 

• Use advanced techniques for text representation and classification. 

• Write clean, efficient, and testable code for production-ready applications. 

• Perform web scraping and data extraction using Python (requests, BeautifulSoup, Selenium, APIs, etc.). 

• Collaborate with cross-functional teams and clearly communicate technical concepts to both technical and non-technical audiences. 


Required Skills and Experience: 

• Theoretical and practical knowledge of AI, ML, and DL concepts. 

• Good understanding of Python and libraries such as TensorFlow, PyTorch, Keras, Scikit-learn, NumPy, Pandas, SciPy, and Matplotlib, plus NLP tools like NLTK, spaCy, etc.

• Strong understanding of Neural Network Architectures (CNNs, RNNs, LSTMs). 

• Familiarity with data structures, data modeling, and software architecture. 

• Understanding of text representation techniques (n-grams, BoW, TF-IDF, etc.). 

• Comfortable working in Linux/UNIX environments. 

• Basic knowledge of HTML, JavaScript, HTTP, and Networking. 

• Strong communication skills and a collaborative mindset. 
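As a concrete reference for the text representation techniques listed above, here is a toy TF-IDF implementation. It is a sketch for intuition only; in practice a library implementation such as scikit-learn's TfidfVectorizer (which applies its own smoothing variants) would be used.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a small corpus (classic tf * ln(N/df))."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter(t for tokens in tokenized for t in set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({
            t: (count / len(tokens)) * math.log(n / df[t])
            for t, count in tf.items()
        })
    return weights

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
w = tf_idf(corpus)
# "the" appears in two of three docs, so its idf is low; "cat" and "mat"
# appear in only one doc and get the highest weights in the first document.
print(sorted(w[0], key=w[0].get, reverse=True)[:2])
```

The same shape extends to n-grams by tokenizing into overlapping word pairs instead of single words.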


Job Type: Full-Time Internship 

Location: In-Office (Bhayander) 


Read more
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹5L / yr
Python
Rust
PyTorch
model context protocol
Generative AI
+3 more

ML Intern

Hyperworks Imaging is a cutting-edge technology company based out of Bengaluru, India since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real world problems. We are rapidly growing, working with multiple companies around the world.

JOB OVERVIEW

We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future. 

The ideal candidate will work on the complete ML pipeline, from extraction, transformation, and analysis of data to developing novel ML algorithms. The candidate will implement the latest research papers and work closely with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.

RESPONSIBILITIES:

  • Create AI agents using the Model Context Protocol (MCP), Claude Code, DSPy, etc.
  • Develop custom evals for AI agents.
  • Build and maintain ML pipelines
  • Optimize and evaluate ML models to ensure accuracy and performance.
  • Define system requirements and integrate ML algorithms into cloud based workflows.
  • Write clean, well-documented, and maintainable code following best practices
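A "custom eval" for an AI agent, as mentioned above, is at heart a harness that runs the agent over graded cases and aggregates a score. The sketch below is hypothetical: the stub agent, the cases, and the report shape are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # grader: does the agent's answer pass?

def run_eval(agent: Callable[[str], str], cases: list) -> dict:
    """Run an agent over graded cases and report an aggregate score."""
    results = [(c.prompt, c.check(agent(c.prompt))) for c in cases]
    passed = sum(ok for _, ok in results)
    return {
        "pass_rate": passed / len(cases),
        "failures": [p for p, ok in results if not ok],
    }

# Stub agent standing in for an MCP- or DSPy-backed assistant.
def toy_agent(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "I don't know"

cases = [
    EvalCase("What is 2 + 2?", lambda ans: ans.strip() == "4"),
    EvalCase("Capital of France?", lambda ans: "paris" in ans.lower()),
]
report = run_eval(toy_agent, cases)
print(report)
```

Real evals swap the lambda graders for rubric- or LLM-based judges, but the run-grade-aggregate loop stays the same.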


REQUIREMENTS:

  • 1-3+ years of experience in data science, machine learning, or a similar role.
  • Demonstrated expertise with Python, PyTorch, and TensorFlow.
  • Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Physics
  • Has done coursework in Linear Algebra, Probability, Image Processing, Deep Learning and Machine Learning.
  • Has demonstrated experience with the Model Context Protocol (MCP), DSPy, AI agents, MLOps, etc.


WHO CAN APPLY:

Only candidates who meet the following criteria will be considered:

  • have relevant skills and interests
  • can commit full time
  • can show prior work and deployed projects
  • can start immediately

Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.

SALARY DETAILS: Commensurate with experience.

JOINING DATE: Immediate

JOB TYPE: Full-time




Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Read more
Gurugram, Bengaluru (Bangalore), Hyderabad, Mumbai
5 - 12 yrs
₹35L - ₹52L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)

Strong Senior Data Scientist (AI/ML/GenAI) Profile

Mandatory (Experience 1) – Must have a minimum of 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production

Mandatory (Experience 2) – Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.

Mandatory (Experience 3) – Must have 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.

Mandatory (Experience 4) – Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

Mandatory (Note): Budget is up to ₹25L for 5 years of experience, up to ₹35L for 7 years, and up to ₹45L for 12 years. The client can pay a maximum hike of 30-40% based on candidature.

ClanX

Posted by Ariba Khan
Bengaluru (Bangalore)
5 - 8 yrs
Up to ₹55L / yr (varies)
Python
Natural Language Processing (NLP)
Machine Learning (ML)
OCR
Large Language Models (LLM)
+1 more

This opportunity through ClanX is for Parspec (direct payroll with Parspec)


Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.


Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.


Company Details:

Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.


Requirements:

  • Bachelor’s or Master’s degree in Science or Engineering.
  • 5-7 years of experience in ML and data science.
  • Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
  • Strong understanding of foundational models and transformers.
  • Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
  • Recent experience working with large datasets.
  • Python experience with ML libraries such as NumPy, pandas, scikit-learn, Matplotlib, NLTK, and others.
  • Experience with frameworks like Hugging Face, spaCy, BERT, TensorFlow, PyTorch, OpenRouter, or Modal.


Responsibilities:

Must haves

  • Design, develop, and deploy NLP, CV, and recommendation systems
  • Train and implement deep learning models
  • Research and explore novel ML architectures
  • Build and maintain end-to-end ML pipelines
  • Collaborate across product, design, and engineering teams
  • Work closely with business stakeholders to shape product features
  • Ensure high scalability and performance of AI solutions
  • Uphold best practices in engineering and contribute to a culture of excellence
  • Actively participate in R&D and innovation within the team

Good to haves

  • Experience building scalable AI pipelines for extracting structured data from unstructured sources.
  • Experience with cloud platforms, containerization, and managed AI services.
  • Knowledge of MLOps practices, CI/CD, monitoring, and governance.
  • Experience with AWS or Django.
  • Familiarity with databases and web application architecture.
  • Experience with OCR or PDF tools.




Interview Process

  1. Technical interview (coding, ML concepts, project walkthrough)
  2. System design and architecture round
  3. Culture fit and leadership interaction
  4. Final offer discussion
AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
AWS CloudFormation
MLOps
DevOps
+23 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including in recent roles
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹15L - ₹30L / yr
Machine Learning (ML)
Amazon Web Services (AWS)
Kubernetes
ECS
Amazon Redshift
+14 more

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, and include LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.

 

Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.

 

Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)

 

Skills: AWS, AWS Cloud, Amazon Redshift, EKS

 

Must-Haves

Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker

Notice period - 0 to 15 days only

Hybrid work mode- 3 days office, 2 days at home

Pune
6 - 8 yrs
₹45L - ₹50L / yr
Python
Databricks
Machine Learning (ML)
Artificial Intelligence (AI)
CI/CD

We are looking for a Senior AI / ML Engineer to join our fast-growing team and help build AI-driven data platforms and intelligent solutions. If you are passionate about AI, data engineering, and building real-world GenAI systems, this role is for you!



🔧 Key Responsibilities

• Develop and deploy AI/ML models for real-world applications

• Build scalable pipelines for data processing, training, and evaluation

• Work on LLMs, RAG, embeddings, and agent workflows

• Collaborate with data engineers, product teams, and software developers

• Write clean, efficient Python code and ensure high-quality engineering practices

• Handle model monitoring, performance tuning, and documentation



Required Skills

• 2–5 years of experience in AI/ML engineering

• Strong knowledge of Python, TensorFlow/PyTorch

• Experience with LLMs, GenAI, RAG, or NLP

• Knowledge of Databricks, MLOps or cloud platforms (AWS/Azure/GCP)

• Good understanding of APIs, distributed systems, and data pipelines



🎯 Good to Have

• Experience in healthcare, SaaS, or big data

• Exposure to Databricks Mosaic AI

• Experience building AI agents

Clink

Posted by Hari Krishna
Hyderabad, Bengaluru (Bangalore)
0 - 2 yrs
₹4L - ₹8L / yr
Artificial Intelligence (AI)
Large Language Models (LLM)
Python
Machine Learning (ML)
FastAPI
+2 more

Role Overview

Join our core tech team to build the intelligence layer of Clink's platform. You'll architect AI agents, design prompts, build ML models, and create systems powering personalized offers for thousands of restaurants. High-growth opportunity working directly with founders, owning critical features from day one.


Why Clink?

Clink revolutionizes restaurant loyalty using AI-powered offer generation and customer analytics:

  • ML-driven customer behavior analysis (Pattern detection)
  • Personalized offers via LLMs and custom AI agents
  • ROI prediction and forecasting models
  • Instagram marketing rewards integration


Tech Stack:

  • Python,
  • FastAPI,
  • PostgreSQL,
  • Redis,
  • Docker,
  • LLMs


You Will Work On:

AI Agents: Design and optimize AI agents

ML Models: Build redemption prediction, customer segmentation, ROI forecasting

Data & Analytics: Analyze data, build behavior pattern pipelines, create product bundling matrices

System Design: Architect scalable async AI pipelines, design feedback loops, implement A/B testing

Experimentation: Test different LLM approaches, explore hybrid LLM+ML architectures, prototype new capabilities


Must-Have Skills

Technical: 0-2 years AI/ML experience (projects/internships count), strong Python, LLM API knowledge, ML fundamentals (supervised learning, clustering), Pandas/NumPy proficiency

Mindset: Extreme curiosity, logical problem-solving, builder mentality (side projects/hackathons), ownership mindset

Nice to Have: Pydantic, FastAPI, statistical forecasting, PostgreSQL/SQL, scikit-learn, food-tech/loyalty domain interest
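The ML fundamentals called out above (supervised learning, clustering) can be made concrete with a toy example. This is a minimal 1-D k-means sketch in plain Python segmenting made-up customer visit counts; it illustrates the clustering concept only and is not Clink's actual model:

```python
# A toy 1-D k-means (Lloyd's algorithm) sketch, e.g. segmenting customers
# by visit count. All numbers are invented for illustration.

def kmeans_1d(points, centers, iters=10):
    """Alternate assignment and update steps on 1-D data with fixed initial centers."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: move each center to the mean of its cluster
        # (keep the old center if a cluster ends up empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

visits = [1, 2, 2, 3, 10, 11, 12, 30, 32]          # visits per customer
centers, clusters = kmeans_1d(visits, [1.0, 10.0, 30.0])
print(centers)   # one center per segment: casual / regular / loyal
```

With this data the centers settle at 2.0, 11.0, and 31.0, one per behavioural segment.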

Ekloud INC
Posted by Kratika Agarwal
Bengaluru (Bangalore)
4 - 6 yrs
₹5L - ₹14L / yr
Machine Learning (ML)
Python
gpu framework
TensorFlow
Keras
+2 more

We are looking for enthusiastic engineers passionate about building and maintaining solutioning platform components on cloud and Kubernetes infrastructure. The ideal candidate will go beyond traditional SRE responsibilities by collaborating with stakeholders, understanding the applications hosted on the platform, and designing automation solutions that enhance platform efficiency, reliability, and value.

[Technology and Sub-technology]

• ML Engineering / Modelling

• Python Programming

• GPU frameworks: TensorFlow, Keras, PyTorch, etc.

• Cloud-based ML development and deployment on AWS or Azure


[Qualifications]

• Bachelor’s Degree in Computer Science, Computer Engineering or equivalent technical degree

• Proficient programming knowledge in Python or Java and ability to read and explain open source codebase.

• Good foundation of Operating Systems, Networking and Security Principles

• Exposure to DevOps tools, with experience integrating platform components into SageMaker/ECR and AWS Cloud environments.

• 4-6 years of relevant experience working on AI/ML projects


[Primary Skills]:

• Excellent analytical & problem solving skills.

• Exposure to Machine Learning and GenAI technologies.

• Understanding and hands-on experience with AI/ML modeling, libraries, frameworks, and tools (TensorFlow, Keras, PyTorch, etc.)

• Strong knowledge of Python, SQL/NoSQL

• Cloud-based ML development and deployment on AWS or Azure

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
Data Science
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI
Deep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
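The RAG pipelines listed above hinge on a retrieval step that selects context before the LLM call. A minimal sketch of that step, assuming toy documents and bag-of-words cosine similarity in place of a real embedding model and vector store (e.g. Weaviate/PGVector):

```python
import math
from collections import Counter

# Made-up documents standing in for a real knowledge base.
DOCS = [
    "refund policy: refunds are issued within 14 days",
    "shipping policy: orders ship within 2 business days",
    "returns: items can be returned within 30 days",
]

def embed(text):
    """Toy 'embedding': a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("when are refunds issued", DOCS))
```

A production pipeline would swap `embed`/`cosine` for a real embedding model and an approximate-nearest-neighbour index, then send the built prompt to the LLM.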


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


BOSCH MOBILITY

Agency job
Bengaluru (Bangalore)
8 - 16 yrs
₹18L - ₹20L / yr
CAMERA
Sensors
SENSOR
Deep Learning
Artificial Intelligence (AI)
+14 more

Job Title: Sensor Expert – MLFF (Multi-Lane Free Flow)
Engagement Type: Consultant / External Associate
Organization: Bosch - MPIN

Location: Bangalore, India


Purpose of the Role:


To provide technical expertise in sensing technologies for MLFF (Multi-Lane Free Flow) and ITMS (Intelligent Traffic Management System) solutions. The role focuses on camera systems, AI/ML-based computer vision, and multi-sensor integration (camera, RFID, radar) to drive solution performance, optimization, and business success.

Key Responsibilities:

  • Lead end-to-end sensor integration for MLFF and ITMS platforms.
  • Manage camera systems, ANPR, and data packet processing.
  • Apply AI/ML techniques for performance optimization in computer vision.
  • Collaborate with System Integrators and internal teams on architecture and implementation.
  • Support B2G proposals (smart city, mining, infrastructure projects) with domain expertise.
  • Drive continuous improvement in deployed MLFF solutions.


Key Competencies:


  • Deep understanding of camera and sensor technologies, AI/ML for vision systems, and system integration.
  • Experience in PoC development and solution optimization.
  • Strong analytical, problem-solving, and collaboration skills.
  • Familiarity with B2G environments and public infrastructure tenders preferred.


Qualification & Experience:


  • Bachelor’s/Master’s in Electronics, Electrical, or Computer Science.
  • 8–10 years of experience in camera technology, AI/ML, and sensor integration.
  • Proven track record in system design, implementation, and field optimization.

NA

Agency job
via eTalent Services by JaiPrakash Bharti
Remote only
3 - 8 yrs
₹5L - ₹14L / yr
Python
Machine Learning (ML)
Windows Azure
TensorFlow
MLFlow
+6 more

Role: Azure AI Tech Lead

Experience: 3.5-7 years

Location: Remote / Noida (NCR)

Notice Period: Immediate to 15 days

 

Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana

 

JOB DESCRIPTION

As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.

 

Key Responsibilities:

  • Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
  • Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
  • Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
  • Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
  • Collaborate cross-functionally to translate business goals into innovative AI solutions.
  • Enforce governance, responsible AI practices, and performance optimization standards.
  • Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.

 

Qualifications:

  • Bachelor’s or Master’s in Computer Science or related field.
  • 3.5–7 years of experience delivering end-to-end AI/ML solutions.
  • Strong expertise in Azure AI ecosystem and production-grade model deployment.
  • Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
  • Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.


Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹25L - ₹30L / yr
Machine Learning (ML)
AWS CloudFormation
Online machine learning
Amazon Web Services (AWS)
ECS
+20 more

MUST-HAVES: 

  • Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
  • Notice period - 0 to 15 days only 
  • Hybrid work mode- 3 days office, 2 days at home


SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS


ADDITIONAL GUIDELINES:

  • Interview process: 2 technical rounds + 1 client round
  • 3 days in office, Hybrid model. 


CORE RESPONSIBILITIES:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, and include LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


SKILLS:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


REQUIRED EXPERIENCE:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Inferigence Quotient

at Inferigence Quotient

Posted by Neeta Trivedi
Bengaluru (Bangalore)
2 - 3 yrs
₹6L - ₹12L / yr
Python
Deep Learning
Machine Learning (ML)
C++
CUDA
+11 more

AI-based systems design and development: the entire pipeline from image/video ingest, metadata ingest, processing, and encoding to transmission.

Implementation and testing of advanced computer vision algorithms.

Dataset search, preparation, annotation, training, testing, and fine-tuning of vision CNN models. Multimodal AI, LLMs, hardware deployment, explainability.

Detailed analysis of results. Documentation, version control, client support, upgrades.
