
50+ Machine Learning (ML) Jobs in India

Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!

Public Listed - Product Based company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹70L / yr
Data Science
data platforms
Data-flow analysis
Data pipelines
AI Infrastructure
+28 more

🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)

Experience: 4–8 Years

Location: Bengaluru (On-site / Hybrid)

Company: Publicly Listed, Global Product Platform


🧠 About the Mission

We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.

This is not incremental improvement.

This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.

We are re-architecting:

  • Legacy systems → AI-native architectures
  • Static pipelines → autonomous, self-healing systems
  • Data platforms → intelligent, learning systems
  • Software workflows → agentic execution layers

This is the kind of shift you would expect from companies like Google or Microsoft —

Except here, you will build it from day zero and scale it globally.


🧠 The Opportunity: This role sits at the intersection of three high-impact domains:

1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI

2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines

3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure


We are building systems where:

  • Data platforms optimize themselves using ML/LLMs
  • Pipelines are autonomous, self-healing, and adaptive
  • Queries are generated, optimized, and executed intelligently
  • Infrastructure learns from usage and evolves continuously

This is: AI as the control plane for data infrastructure


🧩 What You’ll Work On

You will design and build AI-native systems deeply embedded inside data infrastructure.

1. AI-Native Data Platforms

  • Build LLM-powered interfaces: natural language → SQL / pipelines / transformations
  • Design semantic data layers: embeddings, vector search, knowledge graphs
  • Develop AI copilots for data engineers, analysts, and platform users
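The "natural language → SQL" and semantic-layer bullets above rest on embedding-based retrieval: encode tables, columns, and the user's question as vectors, then match by similarity. A minimal sketch, with hand-made 3-d vectors standing in for real model embeddings (the table names and vectors are purely illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings": in a real semantic layer these would come from an
# embedding model over table/column descriptions.
table_embeddings = {
    "orders":    [0.9, 0.1, 0.0],
    "customers": [0.1, 0.9, 0.0],
    "logs":      [0.0, 0.1, 0.9],
}

def nearest_table(query_vec):
    # Retrieve the table whose embedding is closest to the query embedding.
    return max(table_embeddings, key=lambda t: cosine(query_vec, table_embeddings[t]))

print(nearest_table([0.8, 0.2, 0.1]))  # prints "orders"
```

A real system would then ground the generated SQL against the retrieved schema before execution.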

2. Autonomous Data Pipelines

  • Build self-healing ETL/ELT systems using AI agents
  • Create pipelines that:
      • Detect anomalies in real time
      • Automatically debug failures
      • Dynamically optimize transformations
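A toy sketch of two of the self-healing behaviours named above: retrying past a transient failure, and flagging anomalous values with a z-score. The flaky step and thresholds are invented for illustration:

```python
import statistics

def run_with_retries(step, retries=3):
    # Re-run a flaky pipeline step -- the simplest form of "self-healing".
    last_err = None
    for attempt in range(retries):
        try:
            return step(attempt)
        except RuntimeError as err:
            last_err = err
    raise last_err

def detect_anomalies(values, z_threshold=3.0):
    # Flag points more than z_threshold standard deviations from the mean.
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# A step that fails twice, then succeeds -- stands in for a transient failure.
def flaky_step(attempt):
    if attempt < 2:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky_step))                             # "ok" on the third try
print(detect_anomalies([10, 11, 9, 10, 500], z_threshold=1.5))  # [500]
```

In the agentic version described above, the "retry" branch would be replaced by an LLM agent that inspects the failure and proposes a fix.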

3. Intelligent Query & Compute Optimization

  • Apply ML/LLMs to:
      • Query planning and execution
      • Cost-based optimization using learned models
      • Workload prediction and scheduling
  • Build systems that:
      • Learn from query patterns
      • Continuously improve performance and cost efficiency
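"Cost-based optimization using learned models" can start as simply as regressing observed runtimes on query features. A minimal single-feature sketch (the execution history is made up; real learned optimizers use many features and richer models):

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b -- the simplest "learned" cost model.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Past query executions: (rows scanned in millions, runtime in seconds).
rows = [1, 2, 4, 8]
secs = [1.1, 2.0, 4.2, 7.9]
a, b = fit_linear(rows, secs)

def predict_runtime(rows_scanned):
    # Estimated cost of a candidate plan, used to pick among alternatives.
    return a * rows_scanned + b

print(round(predict_runtime(6), 1))  # prints 6.0
```

The optimizer then ranks candidate plans by predicted cost and re-fits as new executions are observed, which is the "learn from query patterns" loop.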

4. Distributed Data + AI Infrastructure

  • Architect systems operating at billions of events per day and petabyte-scale data
  • Work with:
      • Distributed compute engines (Spark / Flink / Ray-class systems)
      • Streaming systems (Kafka-class infra)
      • Vector databases and hybrid retrieval systems

5. Learning Systems & Feedback Loops

  • Build closed-loop AI systems: execution → feedback → model updates
  • Develop:
      • Continual learning pipelines
      • Online learning systems for infra optimization
      • Experimentation frameworks (A/B tests, bandits, eval pipelines)
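One concrete experimentation primitive from the list above is the multi-armed bandit. A minimal epsilon-greedy sketch choosing between two hypothetical query plans (the rewards are invented; production systems would use real latency or cost signals):

```python
import random

class EpsilonGreedyBandit:
    # Minimal epsilon-greedy bandit: explore with probability epsilon, else exploit.
    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update of the arm's estimated reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedyBandit(n_arms=2)
# Seed each arm once so both start with an estimate; arm 1 is genuinely better.
for arm in (0, 1):
    bandit.update(arm, 1.0 if arm == 1 else 0.2)
for _ in range(500):
    arm = bandit.select()
    bandit.update(arm, 1.0 if arm == 1 else 0.2)

print(bandit.values[1] > bandit.values[0])  # prints True: the better plan dominates
```

The same loop generalizes to routing traffic between model versions or pipeline configurations.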

6. LLM & Agentic Systems (Infra-Aware)

  • Build agents that understand data systems
  • Enable:
      • Autonomous pipeline debugging
      • Root-cause analysis for infra failures
      • Intelligent orchestration of data workflows


🧠 What We’re Looking For

Core Foundations

  • Strong grounding in:
      • Machine learning, deep learning, NLP
      • Statistics, optimization, probabilistic systems
      • Distributed systems fundamentals
  • Deep understanding of:
      • Transformer architectures
      • Modern LLM ecosystems

Hands-On Expertise

  • Experience building:
      • LLM / GenAI systems (RAG, fine-tuning, embeddings)
      • Data platforms (warehouse, lake, lakehouse architectures)
      • Distributed pipelines and compute systems
  • Strong programming skills:
      • Python (ML/AI stack)
      • SQL (deep understanding: query planning, optimization mindset)


Systems Thinking (Critical)

You think in systems, not components.

  • Built or worked on:
      • Large-scale data pipelines
      • High-throughput distributed systems
      • Low-latency, high-concurrency architectures
  • Understand:
      • Query optimization and execution
      • Data partitioning, indexing, caching
      • Trade-offs in distributed systems


🔥 What Sets You Apart (Top 1%)

  • Built AI-powered data platforms or infra systems in production
  • Designed or contributed to:
      • Query engines / optimizers
      • Data observability / lineage systems
      • AI-driven infra or AIOps platforms
  • Experience with:
      • Multi-modal AI (logs, metrics, traces, text)
      • Agentic AI systems
      • Autonomous infrastructure
  • Worked on systems at a scale comparable to:
      • Google (BigQuery-like systems)
      • Meta (real-time analytics infra)
      • Snowflake / Databricks (lakehouse architectures)


🧬 Ideal Background (Not Mandatory)

We often see strong candidates from:

  • Data infrastructure or platform engineering teams
  • AI-first startups or research-driven environments
  • High-scale product companies

Experience building:

  • Internal platforms used by 1000s of engineers
  • Systems serving millions of users / high throughput workloads
  • Multi-region, distributed cloud systems


🧠 The Kind of Problems You’ll Solve

  • Can LLMs replace traditional query optimizers?
  • How do we build self-healing data pipelines at scale?
  • Can data systems learn from every query and improve automatically?
  • How do we embed reasoning and planning into infrastructure layers?
  • What does a fully autonomous data platform look like?


Background: We Commonly See (But Not Limited To)

Our team often includes engineers from top-tier institutions and strong research or product backgrounds, including:

  • Leading engineering schools in India and globally
  • Engineers with experience in top product companies, AI startups, or research-driven environments

That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.


Public Listed - Product Based Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
7 - 10 yrs
₹40L - ₹75L / yr
Java
Python
Go Programming (Golang)
NodeJS (Node.js)
Database Design
+36 more

🧭 Tech Lead (Backend / Fullstack | 7–10 Years)

Location: Bangalore (On-Site, Hybrid)

Company Type: Public-Listed Product Company


We’re Building a “Top 1% Engineering Org”

We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.

Think:

→ Rewriting legacy systems into AI-native architectures

→ Embedding LLMs + Agentic AI into core workflows

→ Reimagining platforms, infra, and data systems for the next decade

This is the kind of shift you’d expect from Google, Microsoft, or Meta —

Except you get to build it from day 0 → scale it globally.


About the Role / Team

We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.

This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.


You will be working on:

  • Agentic AI systems & LLM-powered workflows
  • Distributed, scalable backend systems
  • Enterprise-grade AI platforms
  • Automation-first engineering environments

🚀 The Mandate

Lead execution of mission-critical systems while staying hands-on — bridging architecture and delivery.


🧩 What You’ll Do

  • Own end-to-end delivery of complex engineering initiatives (0→1, 1→N)
  • Design systems across backend + frontend (if fullstack)
  • Translate ambiguous problems into structured technical solutions
  • Drive engineering best practices, code quality, and velocity
  • Mentor engineers and elevate team performance
  • Collaborate with stakeholders on roadmap and execution strategy


🧠 What We’re Looking For

  • Strong experience in backend systems + optional frontend frameworks
  • Proven ability to lead projects and deliver at scale
  • Solid understanding of system design and architecture patterns
  • Ability to balance speed vs quality vs scalability trade-offs
  • Strong communication and leadership without authority
  • Strong coding skills in Python / Java / Go / Node.js
  • Solid understanding of data structures, system design basics, and backend architecture
  • Experience building scalable APIs and services
  • Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
  • Strong debugging, problem-solving, and ownership mindset


Nice to Have

  • Experience integrating LLMs, vector databases, or AI pipelines
  • Contributions to architecture at scale
  • Experience with Agentic AI / LLM orchestration frameworks
  • Background in product engineering or platform companies
  • Exposure to global-scale systems (millions of users / high throughput)


🔥 What Sets You Apart

  • Experience leading platform builds or major system rewrites
  • Exposure to AI systems, LLM integrations, or intelligent workflows
  • Built platforms used by millions of users / high-throughput systems
  • Experience with event-driven systems, stream processing, or infra platforms
  • Prior work on AI/ML platforms, model serving, or intelligent systems


Background: We Commonly See (But Not Limited To)

Our team often includes engineers from top-tier institutions and strong research, product, DeepTech, or AI product backgrounds, including:

  • Leading engineering schools in India and globally
  • Engineers with experience in top product companies, AI startups, or research-driven environments

That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.


Recruiting Bond

Posted by Pavan Kumar
Bengaluru (Bangalore), Mumbai
10 - 16 yrs
₹75L - ₹130L / yr
Distributed Systems
Microservices
Enterprise architecture
System Design & Architecture
Event-Driven Architecture
+29 more

🚨 We’re Building a “Top 1% Engineering Org”


We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.

Think:

→ Rewriting legacy systems into AI-native architectures

→ Embedding LLMs + Agentic AI into core workflows

→ Reimagining platforms, infra, and data systems for the next decade

This is the kind of shift you’d expect from Google, Microsoft, or Meta —

Except you get to build it from day 0 → scale it globally.


About the Role / Team

We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.


This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.


You will be working on:

  • Agentic AI systems & LLM-powered workflows
  • Distributed, scalable backend systems
  • Enterprise-grade AI platforms
  • Automation-first engineering environments


🚀 The Mandate

Own and evolve the technical backbone of an AI-first enterprise platform.


You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.


🧩 What You’ll Do

  • Architect large-scale distributed systems powering AI-driven workflows
  • Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
  • Redesign legacy systems into scalable, modular, AI-native architectures
  • Drive system design excellence across teams (APIs, infra, observability, reliability)
  • Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
  • Mentor senior engineers and influence engineering culture/org standards
  • Partner with product, data, and leadership on long-term technical strategy


🧠 What We’re Looking For

  • Proven track record building high-scale backend or platform systems
  • Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
  • Strong exposure to data systems, infrastructure, and real-time architectures
  • Experience or strong interest in LLMs, GenAI, or AI system design
  • Exceptional system design, abstraction, and problem-solving ability
  • High ownership mindset — you think in terms of systems, not tickets
  • Strong coding skills in Python / Java / Go / Node.js
  • Solid understanding of data structures, system design basics, and backend architecture
  • Experience building scalable APIs and services
  • Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
  • Strong debugging, problem-solving, and ownership mindset
  • Ability to solve hard system problems (latency, scale, reliability)
  • Ability to drive cross-team technical decisions and standards
  • Mentorship and technical leadership: mentor senior engineers and influence org-wide architecture
  • Expertise in designing large-scale distributed systems and backend platforms, with a focus on scalability and performance optimization


Nice to Have

  • Experience integrating LLMs, vector databases, or AI pipelines
  • Contributions to architecture at scale
  • Experience with Agentic AI / LLM orchestration frameworks
  • Background in product engineering or platform companies
  • Exposure to global-scale systems (millions of users / high throughput)


🔥 What Sets You Apart

  • Built platforms used by millions of users / high-throughput systems
  • Experience with event-driven systems, stream processing, or infra platforms
  • Prior work on AI/ML platforms, model serving, or intelligent systems
Deqode

Posted by Samiksha Agrawal
Anywhere
6 - 10 yrs
₹10L - ₹25L / yr
Machine Learning (ML)
Windows Azure
Python

Role: ML Engineer

Location: Remote

Experience: 5+ Years


Key Skills Required:

• Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines

• Model deployment & versioning via Azure ML

• MLflow for experiment tracking & model lifecycle management

• MLOps best practices — orchestration, CI/CD, model monitoring

• Strong Python skills (Linting, Black, dependency management)

• Drift detection & performance monitoring

• Docker-based deployment (good to have)
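As a rough illustration of the drift-detection bullet above, one of the simplest checks is a standardized shift of a live feature's mean against its training-time distribution. The data and threshold below are invented; production monitoring would typically use tests like KS or PSI on full distributions:

```python
import statistics

def drift_score(reference, live):
    # Standardized shift of the live mean vs. the training-time distribution.
    mu = statistics.mean(reference)
    sigma = statistics.pstdev(reference) or 1.0
    return abs(statistics.mean(live) - mu) / sigma

# Feature values seen at training time vs. two live windows.
reference = [0.10, 0.20, 0.15, 0.18, 0.12, 0.17]
stable    = [0.14, 0.19, 0.16]   # looks like training data
drifted   = [0.90, 1.10, 0.95]   # clearly shifted

THRESHOLD = 3.0
print(drift_score(reference, stable) < THRESHOLD)   # True: no alert
print(drift_score(reference, drifted) > THRESHOLD)  # True: raise an alert
```

In the Azure ML setup described above, a score breaching the threshold would typically trigger an alert or a retraining pipeline run.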

SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
9 - 14 yrs
₹50L - ₹65L / yr
NodeJS (Node.js)
Python
Amazon Web Services (AWS)
TypeScript
MongoDB
+25 more

Job Details

Job Title: Director of Engineering

Industry: SAAS

Function: Information Technology

Experience Required: 9-14 years

Working Days: 6 days

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: TypeScript, AWS, Node.js, MongoDB, React.js, WebGL, Three.js, AI/ML, Docker, Kubernetes

 

Criteria

Candidates must have 9+ years of engineering experience, with 3–4 years in technical leadership.

Hands-on expertise with React/Next.js, Node.js/Python, and AWS.

Ability to design scalable architectures for high-performance systems.

Should have AI/ML deployment experience

Strong 3D graphics/WebGL/Three.js knowledge.

Candidates should be from SaaS/software/IT services startups or scale-up companies only.

 

Job Description

The Role:

Company is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.

This is not a pure management role: expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, with full ownership of technical vision and long-term strategy.

 

What You’ll Own:

1. Technical Leadership & Architecture

● Architect company’s full-stack platform across frontend, backend, infrastructure, and AI.

● Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.

● Make decisions on stack, scalability patterns, architecture, and technical debt.

● Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.

● Lead architectural discussions, design reviews, and set engineering standards.

 

2. Hands-On Development

● Write production-grade code across frontend, backend, APIs, and cloud infra.

● Build critical features and core system components independently.

● Debug complex systems and optimize performance end-to-end.

● Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.

● Build scalable backend services for large-scale asset processing and real-time pipelines.

● Develop WebGL/Three.js rendering and AR workflows.

 

3. Team Building & Engineering Management

● Hire and grow a team of 5–8 engineers initially (scaling to 15–20).

● Establish engineering culture, values, and best practices.

● Build career frameworks, performance systems, and growth plans.

● Conduct 1:1s, mentor engineers, and drive continuous improvement.

● Set up processes for agile execution, deployments, and incident response.

 

4. Product & Cross-Functional Collaboration

● Work with the founder and product team on roadmap, feasibility, and prioritization.

● Translate product requirements into technical execution plans.

● Collaborate with design for UX quality and technical alignment.

● Support sales and customer success with integrations and technical discussions.

● Contribute technical inputs to product strategy and customer-facing initiatives.

 

5. Engineering Operations & Infrastructure

● Own CI/CD, testing frameworks, deployments, and automation.

● Create monitoring, logging, and alerting setups for reliability.

● Manage AWS infrastructure with a focus on cost and performance.

● Build internal tools, documentation, and developer workflows.

● Ensure enterprise-grade security, compliance, and reliability.

 

Tech Stack:

1. Frontend

React.js, Next.js, TypeScript, WebGL, Three.js

2. Backend

Node.js, Python, Express/FastAPI, REST, GraphQL

3. AI/ML

PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines

4. 3D & Graphics

Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization

5. Databases

PostgreSQL, MongoDB, Redis, vector databases

6. Cloud & Infra

AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes

CI/CD: GitHub Actions

Monitoring: Datadog, Sentry

 

What We’re Looking For:

1. Must-Haves

● 9+ years of engineering experience, with 3–4 years in technical leadership.

● Deep full-stack experience with strong system design fundamentals.

● Proven success building products from 0→1 in fast-paced environments.

● Hands-on expertise with React/Next.js, Node.js/Python, and AWS.

● Ability to design scalable architectures for high-performance systems.

● Strong people leadership with experience hiring and mentoring teams.

● Ready to code, review, design, and lead from the front.

● Startup mindset: fast execution, problem-solving, ownership.

 

2. Highly Desirable

● AI/ML deployment experience (CV, generative AI, 3D reconstruction).

● Strong 3D graphics/WebGL/Three.js knowledge.

● Experience with real-time systems, rendering optimizations, or large-scale pipelines.

● Background in B2B SaaS, XR, gaming, or immersive tech.

● Experience scaling engineering teams from 5 → 20+.

● Open-source contributions or technical content creation.

● Experience working closely with founders or executive leadership.

 

Why Company:

● Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.

● Build from day zero – architecture, team, and culture.

● Path to CTO as the company scales.

● High autonomy to drive technical decisions.

● Direct founder collaboration on product vision.

● High ownership, high-growth environment.

● Backed by global leaders: Microsoft, Google, NVIDIA, AWS.

 

Location & Work Culture:

● Location: HSR Layout, Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: High-intensity, high-integrity, engineering-first

● Team: Young, ambitious, technically strong

RIA Advisory

Posted by Abhishek Surwade
Remote, Pune
8 - 15 yrs
₹30L - ₹60L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
AI Agents
Large Language Models (LLM)

Experience: 10+ years of experience in software development and project management, with specialization in AI/ML

Qualification: B.E/B.Tech

Location: Pune

 

Role Overview

We are seeking a Head of AI Center of Excellence to execute our enterprise AI strategy. This role will be responsible for designing and delivering agentic AI systems and production-grade AI solutions, while driving rapid experimentation and pilot-ready proof-of-concepts in a fast-paced environment.

Required Qualifications:

  • 10+ years of overall software development & management experience with 5+ years of hands-on experience in AI/ML system design and development
  • Experience with technical project management; managing a team of AI/ML engineers across multiple projects
  • Proven expertise in:
      • Agentic AI architectures, LLM-based systems, and orchestration frameworks
      • ML/DL model development, training, fine-tuning, and evaluation
      • MLOps, model deployment, monitoring, and lifecycle management
  • Strong proficiency in Python and modern AI/ML frameworks (e.g., LangGraph, PyTorch, TensorFlow, Hugging Face)
  • Experience with cloud platforms and AI services (AWS, Azure, or GCP)
  • Demonstrated ability to deliver pilot-ready AI PoCs quickly and effectively
Infinity Assurance

Posted by Sourabh Pal
Delhi
2 - 3 yrs
₹4L - ₹6L / yr
Machine Learning (ML)
Python
Artificial Intelligence (AI)
Computer Vision

Job Title: AI/ML Engineer

Work Location: U.S Complex, Adjacent to Jasola Apollo Metro Station, Mathura Road New Delhi-110076

 

We, Infinity Assurance Solutions, specialize in Warranty Service Administration, Extended Warranty, Accidental Damage Protection, and a wide range of service products under our own brand “InfyShield.”

 

Our offerings cover Mobile Phones, Home Appliances, Consumer Electronics, Kitchen Appliances, IT Equipment, Office Automation, AV Solutions, Classroom and Conference Room Technologies, and more.

 

  • We have a very extensive, enterprise-grade, end-to-end business management software application that is unmatched in the industry.
  • The application has multiple sub-applications and functionalities, including Sales, Insurance Claims, Warranty Claims, Payments, Collections, Approvals, Billing/Invoicing, Payment/Tax/Bank Reconciliations, Partner Management, HRMS, Client Management, etc., to suit the end-to-end business needs of any enterprise.
  • The application also has multiple integrations for payment gateways, voice calls, video calls, SMS, email, WhatsApp, client applications, courier, maps, databases, etc.
  • To fuel our growth, we are inviting a Computer Vision Engineer as we build out our software development team to execute new business growth plans and a fresh product roadmap.
  • This position requires multi-skilled talent with hands-on experience, able to work independently as well as in teams.
  • Ideal candidates will be responsible for designing, modifying, developing, writing, and implementing software applications and components.
  • Our technology processes documents and images across warranty, insurance, claims, and identity workflows, where trust, precision, and fraud prevention are paramount.

 

Detailed Role Description

 

  • Assist in developing a secure, AI-powered platform for verification and assurance.
  • Contribute to developing and improving computer vision models for image forgery detection, replay detection, and advanced fraud analysis.
  • Implement and experiment with image processing techniques such as noise analysis, Error Level Analysis (ELA), blur detection, and frequency-domain features.
  • Support OCR pipelines using tools like PaddleOCR or Azure AI Vision to extract text from photos of identity documents.
  • Help prepare and clean real-world image datasets, including handling low-quality, noisy, and partially occluded images.
  • Integrate trained models into Python-based APIs (FastAPI) for internal testing and production use.
  • Collaborate with senior engineers to test, debug, and optimize model performance.
  • Document experiments, findings, and implementation details clearly.
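Blur detection, mentioned in the role above, is often scored as the variance of the image Laplacian: blurry images have weak edges and therefore low variance. A pure-Python toy version on tiny hand-made "images" (real pipelines would use OpenCV's cv2.Laplacian on NumPy arrays):

```python
import statistics

def laplacian_variance(img):
    # Variance of the discrete 4-neighbour Laplacian -- a classic sharpness score.
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    return statistics.pvariance(responses)

# Toy 4x4 grayscale "images": one with a hard edge, one nearly flat (blurred).
sharp   = [[0, 0, 255, 255]] * 4
blurred = [[100, 110, 120, 130]] * 4

print(laplacian_variance(sharp) > laplacian_variance(blurred))  # prints True
```

A document-quality gate would reject uploads whose score falls below a tuned threshold before they reach OCR.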

 

Required Skills

 

  • 2–4 years of experience in machine learning, computer vision, image processing, and AI
  • Bachelor’s or Master’s degree in AI & Machine Learning, Computer Science, Data Science, or related fields
  • Proficiency in machine learning and Python
  • Core experience with OpenCV, NumPy, and core image processing concepts
  • Hands-on experience with PyTorch and TensorFlow
  • Understanding of Convolutional Neural Network (CNN) fundamentals
  • Hands-on experience with REST APIs or FastAPI
  • Exposure to OCR, document processing, or facial analysis

 

Desired Candidate Profile

 

  • Prior experience in image forensics, fraud detection, and biometrics
  • Comfortable working with imperfect real-world data
  • Good communication skills and a team-oriented mindset

 

Important Notes & Perks:

 

  • Attractive pay structure as per market standards
  • Huge career growth opportunity
  • Preference will be given to candidates who can join early
  • Should have worked in small teams with multi-skilled resources
  • This is a full-time, work-from-office opportunity (preference will be given to candidates who are open to Monday to Saturday, 6 days a week)
  • Applications may be submitted via Google Form: https://forms.gle/TC8kypz3SwN256sP6

 

About us:

We, Infinity Assurance Solutions Private Limited, are a New Delhi-based portfolio company of Indian Angel Network, Aegis Centre for Entrepreneurship, Artha Venture Fund, eVista Venture, and other marquee industry veterans. We specialize in Warranty Service Administration, Extended Warranty, Accidental Damage Protection, and various other service products for a wide range of Mobile Phones, Home Appliances, Consumer Electronics, AV Solutions, Classroom / Conference-room Solutions, Kitchen Appliances, IT, Office Automation, Personal Gadgets, etc.

 

Incorporated in January 2014, we are a debt-free, operationally profitable company with positive net retained earnings, and we have grown rapidly. Going forward, we are looking to grow multi-fold with newer areas of business expansion.

 

Our success is attributed to a very agile and technologically driven unique service delivery model, loyal long-term clients, in-house application, and lean organization structure.

 

More about us:

https://www.infinityassurance.com

https://www.infyshield.com

https://www.infyvault.com


Cambridge Wealth (Baker Street Fintech)

Posted by Sangeeta Bhagwat
Pune
3 - 5 yrs
₹10L - ₹12L / yr
Python
SQL
Amazon Web Services (AWS)
PostgreSQL
pandas
+9 more


Department

Product & Technology

Location

On-site | Prabhat Road, Pune

Experience

3-5 Years in a Data Engineering or Analytics Role

Domain

Fintech / Wealth Management — non-negotiable

Compensation

11-12 LPA Fixed + Performance Bonus

Growth

Title upgrade + salary revision at 12–18 months for strong performers


Why this role is different from most Data Engineer postings

You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.


About Cambridge Wealth

Cambridge Wealth is a fast-growing, award-winning financial services and fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals, and have received multiple awards from major mutual fund houses and the BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.


What You Will Be Doing

This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.

We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.


Key Responsibilities:


Data Engineering & Pipelines

  • Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
  • Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
  • Write advanced SQL — window functions, stored procedures, query optimization, index design.
  • Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
  • Monitor AWS RDS workloads and troubleshoot performance issues proactively.
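A small illustration of the window-function work described above, using Python's sqlite3 module with an in-memory database as a stand-in for PostgreSQL (the table and column names are invented; PostgreSQL adds richer frame options and stored procedures on top of the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (client TEXT, dt TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [("A", "2024-01-01", 100.0),
     ("A", "2024-02-01", 50.0),
     ("B", "2024-01-15", 200.0)],
)

# Running invested amount per client: a window function over a partition.
rows = conn.execute("""
    SELECT client, dt, amount,
           SUM(amount) OVER (PARTITION BY client ORDER BY dt) AS running_total
    FROM txns
    ORDER BY client, dt
""").fetchall()

for row in rows:
    print(row)  # client A's running_total grows 100.0 -> 150.0
```

The same pattern (partition by client or scheme, order by date) underpins running balances, rolling returns, and rank-based benchmarking queries.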


Financial Analytics & Modelling

  • Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
  • Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
  • Create materialized views and derived tables optimized for dashboards and internal reporting tools.
  • Analyse client transaction history to surface patterns in investment behaviour and financial discipline.


Applied ML & AI-Driven Development

  • Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
  • Implement classification or regression models to support financial pattern detection.
  • Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
  • Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.
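A minimal sketch of the classification idea above, using a nearest-centroid rule in pure Python as a stand-in for a Scikit-learn model, on invented two-feature client profiles (e.g. transaction frequency and average ticket size):

```python
import math

def centroid(points):
    # Component-wise mean of a list of feature vectors.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def fit(X, y):
    # One centroid per class label.
    labels = sorted(set(y))
    return {lbl: centroid([x for x, t in zip(X, y) if t == lbl]) for lbl in labels}

def predict(model, x):
    # Assign the label of the closest centroid.
    return min(model, key=lambda lbl: math.dist(x, model[lbl]))

# Hypothetical labelled profiles: [monthly txns, avg ticket size in lakhs].
X = [[1, 1], [2, 1], [8, 9], [9, 8]]
y = ["regular", "regular", "erratic", "erratic"]
model = fit(X, y)

print(predict(model, [1.5, 1.2]))  # prints "regular"
print(predict(model, [8.5, 9.0]))  # prints "erratic"
```

In production the same fit/predict interface would be a Scikit-learn classifier trained on engineered features from the transaction pipelines.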


Data Quality & Governance

  • Own data integrity end-to-end in a live, high-stakes financial environment.
  • Build and maintain validation and cleaning protocols across all financial datasets.
  • Maintain Excel models, Power Query workflows, and structured reporting outputs.


Collaboration & Junior Mentorship

  • Work directly with Product, Investment Research, and Wealth Advisory teams.
  • Translate open-ended business questions into structured queries and measurable outputs.
  • Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
  • Present findings clearly to non-technical stakeholders — no jargon, just clarity.


Skills — What We Need vs. What Helps

Must-Haves:

  • SQL & PostgreSQL (window functions, stored procedures, optimization)
  • Python — Pandas, NumPy for data processing and automation
  • ML fundamentals — classification or regression (Scikit-learn)
  • AWS RDS or equivalent cloud database experience
  • Financial domain knowledge — mutual funds, SIPs, portfolio concepts
  • Python data visualization — Matplotlib, Seaborn, or Plotly

Strong Advantage:

  • Excel — Power Query, advanced modelling
  • Materialized views, query planning, index optimization
  • Experience with BI/dashboard tools

Good to Have:

  • NoSQL databases
  • Prior fintech or wealth management startup experience


Financial Domain — Non-Negotiable

This is a wealth management platform. You must come in with a working understanding of:

  • Mutual fund structures, scheme types, and NAV-based transactions
  • Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
  • Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
  • How HNI/NRI clients interact with financial products differently from retail investors

You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.


The Culture Fit — Read This Carefully

We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:

  • Has worked at a small startup before and is used to wearing multiple hats
  • Finds broken or slow data systems genuinely irritating and fixes them without being asked
  • Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
  • Is comfortable saying 'I don't know but I'll find out' and follows through independently
  • Wants visibility and ownership, not just a well-defined job description
  • Is looking for a role where strong performance is directly visible and rewarded


Growth Path — What Happens If You Perform

This is not a vague 'growth opportunity' pitch.

If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.


Preferred Background

  • 2–4 years in a data engineering or analytics role at a startup or small Fintech
  • Experience in a live product environment where data errors have real consequences
  • Exposure to portfolio analytics, investment research, or wealth management platforms
  • Has mentored or reviewed code for at least one junior team member


Hiring Process

We respect your time. The process is direct and moves fast.

  • Screening Questions — 5 minutes online
  • Online Challenge — MCQs (data, SQL, AWS, etc.) plus one applied ML or analytics problem, with a brief communication and personality assessment (focused, not trick questions)
  • People Round — 30-minute video call, culture and communication
  • Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
  • Founder's Interview — 1 hour in person, growth conversation and mutual fit
  • Offer & Background Verification


Read more
Deltek
Harsha Mehrotra
Posted by Harsha Mehrotra
Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
skill iconPython
skill iconReact.js
Retrieval Augmented Generation (RAG)
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
+2 more

We are looking for a highly skilled AI Platform Engineer to build and scale agentic AI capabilities across our product suite. You’ll work on multi‑agent systems, orchestration platforms, RAG pipelines, and real‑time AI services used in enterprise workflows.


🔹 Key Responsibilities

  • Build and maintain AI platform services enabling agentic workflows
  • Develop domain-specific agents (proposal generation, compliance, data analysis)
  • Implement multi-agent orchestration using LangGraph and related frameworks
  • Build APIs, SDKs, and integration layers for product teams
  • Design and optimize RAG, GraphRAG, and knowledge ingestion pipelines
  • Enhance orchestration platforms, WebSocket communication, and error recovery
  • Optimize performance, latency (<3s), cost, and reliability of AI systems
  • Collaborate closely with ML engineers and data scientists on models, prompts, and A/B testing
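To ground the RAG responsibility above, here is a deliberately minimal sketch of the retrieval step: embed documents and a query, then rank by cosine similarity. The hashed bag-of-words "embedding" is a stand-in for a real embedding model, and the documents are invented; production pipelines use learned embeddings and a vector database.

```python
import math
from collections import Counter

def embed(text, dim=64):
    """Toy bag-of-words embedding via token hashing (stand-in for a real embedding model)."""
    vec = [0.0] * dim
    for tok, count in Counter(text.lower().split()).items():
        vec[hash(tok) % dim] += count
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "proposal generation workflow for enterprise sales",
    "compliance checklist for data retention policies",
    "quarterly revenue analysis and forecasting",
]
doc_vecs = [embed(d) for d in docs]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    scores = [(cosine(embed(query), v), d) for v, d in zip(doc_vecs, docs)]
    return [d for _, d in sorted(scores, reverse=True)[:k]]

print(retrieve("compliance policies for data retention"))
```

The retrieved passages would then be injected into the LLM prompt — the generation half of RAG, which this sketch omits.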

🔹 Required Skills & Experience

  • 5+ years of software engineering experience (1–2+ years in AI/ML systems)
  • Strong Python (FastAPI, async, LangChain/LangGraph)
  • Experience with LLM APIs (OpenAI, Claude, Llama, Phi‑3)
  • Hands-on RAG, embeddings, vector databases, and hybrid search
  • React + TypeScript experience (WebSockets, hooks, real-time UI)
  • Knowledge of multi-agent systems, prompt engineering, and orchestration patterns
  • Solid backend fundamentals: REST APIs, databases, auth, testing, Git

🔹 Nice to Have

  • MLOps exposure (prompt versioning, monitoring, A/B testing)
  • Experience with semantic caching and context management
  • Docker and cloud deployment basics
Read more
VDart
Remote only
7 - 15 yrs
₹15L - ₹20L / yr
Test Automation (QA)
SaaS
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Large Language Models (LLM)
+7 more

Senior Quality Engineer – AI Products

Fulltime

Remote

Requirements

● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.

● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.

● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.

● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.

● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.

● Experience with AWS or other major cloud platforms.

● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.

● Advanced skills with API and SQL testing methodologies.

● Familiarity with test management tools such as TestRail; experience with Qase is a plus.

● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.

● Experience with testing tools: Jira, Sentry, DataDog.

● Strong understanding of Agile/Scrum methodologies.

● Proven track record of mentoring junior engineers and contributing to process improvements.

● Excellent analytical and problem-solving abilities.

● Strong communication skills with ability to present to both technical and non-technical stakeholders.

● Proficiency in English (C1-C2 level).

● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.

 

Preferred Qualifications

● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).

● Hands-on experience with document parsing, OCR, or unstructured data pipelines.

● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.

● Experience testing SaaS products in regulated industries (such as PCI-compliant).

● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).

● Experience with microservice architectures and distributed systems.

● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.

● Background in security or compliance testing for AI systems.

● Certifications such as ISTQB or CSTE.

● Experience working in legal technology, fintech, or professional services software.

● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.

● Experience evaluating and implementing new QE tools and processes

 

Read more
Remote only
0 - 1 yrs
₹12000 - ₹18000 / mo
Artificial Intelligence (AI)
skill iconMachine Learning (ML)

About The Nexora Group Inc.


The Nexora Group Inc. is a technology-driven organization focused on developing intelligent software solutions using Artificial Intelligence, Machine Learning, and advanced data technologies. Our teams work on innovative projects involving data analysis, predictive modeling, automation systems, and AI-powered applications designed to solve real-world business problems.

We are seeking passionate and motivated Artificial Intelligence & Machine Learning Interns who want to gain hands-on experience working on practical AI development projects.


Internship Responsibilities


  • Assist in developing and implementing machine learning models
  • Work on data preprocessing, data analysis, and model training
  • Support AI projects involving predictive analytics, automation, and intelligent systems
  • Use Python libraries such as NumPy, Pandas, Scikit-learn, or TensorFlow
  • Participate in testing and improving model performance
  • Collaborate with development teams on AI-based applications
  • Document project workflows and research findings
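The train-then-evaluate loop interns will practise can be sketched with a toy nearest-centroid classifier — library-free and on an invented dataset; real projects would use Scikit-learn and held-out test data.

```python
def centroid(points):
    """Mean position of a list of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid_fit(X, y):
    """Compute one centroid per class label."""
    return {label: centroid([x for x, t in zip(X, y) if t == label]) for label in set(y)}

def predict(model, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Toy 2-D dataset: two well-separated clusters
X_train = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.3], [5.1, 5.0], [4.9, 5.2], [5.0, 4.8]]
y_train = [0, 0, 0, 1, 1, 1]

model = nearest_centroid_fit(X_train, y_train)
accuracy = sum(predict(model, x) == t for x, t in zip(X_train, y_train)) / len(y_train)
print(accuracy)
```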


Required Skills



  • Basic knowledge of Python programming
  • Understanding of Machine Learning concepts
  • Familiarity with data analysis and statistics
  • Basic experience with libraries such as Pandas, NumPy, or Scikit-learn
  • Interest in Artificial Intelligence technologies
  • Good analytical and problem-solving skills


Preferred Qualifications

  • Students or recent graduates in Computer Science, Artificial Intelligence, Data Science, or related fields
  • Basic knowledge of Deep Learning or Neural Networks
  • Familiarity with TensorFlow, PyTorch, or similar frameworks is a plus
  • Understanding of data visualization tools is beneficial
  • Experience with Git or version control systems is an advantage


What Interns Will Gain

  • Hands-on experience working on AI and machine learning projects
  • Exposure to real-world datasets and model development
  • Opportunity to build a strong AI project portfolio
  • Mentorship from experienced developers and data scientists
  • Internship completion certificate based on performance and participation


Read more
CK-12 Foundation

at CK-12 Foundation

1 video
7 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 10 yrs
Upto ₹70L / yr (Varies)
Natural Language Processing (NLP)
Transformer
skill iconMachine Learning (ML)
skill iconPython

About CK-12 Foundation

CK-12’s mission is to provide free access to open-source content and technology tools that empower both students and teachers to enhance learning across different styles, resources, competence levels, and circumstances.


To achieve this ambitious vision, CK-12 challenges the traditional education model by leveraging technology to revolutionize learning for students, teachers, and parents.


CK-12 operates as a non-profit organization so it can experiment with bold ideas and focus on doing the right thing for education. The organization is backed by Vinod Khosla, a renowned technology venture capitalist.


At CK-12, you’ll work in a dynamic, entrepreneurial, and innovative environment where passionate individuals collaborate to disrupt traditional education through technology.


Technology is at the heart of scaling education, and CK-12 builds solutions on a cloud-based (AWS) and AI-first platform delivering rich and interactive learning experiences.


If you are a great technologist who enjoys challenging the status quo and building innovative products, this could be the place for you.


Together, we aim to transform education globally.

Product Offerings

Flexi 2.0 – AI-Powered Student Tutor

https://www.flexi.org/

AI-Powered Teacher Assistant

https://www.ck12.org/pages/teacher-assistant/


Core Responsibilities


• Translate high-level directions and open-ended product ideas into deliverable ML projects and drive their completion.

• Architect and implement highly scalable ML solutions for systems such as multimodal information retrieval, conversational chatbots, recommender systems, and ranking systems.

• Own end-to-end product delivery from research and experimentation to production deployment.

• Work closely with cross-functional teams including Product, Engineering, DevOps, QA, and Content teams.

• Manage ML workflows involving data gathering, working with annotators, and collaborating with ML researchers.

• Extract and analyze large volumes of data to generate insights about student and teacher behavior based on platform usage.

• Design and build innovative ML-driven solutions that can improve learning experiences in the EdTech space.

• Apply statistical hypothesis testing and experimentation to evaluate and improve models.

• Continuously innovate and challenge the traditional approach to education through ML solutions.
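The hypothesis-testing responsibility above can be illustrated with a two-proportion z-test, implemented via the standard library's `NormalDist`. The experiment and counts are hypothetical, and the normal approximation assumes large samples; small samples would call for exact tests.

```python
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided p-value for a difference in proportions (large-sample approximation)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical A/B test: control tutor prompt (A) vs. new prompt (B), success = task completed
p_value = two_proportion_z(120, 1000, 155, 1000)
print(f"p = {p_value:.4f}")
```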


Requirements


• Bachelor’s degree or higher in Computer Science or a related quantitative discipline, or equivalent practical experience.

• 4+ years of hands-on development experience with strong programming skills, preferably in Python.

• Expertise in deep learning approaches for NLP including transformer-based models, predictive modeling, search and recommendation systems, and autoregressive models.

• 2+ years of experience in NLP applications such as information retrieval, chatbots, summarization, or generative models.

• Proven experience building scalable ML applications on cloud infrastructure such as AWS, GCP, or Azure.

• Strong understanding of trade-offs between model architecture, deployment costs, and model accuracy.

• Ability to manage multiple tasks and collaborate effectively with geographically distributed teams.

• Up-to-date knowledge of advancements in NLP and computer vision and the ability to apply them in the education domain.


Technical Skills

• Python, PyTorch, TorchServe

• Pandas

• SQL and NoSQL databases such as MySQL, MongoDB, Redis, and Redshift

• Cloud infrastructure (AWS / GCP / Azure)

• Vector databases and search technologies such as Elasticsearch

• Linux


Nice to Have

• Familiarity with Reinforcement Learning

• Experience with Deep Knowledge Tracing

Read more
Koramangala
10 - 12 yrs
₹24L - ₹36L / yr
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Robotics
Process automation
skill iconData Analytics
+1 more

Job Title:

AI Native Operations Expert – Director / AVP / VP

Company: EOSGlobe

CTC: ₹24 – ₹36 LPA

Open Positions: 3

Experience: 12 – 18 Years

Joining: Immediate Joiners Preferred


Role Overview

EOSGlobe is transforming into an AI-First organization and is looking for an AI Native Operations Expert to lead this transformation. The role focuses on driving automation, process re-engineering, and AI adoption across BPM operations to improve efficiency, scalability, and business impact.


Key Responsibilities

Lead AI-driven transformation initiatives across BPM operations.

Re-engineer processes using Artificial Intelligence, Machine Learning, and automation tools.

Collaborate with leadership and strategy teams to implement AI-first operational models.

Define and track KPIs, productivity metrics, and financial impact of transformation initiatives.

Partner with internal teams and clients to demonstrate AI-driven efficiency and revenue growth.

Identify opportunities for process automation and digital adoption across operations.

Required Skills

Strong expertise in Artificial Intelligence (AI), Machine Learning (ML), and RPA.

Experience in process transformation and digital automation initiatives.

Deep understanding of BPM operations and service delivery models.

Strong leadership and stakeholder management skills.

Analytical mindset with ability to measure financial impact and operational KPIs.


Preferred Qualifications

Experience leading large-scale automation or AI transformation projects.

Exposure to BPM, consulting, or operations leadership roles.

Excellent communication and strategic thinking skills.

Read more
NeoGenCode Technologies Pvt Ltd
Chandigarh
10 - 15 yrs
₹10L - ₹16L / yr
skill iconPHP
skill iconNodeJS (Node.js)
skill iconLaravel
MySQL
skill iconMongoDB
+7 more

Job Title : Principal Backend Engineer (AI-Driven)

Experience : 10+ Years

Location : Chandigarh

Tech Stack : PHP, Node.js, Laravel, MySQL, MongoDB

Additional Requirement : Hands-on experience with AI technologies, APIs, or ML integrations


Role Overview :

We're looking for a Principal Backend Engineer (AI-Driven) to design and lead scalable backend systems while driving AI adoption across products.

The role involves integrating AI-powered features, architecting intelligent systems, and mentoring engineering teams on modern backend and AI implementation.


Key Responsibilities :

  • Design and lead backend architecture using PHP (Laravel/CodeIgniter) and Node.js
  • Build scalable microservices / modular backend systems
  • Develop APIs and backend workflows for AI-driven features
  • Integrate AI APIs (OpenAI, LangChain or similar frameworks)
  • Work with LLMs, embeddings, vector databases, and AI pipelines
  • Ensure performance, scalability, and security of backend systems
  • Mentor engineering teams and drive backend + AI best practices

Requirements :

  • 10+ years of backend development experience
  • Strong expertise in PHP / Node.js, MySQL, MongoDB
  • Hands-on experience integrating AI/ML APIs or AI-powered features
  • Strong system design and architecture skills
  • Experience leading engineering teams

Good to Have :

  • Prompt engineering or AI cost optimization
  • Exposure to MLOps / ML pipelines
Read more
Mumbai
3 - 6 yrs
₹20L - ₹28L / yr
neural networks
skill iconDeep Learning
XGBoost
Linear regression
Long short-term memory (LSTM)
+5 more

What You’ll Do

● Partner with Product to spot high-leverage ML opportunities tied to business metrics.

● Wrangle large structured and unstructured datasets; build reliable features and data contracts.

● Build and ship models to:

○ Enhance customer experiences and personalization

○ Boost revenue via pricing/discount optimization

○ Power user-to-user discovery and ranking (matchmaking at scale)

○ Detect and block fraud/risk in real time

○ Score conversion/churn/acceptance propensity for targeted actions

● Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.

● Design and run A/B tests with guardrails.

● Build monitoring for model/data drift and business KPIs.
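Drift monitoring of the kind listed above is often done with the Population Stability Index (PSI) over binned feature distributions. The sketch below uses invented bin counts; the 0.1/0.25 thresholds are common rules of thumb, not hard limits.

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between two binned distributions."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)  # floor to avoid log(0)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

# Binned feature distribution at training time vs. this week's traffic (illustrative)
train_bins = [100, 300, 400, 150, 50]
live_bins  = [80, 250, 380, 200, 90]

drift = psi(train_bins, live_bins)
print(f"PSI = {drift:.3f}")  # rule of thumb: <0.1 stable, 0.1-0.25 moderate, >0.25 major
```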


What We’re Looking For

● 2–4 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.

● Proven, hands-on success in at least two (preferably 3–4) of the following:

○ Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)

○ Fraud/risk detection (severe class imbalance, PR-AUC)

○ Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)

○ Propensity models (payment/churn)

● Programming: strong Python and SQL; solid git, Docker, CI/CD.

● Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).

● ML breadth: recommender systems, NLP or user profiling, anomaly detection.

● Communication: clear storytelling with data; can align stakeholders and drive decisions.


Read more
REConnect Energy

at REConnect Energy

4 candid answers
2 recruiters
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
4.5 - 7 yrs
Upto ₹30L / yr (Varies)
skill iconPython
MLOps
skill iconMachine Learning (ML)
SQL
skill iconAmazon Web Services (AWS)

About Us:

REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.


We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore-based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences, and AI.


Responsibilities:

● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.

● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.

● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.

● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime. 


Requirements:

● 4-5 years of experience building highly available systems

● 2-3 years experience leading a team of engineers and analysts

● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent

● Proficient in Python, with expertise in data engineering and machine learning deployment

● Experience in databases including MySQL and NoSQL

● Experience in developing and maintaining critical and high availability systems will be given strong preference

● Experience in software design using design principles and architectural modeling.

● Experience working with AWS cloud platform.

● Strong analytical and data driven approach to problem solving 

Read more
WINIT
Aishwarya SURENDRAN
Posted by Aishwarya SURENDRAN
Hyderabad
4 - 9 yrs
₹10L - ₹30L / yr
MLOps
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Generative AI
scaling

Role Overview

We are seeking an experienced Machine Learning Engineer to design, develop, deploy, and scale machine learning solutions in production environments. The ideal candidate will have strong expertise in model development, MLOps, backend integration, and scalable ML system architecture.

Key Responsibilities

  • Design, train, and deploy machine learning and deep learning models for real-world applications.
  • Build and scale ML models and pipelines to handle large datasets and high-traffic production workloads.
  • Perform data exploration, preprocessing, feature engineering, and dataset optimization.
  • Develop and integrate APIs for ML model consumption using frameworks such as FastAPI, Flask, or Django.
  • Implement end-to-end ML workflows, including experimentation, versioning, deployment, and monitoring.
  • Build scalable data pipelines and processing workflows for batch and real-time use cases.
  • Monitor model performance, drift, and system reliability in production.
  • Collaborate with data engineers, DevOps, and product teams to productionize ML solutions.
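The batch side of the pipelines described above reduces to a simple pattern: stream records and process them in fixed-size chunks. The sketch below is a hypothetical minimal version; the `transform` step stands in for real feature engineering or enrichment.

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive fixed-size batches from any iterable."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def transform(record):
    # Placeholder transformation step (e.g. scaling, enrichment)
    return record * 2

records = range(10)
results = [sum(transform(r) for r in chunk) for chunk in batched(records, 4)]
print(results)
```

Processing in chunks bounds memory use, so the same code path scales from a toy range to millions of rows streamed from a database cursor or queue.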

Required Skills & Qualifications

  • 5+ years of experience in Machine Learning Engineering or related roles.
  • Proven experience in training, developing, deploying, and scaling ML models in production.
  • Strong knowledge of data exploration, preprocessing, and feature engineering techniques.
  • Proficiency in Python and ML/Data Science libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch.
  • Experience with API development and backend integration (FastAPI, Flask, Django).
  • Solid understanding of MLOps tools and practices, including MLflow, Kubeflow, Airflow, Prefect, Docker, and Kubernetes.
  • Experience building scalable data pipelines, processing workflows, and monitoring deployed models.
Read more
Generative AI Persona platform

Generative AI Persona platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹15L - ₹20L / yr
skill iconMachine Learning (ML)
skill iconPython
ETL
skill iconData Science
ELT
+6 more

Description

We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years’ experience).

 

Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)

 

Kindly note:

  • Location: Pune (Work From Office)
  • Immediate joiners preferred

 

While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort

 

Must have skills

Machine Learning - 6 years

Python - 6 years

ETL (Extract, Transform, Load) - 6 years

SQL - 6 years

Azure - 6 years

 

Read more
Business Intelligence & Digital Consulting company

Business Intelligence & Digital Consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹14L - ₹16L / yr
skill iconPython
skill iconData Science
skill iconMachine Learning (ML)
SQL
skill iconData Analytics
+6 more

Description

JOB DESCRIPTION – SENIOR ANALYST – DATA SCIENTIST 

 

Key Responsibilities

• Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions

• Apply advanced statistical/programming skills in Python and data querying languages (e.g., SQL, Hadoop/Hive, Scala)

• Solid understanding of time-series forecasting techniques

• Good hands-on skills in both feature engineering and hyperparameter optimization

• Able to write clean and tested code that can be maintained by other software engineers

• Able to clearly summarize and communicate data analysis assumptions and results

• Able to craft effective data pipelines to transform analyses from offline to production systems

• Self-motivated and a proactive problem solver who can work independently and in teams

• Connects both externally and internally to understand industry trends, technology advances, and outstanding processes or solutions

• Is collaborative and engages both strategically and tactically; able to influence without authority, handle complex issues, and implement positive change

• Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science

• Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment

• Provide guidance and leadership to more junior data scientists, managing processes and flow of work, vetting designs, and mentoring team members to realize their full potential

• Lead discussions at peer review and use interpersonal skills to positively influence decision making

• Provide subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices

• Facilitate cross-geography sharing of new ideas, learnings, and best practices

 

What We Are Looking For

Required Qualifications

• Master's degree in a quantitative field such as Data Science, Statistics, or Applied Mathematics, or Bachelor's degree in engineering, computer science, or a related field

• 4–6 years of total work experience as a data scientist or in an analytical role, with at least 2–3 years of experience in time series forecasting

• A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to quickly cycle hypotheses through the discovery phase of a project

• Strong experience in Time Series Forecasting and Demand Planning

• Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)

• Good hands-on skills in both feature engineering and hyperparameter optimization

• Experience producing high-quality code, tests, and documentation

• Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization and forecasting techniques, and/or deep learning methodologies

• Proficiency in statistical concepts and ML algorithms

• Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team

• Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results

• Self-motivated and a proactive problem solver who can work independently and in teams

• Outstanding verbal and written communication skills with the ability to effectively advocate technical solutions to engineering and business teams


Desired Qualifications

• Experience working in one or multiple supply chain functions (e.g., procurement, planning, manufacturing, quality, logistics) is strongly preferred

• Experience applying AI/ML within a CPG or Healthcare business environment is strongly preferred

• Experience creating CI/CD pipelines for deployment using Jenkins

• Experience implementing an MLOps framework, along with an understanding of data security

• Implementation of ML models

• Exposure to visualization packages and the Azure tech stack

 

Must have skills

Python - 2 years

Data Science - 4 years

SQL - 2 years

Machine Learning - 2 years

 

Nice to have skills

Data Analysis - 4 years

Time Series Forecasting - 2 years

Demand Planning - 2 years

Hadoop - 2 years

Statistical concepts - 2 years

Supply chain functions - 2 years

Read more
Ampera Technologies
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹40L / yr
Generative AI
Large Language Models (LLM)
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
MLOps

Title: Senior AI/ML Engineer

Experience: 5 – 10+ Yrs

Location: Bengaluru

Work Type: Hybrid – 2 days work from office

Type of hire: PwD & Non-PwD inclusive hiring

Employment Type: Full Time

Notice Period: Immediate joiner

Workdays: Mon – Fri

 

 

Role Overview

We are seeking an exceptional AI Engineer who can design and build production-grade AI systems that combine advanced machine learning, Generative AI, and scalable software engineering.

This role goes beyond traditional data science and focuses on building end-to-end AI platforms, autonomous AI agents, intelligent decision systems, and enterprise AI applications.

You will work on real-world enterprise problems across industries, developing AI systems that automate reasoning, prediction, and decision-making at scale.

 

What You Will Build

Examples of systems you may work on:

• AI Copilots for enterprise workflows

• Autonomous AI agents for automation

• Decision intelligence platforms

• Retrieval-Augmented Generation (RAG) systems

• Predictive ML systems for forecasting and anomaly detection

• AI-powered knowledge assistants

• Intelligent automation platforms

 

Key Responsibilities

1. Advanced Machine Learning & Predictive Systems

Design and implement ML models including:

• Time series forecasting

• Predictive modeling

• Anomaly detection

• Recommendation systems

• NLP / text intelligence

• Deep learning models

Develop models using:

• PyTorch

• TensorFlow

• Scikit-learn

• XGBoost / LightGBM

 

2. Generative AI & LLM Systems

Build enterprise-grade GenAI applications including:

• AI copilots

• conversational agents

• document intelligence systems

• enterprise knowledge assistants

Develop LLM systems using:

• OpenAI / Claude / Gemini / Llama

• prompt engineering techniques

• embeddings and semantic search

• RAG architectures
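As a minimal illustration of the RAG architectures listed above: retrieve the most relevant context, then ground the model's prompt in it. Everything here (the corpus, the keyword-overlap scoring, the prompt template) is an illustrative assumption; a production system would use embeddings, a vector store, and a real LLM call:

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch.
# Retrieval here is naive keyword overlap; a production system
# would use embeddings and a vector database instead.

CORPUS = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 business days.",
}


def retrieve(question: str, top_k: int = 1) -> list:
    """Score each document by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str) -> str:
    """Ground the prompt in retrieved context before calling an LLM."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )


prompt = build_prompt("How long do refunds take?")
```

In a real pipeline the returned prompt would then be sent to an LLM API (OpenAI, Claude, Gemini, Llama, etc.), with retrieval backed by one of the vector databases named later in this posting.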

 

3. Agentic AI Systems

Design autonomous AI systems capable of reasoning and executing tasks.

Build multi-agent architectures using:

• LangGraph

• CrewAI

• AutoGen

• Semantic Kernel

Integrate agents with:

• APIs

• enterprise data systems

• internal workflows

 

4. AI Platform Engineering

Develop scalable AI services and applications using:

• Python

• FastAPI / Flask

• asynchronous processing

• distributed compute frameworks

Build production-grade APIs and AI services.
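The asynchronous processing mentioned above can be sketched with the standard library alone; the `score` coroutine below is a stand-in for a real I/O-bound model or LLM call, and a FastAPI endpoint would wrap the same fan-out pattern:

```python
import asyncio


async def score(item: str) -> dict:
    # Stand-in for a real I/O-bound model or LLM call.
    await asyncio.sleep(0)  # yield control, as a network call would
    return {"item": item, "score": len(item)}


async def score_batch(items: list) -> list:
    # Fan the calls out concurrently instead of awaiting them one by one.
    return await asyncio.gather(*(score(i) for i in items))


results = asyncio.run(score_batch(["alpha", "beta"]))
```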

 

5. Enterprise AI Deployment & MLOps

Deploy AI models into scalable production environments.

Work with:

• Docker

• Kubernetes

• CI/CD pipelines

• MLflow / experiment tracking

• model monitoring and drift detection

Deploy AI solutions on:

• Azure

• AWS

• GCP
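The model monitoring and drift detection listed above is commonly implemented with a distribution-shift metric such as the Population Stability Index (PSI). A self-contained sketch follows; the data and bin count are illustrative, and the usual rule of thumb is that PSI above roughly 0.2 signals meaningful drift:

```python
import math


def psi(expected: list, actual: list, bins: int = 4) -> float:
    """Population Stability Index between two score distributions."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(values: list) -> list:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty buckets to avoid log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
identical = psi(baseline, baseline)  # no drift when distributions match
```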

 

6. Data Integration & AI Systems

Work with enterprise data sources including:

• relational databases

• data warehouses (Snowflake, Redshift, BigQuery)

• data lakes (S3 / Azure Data Lake)

• vector databases (Pinecone, Weaviate, FAISS)
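At their core, the vector databases listed above rank stored embeddings by similarity to a query embedding. A minimal cosine-similarity version, with toy 3-dimensional vectors standing in for real embeddings, looks like:

```python
import math


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def top_k(query: list, index: dict, k: int = 1) -> list:
    """Return the k document ids whose embeddings are closest to the query."""
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]), reverse=True)
    return ranked[:k]


index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.7, 0.7, 0.0],
}
nearest = top_k([0.9, 0.1, 0.0], index)
```

Libraries such as FAISS and services such as Pinecone or Weaviate perform the same ranking with approximate-nearest-neighbor indexes that scale to millions of vectors.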

 

Required Skills:

Programming

Expert-level proficiency in:

• Python

• software engineering best practices

• data structures and algorithms

Experience building production-ready systems.

 

Machine Learning

Strong expertise in:

• supervised learning

• unsupervised learning

• deep learning

• time-series modelling

• model evaluation and optimization

 

Generative AI

Experience working with:

• LLM APIs

• prompt engineering

• RAG pipelines

• embeddings and vector search

 

AI Architecture

Ability to design:

• scalable AI systems

• distributed ML systems

• intelligent automation platforms

 

Preferred Experience

• Building enterprise AI products

• Developing AI copilots or agents

• Designing decision intelligence platforms

• Experience with large-scale data systems

 

Ideal Candidate Profile

The ideal candidate is:

• A strong ML engineer AND software engineer

• Comfortable building AI systems end-to-end

• Experienced in deploying models to production

• Passionate about next-generation AI architectures

We value builders who ship real systems, not just research prototypes.

 

Education

Bachelor’s / Master’s in:

Computer Science

Artificial Intelligence

Machine Learning

Data Science

or related field.

 

Why Join Ampera

At Ampera, we are building AI-native enterprise platforms that transform how organizations use data and intelligence.

Engineers at Ampera work on:

• real-world enterprise AI systems

• cutting-edge GenAI and agentic architectures

• global enterprise clients across industries

• high-impact AI platforms that scale.

 

What Makes This Role Unique

You will help build the next generation of enterprise AI systems — where AI moves beyond prediction and becomes an autonomous decision-making layer for organizations.

 

About Ampera:

Ampera Technologies is a purpose-driven digital IT services company focused on supporting clients with their Data, AI/ML, Accessibility, and other digital IT needs. We also ensure that equal opportunities are provided to Persons with Disabilities talent. Ampera Technologies has its global headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our tech delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) hybrid and remote work options, 2) the opportunity to work directly with our global enterprise clients, 3) the opportunity to learn and implement evolving technologies, 4) comprehensive healthcare, and 5) a conducive environment for Persons with Disability talent, meeting physical and digital accessibility standards.

 

 

Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

MyOperator - VoiceTree Technologies
Posted by Vijay Muthu
Noida
2 - 5 yrs
₹7L - ₹10L / yr
Prompt engineering
Large Language Models (LLM)
Debugging
Integration
Machine Learning (ML)

About MyOperator

MyOperator is a Business AI Operator and category leader that unifies WhatsApp, Calls, and AI-powered chatbots & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance, all from a single no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement.


Role Summary

We’re hiring a Front Deployed Engineer (FDE)—a customer-facing, field-deployed engineer who owns the end-to-end delivery of AI bots/agents.

This role is “frontline”: you’ll work directly with customers (often onsite), translate business reality into bot workflows, do prompt engineering + knowledge grounding, ship deployments, and iterate until it works reliably in production.

Think: solutions engineer + implementation engineer + prompt engineer, with a strong bias for execution.


Responsibilities


Requirement Discovery & Stakeholder Interaction

  • Join customer calls alongside Sales and Revenue teams.
  • Ask targeted questions to understand business objectives, user journeys, automation expectations, and edge cases.
  • Identify data sources (CRM, APIs, Excel, SharePoint, etc.) required for the solution.
  • Act as the AI subject-matter expert during client discussions.

Use Case & Solution Documentation

  • Convert discussions into clear, structured use case documents, including:
  • Problem statement & goals.
  • Current vs. proposed conversational flows.
  • Chatbot conversation logic, integrations, and dependencies.
  • Assumptions, limitations, and success criteria.

Customer Delivery Ownership

Own deployment of AI bots for customer use-cases (lead qualification, support, booking, etc.). Run workshops to capture processes, FAQs, edge cases, and success metrics. Drive the go-live process, from requirements through monitoring and improvement.


Prompt Engineering & Conversation Design

Craft prompts, tool instructions, guardrails, fallbacks, and escalation policies for stable behavior. Build structured conversational flows: intents, entities, routing, handoff, and compliant responses. Create reusable prompt patterns and "prompt packs."


Testing, Debugging & Iteration

Analyze logs to find failure modes (misclassification, hallucination, poor handling). Create test sets ("golden conversations"), run regressions, and measure improvements. Coordinate with Product/Engineering for platform needs.
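The golden-conversation regression described above can be sketched as a small harness; the classifier stub and test set below are illustrative placeholders for a real bot:

```python
# A golden set: known inputs with the intent the bot must produce.
GOLDEN = [
    ("I want my money back", "refund"),
    ("Where is my order?", "order_status"),
    ("Talk to a human", "escalate"),
]


def classify(text: str) -> str:
    # Stand-in for the real bot / LLM intent classifier.
    lowered = text.lower()
    if "money back" in lowered or "refund" in lowered:
        return "refund"
    if "order" in lowered:
        return "order_status"
    return "escalate"


def run_regression(golden: list) -> float:
    """Return the pass rate; track this metric release over release."""
    passed = sum(1 for text, expected in golden if classify(text) == expected)
    return passed / len(golden)


pass_rate = run_regression(GOLDEN)
```

Any prompt, workflow, or knowledge-base change can then be gated on the pass rate not regressing.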


Integrations & Technical Coordination

Integrate bots with APIs/webhooks (CRM, ticketing, internal tools) to complete workflows. Troubleshoot production issues and coordinate fixes/root-cause analysis.


What Success Looks Like

  • Customer bots go live quickly and show high containment + high task completion with low escalation.
  • You can diagnose failures from transcripts/logs and fix them with prompt/workflow/knowledge changes.
  • Customers trust you as the “AI delivery owner”—clear communication, realistic timelines, crisp execution.

Requirements (Must Have)

  • 2–5 years in customer-facing delivery roles: implementation, solutions engineering, customer success engineering, or similar.
  • Hands-on comfort with LLMs and prompt engineering (structured outputs, guardrails, tool use, iteration).
  • Strong communication: workshops, requirement capture, crisp documentation, stakeholder management.
  • Technical fluency: APIs/webhooks concepts, JSON, debugging logs, basic integration troubleshooting.
  • Willingness to be front deployed (customer calls/visits as needed).

Good to Have (Nice to Have)

  • Experience with chatbots/voicebots, IVR, WhatsApp automation, conversational AI platforms with at least a couple of projects. 
  • Understanding of metrics like containment, resolution rate, response latency, CSAT drivers.
  • Prior SaaS onboarding/delivery experience in mid-market or enterprises.

Working Style & Traits We Value

  • High agency: you don’t wait for perfect specs—you create clarity and ship.
  • Customer empathy + engineering discipline.
  • Strong bias for iteration: deploy → learn → improve.
  • Calm under ambiguity (real customer environments are chaotic by default).


Quanteon Solutions
Posted by DurgaPrasad Sannamuri
Hyderabad
6 - 10 yrs
₹15L - ₹25L / yr
React JS
NodeJS (Node.js)
TypeScript
Javascript
React Native

Job Title: Tech Lead

Location: Gachibowli, Hyderabad


Required Skills/Experience:

• 6+ years of experience in designing and developing enterprise and/or consumer-facing applications using technologies and frameworks like JavaScript, Node.js, ReactJS, Angular, SCSS, CSS, and React Native.

• 2+ years of experience in leading teams (guiding, designing, and tracking tasks) and taking responsibility for delivering projects as per agreed schedules.

• Hands-on experience with SQL and NoSQL databases.

• Hands-on experience working in Linux OS environments.

• Strong debugging, troubleshooting, and problem-resolution skills.

• Experience in developing responsive and scalable web applications.

• Good communication skills (verbal and written) to effectively interact with customers and internal teams.

• Ability and interest in learning new technologies and adapting to evolving technical requirements.

• Experience working in the complete product development lifecycle (prototyping, development, hardening, testing, and deployment).

• Exposure to AI/ML concepts and ability to integrate AI-based features into applications.

• Experience using AI tools such as ChatGPT, GitHub Copilot, Gemini, or similar tools for improving development productivity, automation, and documentation.


Additional Skills/Experience:

• Working experience with Python and NoSQL databases such as MongoDB and Cassandra.

• Exposure to AI, Machine Learning (ML), Natural Language Processing (NLP), and Predictive Analytics domains.

• Familiarity with modern AI frameworks or APIs and experience integrating AI-powered capabilities into applications is a plus.

• Eagerness to participate in product functional design and user experience discussions.

• Familiarity with internationalization (i18n) and the latest trends in UI/UX design.

• Experience implementing payment gateways applicable across different countries.

• Experience with CI/CD pipelines and tools such as Jenkins, Nginx, and related DevOps practices.


Educational Qualification:

• B.Tech / M.Tech in Computer Science Engineering (CSE), Information Technology (IT), Electronics & Communication Engineering (ECE), Artificial Intelligence (AI), Machine Learning (ML), or Data Science (DS) from a recognized university.

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Mumbai
0 - 4 yrs
₹1L - ₹9L / yr
Artificial Intelligence (AI)
Large Language Models (LLM)
AI Agents
Generative AI
Machine Learning (ML)

Job Title : AI Analyst (Fresher / Associate)

Experience : 0 to 3 Years

Location : Andheri West, Mumbai (Onsite)

Reporting To : AI Architect

Employment Type : Full-Time


About the Role :

We are hiring an AI Analyst to work with enterprise clients on the assessment, design, and validation of AI systems. This is a hands-on role at the intersection of business, technology, and responsible AI, focused on building production-ready, scalable, and governed AI solutions aligned with real business outcomes.


Mandatory Skills :

Artificial Intelligence (AI), Large Language Models (LLM), AI Agents, Generative AI, Machine Learning basics, Python, Prompt Engineering, Analytical Thinking.


Key Responsibilities :

  • Review existing AI workflows, agents, and LLM usage to identify risks, gaps, and inefficiencies.
  • Support the design of AI agent workflows aligned with business requirements.
  • Help implement AI guardrails, governance frameworks, and safety mechanisms.
  • Design evaluation and validation frameworks to test accuracy, reliability, and cost efficiency.
  • Support AI pilot launches and production readiness.
  • Communicate AI system behavior and insights to technical and non-technical stakeholders.

Required Skills :

  • Strong analytical and systems thinking.
  • Exposure to LLMs, AI agents, or AI workflows.
  • Ability to translate business requirements into AI solutions.
  • Good problem-solving and communication skills.
  • Comfortable working in fast-paced environments.

Preferred :

  • Consulting or client-facing experience.
  • Exposure to enterprise AI deployments or regulated environments.

Education :

  • Degree in Computer Science, Engineering, AI, or Data Science preferred.
  • Strong practical AI skills are also valued.


Why Join Us :

  • Work on real-world AI systems with enterprise clients, gain exposure to production AI and responsible AI deployment, and build a strong foundation in Applied AI and AI Systems Architecture.
Remote only
0 - 0 yrs
₹0 / mo
HTML/CSS
Javascript
Java
Python
Machine Learning (ML)

Roles:

- Working on full-stack development (both front-end and back-end)

- Working on any one of the following technologies:

• Java Application Programming

• Web Development with PHP

• Python Application Programming with Django

• Machine Learning

• Data Science

• Artificial Intelligence

• Cyber Security


Eligibility: BCA/MCA 2026/2027 students can apply


Duration: 1-6 months


Perks:

Internship Experience Certificate

Letter of Recommendation


Mode of internship: Online/Offline

Digital solutions and services company
Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹17L - ₹23L / yr
Machine Learning (ML)
Python
ETL
Data Science
SQL

Data Scientist or Senior Machine Learning Engineer


We are currently hiring for the position of Data Scientist / Senior Machine Learning Engineer (6–7 years' experience).


Please find the detailed Job Description attached for your reference.

We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)


Kindly note:

  • Location: Pune (Work from Office)
  • Immediate joiners preferred


While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort


Must have Skills

  • Machine Learning - 6 Years
  • Python - 6 Years
  • ETL (Extract, Transform, Load) - 6 Years
  • SQL - 6 Years
  • Azure - 6 Years


Request you to share relevant profiles at the earliest. Looking forward to your support.

Wipro
Agency job
via Talent Divas Consulting Pvt Ltd by Rasshmi Manjunath
Bengaluru (Bangalore)
8 - 10 yrs
₹25L - ₹40L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
PyTorch
TensorFlow

Job Responsibilities

  • Help develop an analytics platform that integrates insights from diverse data sources.
  • Build, deploy, and test machine learning and classification models
  • Train and retrain systems when necessary
  • Design experiments, train and track performance of machine learning and AI models that meet specific business requirements
  • ML and data labelling: Identify ways to gather and build training data with automated data labelling techniques and the creation of highly accurate training datasets.
  • Automatic extraction of causal knowledge from diverse information sources such as databases, news, social media, videos and images etc.
  • Develop customized machine learning solutions including data querying and knowledge extraction.
  • Develop and implement approaches for extracting patterns and correlations from both internal and external data sources using time series machine learning models.
  • Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
  • Distil insights from complex data, communicating findings to technical and non-technical audiences.
  • Develop, improve, or expand in-house computational pipelines, algorithms, models, and services used in crop product development.
  • Constantly document and communicate results of research on data mining, analysis, and modelling approaches.

Requirements

  • 5+ years of experience working with NLP and ML technologies.
  • Proven experience as a Machine Learning Engineer or similar role.
  • Experience with NLP/ML frameworks and libraries       
  • Proficient with Python scripting language
  • Background in machine learning frameworks like TensorFlow, PyTorch, Scikit Learn, etc.
  • Demonstrated experience developing and executing machine learning, deep learning, data mining and classification models. Conversant with the latest in NLP and NLU models including transformer architectures and in creating explainable AI
  • Ability to communicate the advantages and disadvantages of choosing specific models to various stakeholders
  • Proficient with relational databases and SQL. Ability to write efficient queries and optimize the storage and retrieval of data within the database. Experience with creating and working on APIs, Serverless architectures and Containers.
  • Creative, innovative, and strategic thinking; willingness to be bold and take risks on new ideas.
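One simple form of the time-series pattern extraction mentioned above is a rolling-mean baseline with z-score anomaly flags. A library-free sketch follows; the window size, threshold, and data are illustrative:

```python
import statistics


def rolling_anomalies(series: list, window: int = 3, z: float = 3.0) -> list:
    """Flag indices deviating more than z std-devs from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.fmean(past)
        stdev = statistics.pstdev(past) or 1e-9  # avoid div-by-zero on flat windows
        if abs(series[i] - mean) / stdev > z:
            flagged.append(i)
    return flagged


series = [10.0, 10.2, 9.9, 10.1, 10.0, 50.0, 10.1]
anomalies = rolling_anomalies(series)  # the spike at index 5 is flagged
```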

Mandatory Skills: Artificial Intelligence (AI)

Experience: 8-10 Years

Nexuslink Services India Pvt ltd
Remote only
3 - 8 yrs
₹12L - ₹30L / yr
Machine Learning (ML)
Deep Learning
Large Language Models (LLM)
Computer Vision
OpenCV

Job Summary


We are looking for a Data Scientist – AI/ML who has hands-on experience in building, training, fine-tuning, and deploying machine learning and deep learning models. The ideal candidate should be comfortable working with real-world datasets, collaborating with cross-functional teams, and communicating insights and solutions to clients.


Experience: Fresher to 5 Years

Location: Ahmedabad 

Employment Type: Full-Time


Key Responsibilities


Develop, train, and optimize Machine Learning and Deep Learning models


Perform data cleaning, preprocessing, and feature engineering


Fine-tune ML/DL models to improve accuracy and performance


Deploy models into production using APIs or cloud platforms


Monitor model performance and retrain models as required


Work closely with clients to understand business problems and translate them into AI/ML solutions


Present findings, model outcomes, and recommendations to stakeholders


Collaborate with data engineers, developers, and product teams


Document workflows, models, and deployment processes


 Required Skills & Qualifications


Strong understanding of Machine Learning concepts (Supervised, Unsupervised learning)


Hands-on experience with ML algorithms (Linear/Logistic Regression, Decision Trees, Random Forest, XGBoost, etc.)


Experience with Deep Learning frameworks (TensorFlow / PyTorch / Keras)


Proficiency in Python and AI/ML libraries (NumPy, Pandas, Scikit-learn)


Experience in model deployment using Flask/FastAPI, Docker, or cloud platforms (AWS/GCP/Azure)


Understanding of model fine-tuning and performance optimization


Basic knowledge of SQL and data handling


Good client communication and documentation skills


 Good to Have


Experience with NLP, Computer Vision, or Generative AI


Exposure to MLOps tools (MLflow, Airflow, CI/CD pipelines)


Experience working on live or client-based AI projects


Kaggle, GitHub, or portfolio showcasing AI/ML projects


 Education


Bachelor’s / Master’s degree in Computer Science, Data Science, AI/ML, or related field


Relevant certifications or project experience will be an added advantage


 What We Offer


Opportunity to work on real-world AI/ML projects


Mentorship from experienced AI/ML professionals


Career growth in Data Science & Artificial Intelligence


Collaborative and learning-driven work culture

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
2 - 5 yrs
Best in industry
Data Structures
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
Scikit-Learn

About NonStop io Technologies

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We're seeking an AI/ML Engineer to join our team. As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate applied AI/ML solutions into the products being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques and implementing scalable solutions to drive innovation and improve the overall user experience.


Responsibilities

● Applied AI/ML engineering; Building engineering solutions on top of the AI/ML tooling available in the industry today. Eg: Engineering APIs around OpenAI

● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.

● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data

● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics

● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics

● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems

● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes

● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions

● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
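The evaluation metrics named under Model Training and Evaluation above (accuracy, precision, recall) all derive from confusion-matrix counts; a small self-contained sketch with illustrative labels:

```python
def evaluate(y_true: list, y_pred: list) -> dict:
    """Binary-classification accuracy, precision, and recall from label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        # Precision: of everything predicted positive, how much was right.
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Recall: of everything actually positive, how much was found.
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }


metrics = evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```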


Qualifications & Skills

● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus

● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects

● Proficiency in programming languages commonly used for AI/ML. Preferably Python

● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.

● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.

● Strong understanding of machine learning algorithms, statistics, and data structures

● Experience with data preprocessing, data wrangling, and feature engineering

● Knowledge of deep learning architectures, neural networks, and transfer learning

● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment

● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code

● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions

● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders

Dolat Capital Market Private Ltd.
Bengaluru (Bangalore)
10 - 15 yrs
₹10L - ₹15L / yr
C++
Python
Artificial Intelligence (AI)
Machine Learning (ML)

Title: Quantitative Developer

Location: Mumbai

Candidates with a Master's degree are preferred.


Who We Are

At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.

We operate a robust, ultra-low-latency infrastructure built for market-making and active trading across Equities, Futures, and Options, with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world-class capabilities.


What You Will Do


This role offers a unique opportunity to work across both quantitative development and high-frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and directly participate in live trading execution and strategy enhancement.

1. Quantitative Strategy & Trading Execution

  • Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
  • Trade across options, equities, and futures, using proprietary HFT platforms
  • Monitor and manage PnL performance, targeting Sharpe ratios of 6+
  • Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
  • Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
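For reference, the Sharpe ratio targeted above is, in its simplest annualized form, mean excess return over return volatility scaled by the square root of 252 trading days; a sketch with made-up daily returns:

```python
import math
import statistics


def annualized_sharpe(daily_returns: list, risk_free_daily: float = 0.0) -> float:
    """Annualized Sharpe ratio: sqrt(252) * mean(excess) / stdev(excess)."""
    excess = [r - risk_free_daily for r in daily_returns]
    return math.sqrt(252) * statistics.fmean(excess) / statistics.pstdev(excess)


returns = [0.002, -0.001, 0.003, 0.001, 0.0015]
sharpe = annualized_sharpe(returns)
```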


2. Trading Systems Development

  • Build and enhance low-latency, high-throughput trading systems
  • Develop tools to simulate trading strategies and access historical market data
  • Design performance-optimized data structures and algorithms for fast execution
  • Implement real-time risk management and performance tracking systems


3. Algorithmic and Quantitative Analysis

  • Collaborate with researchers and traders to integrate strategies into live environments
  • Use statistical methods and data-driven analysis to validate and refine models
  • Work with large-scale HFT tick data using Python / C++


4. AI/ML Integration

  • Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
  • Analyze large datasets to detect patterns and alpha signals


5. System & Network Optimization

  • Optimize distributed and concurrent systems for high-transaction throughput
  • Enhance platform performance through network and systems programming
  • Utilize deep knowledge of TCP/UDP and network protocols


6. Collaboration & Mentorship

  • Collaborate cross-functionally with traders, engineers, and data scientists
  • Represent Dolat in campus recruitment and industry events as a technical mentor


What We Are Looking For:

  • Strong foundation in data structures, algorithms, and object-oriented programming (C++).
  • Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
  • Hands-on experience in systems programming within a Linux environment.
  • Proficient hands-on programming in Python/C++.
  • Familiarity with distributed computing and high-concurrency systems.
  • Knowledge of network programming, including TCP/UDP protocols.
  • Strong analytical and problem-solving skills.

A passion for technology-driven solutions in the financial markets.

Texila American University
Posted by S LakshmiMarudhu
Coimbatore
6 - 9 yrs
₹12L - ₹18L / yr
Python
Amazon Web Services (AWS)
GPU
Deployment management
Machine Learning (ML)

Job Title: Deployment Lead (Python, Linux, AWS) 

Location: Coimbatore



Overview 

We are seeking an experienced Deployment Lead to oversee the end-to-end deployment lifecycle of our applications and services. The ideal candidate will have deep expertise in Python, strong Linux administration skills, and hands-on experience with AWS cloud infrastructure. You will work closely with engineering, DevOps, QA, and product teams to ensure reliable, repeatable, and scalable deployments across multiple environments. 


 

Key Responsibilities 

  • Lead and manage deployment activities for all application releases across development, staging, and production environments. 
  • Develop and maintain deployment automation, scripts, and tools using Python and shell scripting. 
  • Own and optimize CI/CD pipelines (e.g., GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline). 
  • Oversee Linux server administration, including configuration, troubleshooting, performance optimization, and security hardening. 
  • Design, implement, and maintain AWS infrastructure (EC2, S3, Lambda, IAM, RDS, ECS/EKS, CloudFormation/Terraform). 
  • Ensure robust monitoring, logging, and alerting using tools such as CloudWatch, Grafana, Prometheus, or ELK. 
  • Collaborate with developers to improve code readiness for deployment and production reliability. 
  • Manage environment configurations and ensure consistency and version control across environments. 
  • Lead incident response during production issues; conduct root-cause analysis and implement long-term fixes. 
  • Establish and enforce best practices for deployment, configuration management, and operational excellence. 
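As one small illustration of the deployment automation described above: a post-deploy health gate that polls a service until it reports healthy or a retry budget is exhausted. The probe is injected so the pattern stays testable; all names are illustrative:

```python
import time


def wait_until_healthy(probe, retries: int = 5, delay: float = 0.0) -> bool:
    """Poll `probe()` until it returns True or the retry budget runs out."""
    for _attempt in range(retries):
        if probe():
            return True
        time.sleep(delay)  # back-off between checks; 0 here to keep the sketch fast
    return False


# Simulate a service that becomes healthy on the third check.
checks = iter([False, False, True])
ok = wait_until_healthy(lambda: next(checks))
```

In practice the probe would hit a real health endpoint (for example via an HTTP GET against the service's `/health` route), and failure would trigger an automated rollback.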


 

Required Skills & Qualifications 

  • 5+ years of experience in deployment engineering, DevOps, or site reliability engineering roles. 
  • Strong proficiency in Python for automation and tooling. 
  • Advanced experience with Linux systems administration (Ubuntu, CentOS, Amazon Linux). 
  • Hands-on work with AWS cloud services and infrastructure-as-code (CloudFormation or Terraform). 
  • Experience with containerization technologies such as Docker and orchestration platforms like ECS, EKS, or Kubernetes. 
  • Strong understanding of CI/CD tools and automated deployment strategies. 
  • Familiarity with networking concepts: DNS, load balancers, VPCs, firewalls, VPN, and routing. 
  • Expertise with monitoring, alerting, and logging solutions. 
  • Strong problem-solving and analytical skills; able to lead troubleshooting efforts. 
  • Excellent communication and leadership abilities. 


Read more
Advertising Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹22L - ₹25L / yr
skill iconAndroid Development
skill iconAndroid Testing
skill iconKotlin
Debugging
Integration
+31 more

Job Details

Job Title: Android Developer

Industry: IT- Services

Function - Information technology (IT)

Experience Required: 5-8 years

Employment Type: Full Time

Job Location: Delhi

CTC Range: Best in Industry

 

Criteria:

· Strong technical background in Android application development and Kotlin

· Looking for candidates with 5+ years of experience.

· Candidates must be from Delhi NCR only.

· All Academic backgrounds acceptable (except BCA).

· Immediate Joiners Preferred

· Candidate must have some experience working with IoT devices.

· Candidate should have experience working with Camera model X.

· Candidate's Academic scores must be 70% or above.

· Fluent communication will be an added advantage.

 

Job Description 

About the Role:

The Senior Android Team Lead will be responsible for testing, QC, and debugging support across the various Android and Java applications and servers developed or procured by the company. The role includes debugging integration issues, handling on-field deployment challenges, and suggesting improvements or structured solutions. The candidate will also be responsible for scaling the architecture. You will work closely with other team members, including Web Developers, Software Developers, Application Engineers, and Product Managers, to test and deploy existing products, and will act as a Team Lead coordinating and organizing team efforts toward the successful completion or demo of applications. This includes implementing projects from conception to deployment.

 

Responsibilities:

● Working with the Android SDK, Java, Kotlin, NDK

● Handling different Android versions and screen sizes

● Applying Android UI design principles, patterns, and best practices

 

Requirements:

● Strong technical background in Android application development and Kotlin

● Solid programming skills

● Detail-oriented with strong attention to specifics

● Excellent written and verbal communication skills

● Strong analytical and quick problem-solving ability

● Ability to quickly document requirements from open discussions

● Fast typing skills for documentation and communication

● Familiarity with JIRA, EPICs, Excel, Google Sheets, and Agile methodologies

● Team player with leadership qualities

● Decision-making ability and team management skills

● Interest in working in a startup environment with cutting-edge products

● Experience with design and architecture patterns

● Understanding of testing processes, debugging, code versioning, and repositories

● UI/UX experience

● Strong knowledge of Java & Kotlin

● Software development experience with strong coding skills

● Experience building services for data delivery to mobile clients

● Experience with relational and non-relational databases

● Knowledge of REST and JSON data handling

● Experience with libraries like Retrofit, RxJava, Dagger 2, Lottie

● Server integration (REST endpoints)

● Experience with AWS stack and Linux

● Apps shipped and available on Google Play

● Backend API development

● Familiarity with Android Studio, Eclipse IDE

● Good knowledge of mobile hardware, software, and operating systems

● Willingness to work in a fast-paced startup environment

● Strong oral communication and presentation skills

● Team-oriented, with a positive approach to technology and engineering

● Result-oriented with a focus on efficiency and timeliness

● Strong self-awareness and ability to work under deadlines

● Proficiency in Microsoft Project, PowerPoint, Excel, Word

● Willingness to mentor and manage team members

● Willing to travel 5–10% of the time for demos, training, and collaboration

 

Preferred Background:

● Understanding of Artificial Intelligence and Machine Learning

● B.S. / M.S. in Computer Science, Electrical, or Electronics Engineering

● 5+ years’ experience with Android, Java Server, JSP

● Experience with Virtual Reality and Augmented Reality

● Familiarity with Test-Driven Development

● Background in CS or ECE

● Python experience is a big plus

● iOS development knowledge (not mandatory)

● Strong foundation in data structures and algorithms

 

 

Read more
Hyderabad
4 - 8 yrs
₹20L - ₹30L / yr
Generative AI
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
+8 more

We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.


Key Responsibilities:

• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.

• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.

• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).

• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.

• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.

• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.

• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.

• Optimize models for performance, scalability, and reliability.

• Maintain documentation and promote knowledge sharing within the team.
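As a rough illustration of the retrieval step in the RAG pipelines described above: production systems would store model-generated embeddings in a vector database (FAISS, Pinecone, ChromaDB), but the core ranking idea is similarity search over vectors. The toy documents and hand-written 3-dimensional "embeddings" below are assumptions for illustration only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": (document, embedding) pairs. A real pipeline would use
# FAISS/Pinecone/ChromaDB and embeddings produced by a model.
store = [
    ("Q3 revenue grew 12% year over year", [0.9, 0.1, 0.0]),
    ("The office cafeteria menu changed", [0.0, 0.2, 0.9]),
    ("Churn dropped after the pricing update", [0.7, 0.6, 0.1]),
]

def retrieve(query_vec, k=2):
    """Return the top-k documents most similar to the query embedding."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query embedding close to the "revenue/churn" region of the toy space:
context = retrieve([0.8, 0.3, 0.0])
print(context)
```

The retrieved `context` is what a RAG system would then feed into the LLM prompt alongside the user's question.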


Mandatory Requirements:

• 4+ years of relevant experience as an AI/ML Engineer.

• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.

• Experience implementing RAG pipelines and prompt engineering techniques.

• Strong programming skills in Python.

• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).

• Experience with vector databases (FAISS, Pinecone, ChromaDB).

• Strong understanding of SQL and database systems.

• Experience integrating AI solutions into BI tools (Power BI, Tableau).

• Strong analytical, problem-solving, and communication skills.

Good to Have:

• Experience with cloud platforms (AWS, Azure, GCP).

• Experience with Docker or Kubernetes.

• Exposure to NLP, computer vision, or deep learning use cases.

• Experience in MLOps and CI/CD pipelines.

Read more
Deqode

at Deqode

1 recruiter
Shubham Das
Posted by Shubham Das
Remote only
5 - 8 yrs
₹20L - ₹24L / yr
skill iconMachine Learning (ML)
Windows Azure
Microsoft Visual Studio
  1. Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
  2. Ability and experience to register and deploy ML/AI/GenAI models via Azure ML Studio.
  3. Working knowledge of deploying models in AKS clusters.
  4. Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
  5. Excellent Python skills – environment setup and dependency management, coding as per best practices, and knowledge of automatic code review tools like linting and Black.
  6. Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
  7. Experience in orchestrating machine learning pipelines using MLOps best practices.
  8. Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
  9. Experience in model monitoring (drift detection and performance monitoring).
  10. Fundamentals of data engineering.
  11. Docker-based deployment is good to have.
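One hedged sketch of the drift detection mentioned in point 9: the Population Stability Index (PSI) is a common score for comparing a model's training-time distribution with live traffic. The binning scheme, sample values, and the 0.25 "significant drift" rule of thumb below are illustrative assumptions; a production setup would typically rely on Azure ML's data-drift monitors or a monitoring library rather than hand-rolled code.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index over equal-width bins (illustrative only)."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def frac(values, i):
        # Fraction of values in bin i; the last bin includes the upper edge.
        in_bin = sum(1 for v in values
                     if lo + i * width <= v < lo + (i + 1) * width
                     or (i == bins - 1 and v == hi))
        return max(in_bin / len(values), 1e-4)  # avoid log(0)

    return sum((frac(actual, i) - frac(expected, i)) *
               math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

train_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
live_scores = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]  # shifted distribution
print(round(psi(train_scores, live_scores), 3))
```

A monitoring pipeline would compute this score on a schedule and trigger retraining (or an alert) when it crosses the chosen threshold.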


Read more
This is for one of our Reputed Entertainment organisation


Agency job
EcoSpace, Bellandur, Bangalore
3 - 8 yrs
₹15L - ₹30L / yr
Generative AI
skill iconMachine Learning (ML)
skill iconPython
Large Language Models (LLM)
Natural Language Processing (NLP)
+1 more

Key Responsibilities

· Advanced ML & Deep Learning: Design, develop, and deploy end-to-end Machine Learning models for Content Recommendation Engines, Churn Prediction, and Customer Lifetime Value (CLV).

· Generative AI Implementation: Prototype and integrate GenAI solutions (using LLMs like Gemini/GPT) for automated Metadata Tagging, Script Summarization, or AI-driven Chatbots for viewer engagement.

· Video Processing Pipelines: Develop and maintain high-scale video processing pipelines using Python, OpenCV, and FFmpeg to automate scene detection, ad-break identification, and visual feature extraction for content enrichment.

· Cloud Orchestration: Utilize GCP (Vertex AI, BigQuery, Dataflow) to build scalable data pipelines and manage the full ML lifecycle (MLOps).

· Business Intelligence & Storytelling: Create high-impact, automated dashboards to track KPIs for data-driven decision making.

· Cross-functional Collaboration: Work closely with Product, Design, Engineering, Content, and Marketing teams to translate "viewership data" into "strategic growth."
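The scene-detection work mentioned above typically rests on frame differencing. A real pipeline would decode frames with FFmpeg/OpenCV; the tiny hard-coded grayscale "frames" and threshold below are assumptions chosen purely to keep the sketch self-contained.

```python
# Sketch of the frame-differencing idea behind scene/ad-break detection.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two same-shape grayscale frames."""
    flat = [abs(a - b) for ra, rb in zip(frame_a, frame_b) for a, b in zip(ra, rb)]
    return sum(flat) / len(flat)

def scene_cuts(frames, threshold=60):
    """Return indices where consecutive frames differ enough to call a cut."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

dark = [[10, 12], [11, 10]]
dark2 = [[12, 11], [10, 13]]
bright = [[200, 210], [205, 198]]

print(scene_cuts([dark, dark2, bright]))
```

Production detectors add refinements (histogram comparison, adaptive thresholds, audio cues), but the core signal is this per-frame difference.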

Preferred Qualifications

· Experience in Media/OTT: Prior experience working with large scale data from broadcast channels, videos, streaming platforms or digital ad-tech.

· Education: Master’s/Bachelor’s degree in a quantitative field (Computer Science, Statistics, Mathematics, or Data Science).

· Product Mindset: Ability to not just build a model, but to understand the business implications of the solution.

· Communication: Exceptional ability to explain "Neural Network outputs" to a "Creative Content Producer" in simple terms.

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
3 recruiters
Vijay Muthu
Posted by Vijay Muthu
Noida
2 - 5 yrs
₹8L - ₹10L / yr
Prompt engineering
Integration
Debugging
Documentation
Stakeholder management
+9 more

About MyOperator

MyOperator is a Business AI Operator and category leader that unifies WhatsApp, Calls, and AI-powered chatbots & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance, all from a single no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement.


Role Summary

We’re hiring a Front Deployed Engineer (FDE)—a customer-facing, field-deployed engineer who owns the end-to-end delivery of AI bots/agents.

This role is “frontline”: you’ll work directly with customers (often onsite), translate business reality into bot workflows, do prompt engineering + knowledge grounding, ship deployments, and iterate until it works reliably in production.

Think: solutions engineer + implementation engineer + prompt engineer, with a strong bias for execution.


Responsibilities


Requirement Discovery & Stakeholder Interaction

  • Join customer calls alongside Sales and Revenue teams.
  • Ask targeted questions to understand business objectives, user journeys, automation expectations, and edge cases.
  • Identify data sources (CRM, APIs, Excel, SharePoint, etc.) required for the solution.
  • Act as the AI subject-matter expert during client discussions.

Use Case & Solution Documentation

  • Convert discussions into clear, structured use case documents, including:
    • Problem statement & goals.
    • Current vs. proposed conversational flows.
    • Chatbot conversation logic, integrations, and dependencies.
    • Assumptions, limitations, and success criteria.

Customer Delivery Ownership

Own deployment of AI bots for customer use-cases (lead qualification, support, booking, etc.). Run workshops to capture processes, FAQs, edge cases, and success metrics. Drive the go-live process: requirements through monitoring and improvement.


Prompt Engineering & Conversation Design

Craft prompts, tool instructions, guardrails, fallbacks, and escalation policies for stable behavior. Build structured conversational flows: intents, entities, routing, handoff, and compliant responses. Create reusable prompt patterns and "prompt packs."


Testing, Debugging & Iteration

Analyze logs to find failure modes (misclassification, hallucination, poor handling). Create test sets ("golden conversations"), run regressions, and measure improvements. Coordinate with Product/Engineering for platform needs.
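The "golden conversations" idea above can be sketched as a tiny regression harness. The bot here is a keyword-routing stub standing in for the real system, and the utterances and intent names are invented for illustration; in practice you would call the deployed bot/LLM endpoint and compare its routed intent (or key response fields) against the golden set.

```python
def stub_bot(utterance: str) -> str:
    """Toy intent router standing in for the real bot."""
    text = utterance.lower()
    if "book" in text or "appointment" in text:
        return "booking"
    if "refund" in text or "charged" in text:
        return "billing_escalation"
    return "fallback"

# Golden set: (utterance, expected intent) pairs captured from real transcripts.
golden_set = [
    ("I want to book a demo for Tuesday", "booking"),
    ("I was charged twice, I need a refund", "billing_escalation"),
    ("asdf qwerty", "fallback"),
]

def run_regression(bot, cases):
    """Replay every golden case and collect mismatches."""
    failures = [(u, want, bot(u)) for u, want in cases if bot(u) != want]
    passed = len(cases) - len(failures)
    return passed, failures

passed, failures = run_regression(stub_bot, golden_set)
print(f"{passed}/{len(golden_set)} golden conversations passed")
```

Running this after every prompt or workflow change gives a quick signal that a fix for one failure mode hasn't regressed another.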


Integrations & Technical Coordination

Integrate bots with APIs/webhooks (CRM, ticketing, internal tools) to complete workflows. Troubleshoot production issues and coordinate fixes/root-cause analysis.


What Success Looks Like

  • Customer bots go live quickly and show high containment + high task completion with low escalation.
  • You can diagnose failures from transcripts/logs and fix them with prompt/workflow/knowledge changes.
  • Customers trust you as the “AI delivery owner”—clear communication, realistic timelines, crisp execution.

Requirements (Must Have)

  • 2–5 years in customer-facing delivery roles: implementation, solutions engineering, customer success engineering, or similar.
  • Hands-on comfort with LLMs and prompt engineering (structured outputs, guardrails, tool use, iteration).
  • Strong communication: workshops, requirement capture, crisp documentation, stakeholder management.
  • Technical fluency: APIs/webhooks concepts, JSON, debugging logs, basic integration troubleshooting.
  • Willingness to be front deployed (customer calls/visits as needed).

Good to Have (Nice to Have)

  • Experience with chatbots/voicebots, IVR, WhatsApp automation, or conversational AI platforms, with at least a couple of delivered projects. 
  • Understanding of metrics like containment, resolution rate, response latency, CSAT drivers.
  • Prior SaaS onboarding/delivery experience in mid-market or enterprises.

Working Style & Traits We Value

  • High agency: you don’t wait for perfect specs—you create clarity and ship.
  • Customer empathy + engineering discipline.
  • Strong bias for iteration: deploy → learn → improve.
  • Calm under ambiguity (real customer environments are chaotic by default).


Read more
Proximity Works

at Proximity Works

1 video
5 recruiters
Eman Khan
Posted by Eman Khan
Remote only
4 - 10 yrs
₹30L - ₹55L / yr
skill iconMachine Learning (ML)
MLOps
skill iconDeep Learning
Large Language Models (LLM)
Generative AI

About the Role

We're looking for a hands-on ML Engineer who combines strong machine learning fundamentals with the backend infrastructure instincts needed to take models from research into reliable, scalable production systems. This role sits at the intersection of cutting-edge generative video AI and the MLOps discipline required to run it at scale.

You'll work closely with backend, platform, and content teams to deliver high-performance ML components under strict quality, latency, and throughput requirements.


Key Responsibilities

  • Train, fine-tune, and evaluate generative video and multimodal models (image-to-video, text-to-video, lip-sync, character consistency)
  • Build and manage end-to-end ML pipelines: data ingestion, preprocessing, training, evaluation, and versioning
  • Own model deployment and serving infrastructure — containerization, GPU-optimized inference, model registries, and rollout strategies
  • Implement MLOps best practices: experiment tracking, model monitoring, drift detection, A/B testing, and observability
  • Design and maintain scalable inference systems optimized for low latency, high throughput, and cost-efficient GPU utilization
  • Develop caching and batching strategies to meet SLA targets in production video generation workflows
  • Collaborate with backend engineering teams on integrating ML services into distributed systems
  • Contribute to long-term roadmap: foundational model training strategies, LoRA fine-tuning pipelines, and multi-character generalization


Requirements

Required Qualifications

  • 4-10 years of experience in Machine Learning / Applied ML Engineering
  • Strong fundamentals in deep learning, Transformers, and generative model architectures
  • Hands-on experience with model training at scale — including fine-tuning large models (LoRA, full fine-tune) on custom datasets
  • Solid MLOps experience: experiment tracking (MLflow, W&B), CI/CD for ML, model versioning, and serving frameworks (Triton, TorchServe, vLLM, or equivalent)
  • Strong Python skills and fluency with PyTorch and the modern ML stack
  • Experience deploying and operating ML systems in distributed cloud environments (GCP, AWS, or Azure) — GPU provisioning, autoscaling, and cost management
  • Comfort working on ambiguous, high-impact problems with cross-functional teams


Preferred Qualifications

  • Experience with video generation, diffusion models, or multimodal architectures (DiT, U-Net, audio-video joint models)
  • Familiarity with LoRA/IC-LoRA fine-tuning workflows for character or identity consistency
  • Experience in media, OTT, sports, or large-scale content platforms
  • Knowledge of inference optimization techniques: quantization (FP8/INT8), batching, async orchestration, and GPU memory management
  • Exposure to TTS/voice cloning systems or audio-video synchronization pipelines


Benefits

What you get

  • Best in class salary: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
  • Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.


About us

Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.


Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Read more
PGAGI
Javeriya Shaik
Posted by Javeriya Shaik
Remote only
0.6 - 1 yrs
₹8000 - ₹15000 / mo
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Generative AI
Data Structures
Large Language Models (LLM)


We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.


We are looking for a passionate AI/ML Intern with hands-on exposure to Large Language Models (LLMs), fine-tuning techniques like LoRA, and strong fundamentals in Data Structures & Algorithms (DSA). This role is ideal for someone eager to work on real-world AI applications, experiment with open-source models, and contribute to production-ready AI systems.


Duration: 6 months

Perks:

- Hands-on experience with real AI projects.

- Mentoring from industry experts.

- A collaborative, innovative, and flexible work environment.

After completion of the internship period, there is a chance of a full-time opportunity as an AI/ML Engineer (6-8 LPA).

Compensation:

- Stipend: Base is INR 8,000/- and can increase up to INR 20,000/- depending on performance metrics.


Key Responsibilities

  • Work on Large Language Models (LLMs) for real-world AI applications.
  • Implement and experiment with LoRA (Low-Rank Adaptation) and other parameter-efficient fine-tuning techniques.
  • Perform model fine-tuning, evaluation, and optimization.
  • Engage in prompt engineering to improve model outputs and performance.
  • Develop backend services using Python for AI-powered applications.
  • Utilize GitHub for version control, including managing branches, pull requests, and code reviews.
  • Work with AI platforms such as Hugging Face and OpenAI to deploy and test models.
  • Collaborate with the team to build scalable and efficient AI solutions.
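For interns new to LoRA: the reason it is "parameter-efficient" is that instead of updating a full d x k weight matrix, it trains two low-rank factors B (d x r) and A (r x k) and applies the update as W + BA. A back-of-envelope sketch, with dimensions that are illustrative assumptions rather than the specs of any particular model:

```python
# Why LoRA is cheap: count trainable parameters for a full fine-tune vs. LoRA.

def full_finetune_params(d: int, k: int) -> int:
    """Parameters updated when fine-tuning the whole d x k matrix."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Parameters in the low-rank factors: B has d*r entries, A has r*k."""
    return r * (d + k)

d = k = 4096  # roughly one attention projection in a ~7B-class model (assumed)
r = 8         # a typical LoRA rank

full = full_finetune_params(d, k)
lora = lora_params(d, k, r)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x fewer")
```

The same arithmetic is why LoRA adapters are small enough to train on a single GPU and swap in and out per task.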

Must-Have Skills

  • Strong proficiency in Python.
  • Hands-on experience with LLMs (open-source or API-based).
  • Practical knowledge of LoRA or other parameter-efficient fine-tuning techniques.
  • Solid understanding of Data Structures & Algorithms (DSA).
  • Experience with GitHub and version control workflows.
  • Familiarity with Hugging Face Transformers and/or OpenAI APIs.
  • Basic understanding of Deep Learning and NLP concepts.
Read more
Pentabay Softwares

at Pentabay Softwares

1 recruiter
Sandhiya M
Posted by Sandhiya M
Chennai
0 - 3 yrs
₹2.5L - ₹7L / yr
Robotics
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
IBM Rational Robot
Root cause analysis

🤖 Robotics Engineer


Company: Pentabay Softwares

Location: Anna Salai (Mount Road), Chennai

Employment Type: Full-Time


🔹 Job Summary


Pentabay Softwares is seeking a highly skilled and innovative Robotics Engineer to design, develop, test, and implement robotic systems and automation solutions. The ideal candidate will have strong technical expertise in robotics programming, control systems, and hardware integration, along with a passion for building intelligent and efficient systems.


🔹 Key Responsibilities


Design, develop, and test robotic systems and automation solutions


Develop and implement control algorithms and motion planning systems


Integrate sensors, actuators, and embedded systems


Program robots using languages such as Python, C++, or ROS


Troubleshoot, debug, and optimize robotic applications


Collaborate with cross-functional teams including software, hardware, and AI engineers


Ensure compliance with safety and quality standards


Document system architecture, processes, and technical specifications


🔹 Required Qualifications


Bachelor’s or Master’s degree in Robotics, Mechatronics, Mechanical, Electronics, or related field


2+ years of experience in robotics development (preferred)


Strong knowledge of robotics frameworks (e.g., ROS)


Experience with microcontrollers, embedded systems, and sensor integration


Familiarity with AI/ML concepts is a plus


Strong analytical and problem-solving skills


🔹 Preferred Skills


Experience with computer vision systems


Knowledge of SLAM, kinematics, and motion planning


Experience with industrial automation or autonomous systems


Strong teamwork and communication skills


🌟 Why Join Pentabay Softwares?


Work on innovative and future-focused technologies


Collaborative and growth-oriented work culture


Opportunities for skill development and career advancement


Exposure to cutting-edge automation and AI-driven projects

Read more
Remote, Hyderabad
3 - 5 yrs
₹20L - ₹35L / yr
Natural Language Processing (NLP)
Large Language Models (LLM) tuning
Data Structures
Algorithms
skill iconPython
+9 more

In this role, you'll be responsible for building machine learning based systems and conducting data analysis to improve the quality of our large geospatial dataset. You'll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development.


Responsibilities


  • Development of machine learning models
  • Building and maintaining software development solutions
  • Provide insights by applying data science methods
  • Take ownership of delivering features and improvements on time
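As a hedged sketch of the outlier-detection work mentioned above: a robust z-score based on the median absolute deviation (MAD) is a standard way to flag anomalous records without being skewed by the anomalies themselves. The field (road-segment lengths) and threshold below are assumptions for illustration, not details of the actual pipeline.

```python
def mad_outliers(values, threshold=3.5):
    """Flag values whose robust z-score (0.6745 * |v - median| / MAD) exceeds threshold."""
    n = len(values)
    s = sorted(values)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    abs_dev = sorted(abs(v - median) for v in values)
    mad = abs_dev[n // 2] if n % 2 else (abs_dev[n // 2 - 1] + abs_dev[n // 2]) / 2
    if mad == 0:
        return []  # degenerate case: no spread to measure against
    return [v for v in values if 0.6745 * abs(v - median) / mad > threshold]

# Hypothetical road-segment lengths (km) with one implausible entry:
lengths = [1.2, 0.9, 1.1, 1.3, 1.0, 48.0, 1.15]
print(mad_outliers(lengths))
```

MAD-based scores are preferred over plain z-scores here because a single extreme value inflates the mean and standard deviation, masking the very anomaly you want to catch.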


Must-have Qualifications


  • 4 years' experience
  • Senior data scientist, preferably with knowledge of NLP
  • Strong programming skills and extensive experience with Python
  • Professional experience working with LLMs, transformers and open-source models from HuggingFace
  • Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
  • Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
  • Experience using deep learning libraries and platforms, such as PyTorch
  • Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
  • Excellent analytical and problem-solving skills
  • Excellent oral and written communication skills


Extra Merit Qualifications


  • Knowledge in at least one of the following: NLP, information retrieval, data mining
  • Ability to do statistical modeling and building predictive models
  • Programming skills and experience with Scala and/or Java
Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Remote only
4 - 8 yrs
₹4L - ₹12L / yr
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Retrieval Augmented Generation (RAG)
Generative AI

Job Description -

Profile: AI/ML

Experience: 4-8 Years

Mode: Remote

Mandatory Skills - AI/ML, LLM, RAG, Agentic AI, Traditional ML,  GCP



Must-Have:


● Proven experience as an AI/ML engineer, specifically with a focus on Generative AI and Large Language Models (LLMs) in production.

● Deep expertise in building Agentic Workflows using frameworks like LangChain, LangGraph, or AutoGen.

● Strong proficiency in designing RAG (Retrieval-Augmented Generation) pipelines.

● Experience with Function Calling/Tool Use in LLMs to connect AI models with external APIs (REST/gRPC) for transactional tasks.

● Hands-on experience with Google Cloud Platform (GCP), specifically Vertex AI, Model Garden, and deploying models on GPUs.

● Proficiency in Python and deep learning frameworks (PyTorch or TensorFlow).
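The function-calling requirement above boils down to a dispatch pattern: the LLM emits structured JSON naming a tool and its arguments, and application code executes the call. In this minimal sketch the model reply is hard-coded and the tool is a stub; in production the reply would come from an LLM API (e.g. Vertex AI or OpenAI function calling) and the tool would hit a real REST endpoint.

```python
import json

def get_order_status(order_id: str) -> str:
    """Stand-in for a real REST call to an order service (hypothetical)."""
    return f"Order {order_id} is out for delivery"

# Registry mapping tool names the model may emit to callable implementations.
TOOLS = {"get_order_status": get_order_status}

# Hard-coded stand-in for the model's structured tool-call output:
model_reply = '{"tool": "get_order_status", "arguments": {"order_id": "A-1032"}}'

call = json.loads(model_reply)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)
```

A production version would also validate the model's JSON against a schema and handle unknown tool names before dispatching.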

Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Remote only
9 - 15 yrs
₹9L - ₹16L / yr
skill iconMachine Learning (ML)
MLOps
CI/CD
skill iconPython
Generative AI
+1 more

Job Description -

Profile: Senior ML Lead

Experience Required: 10+ Years

Work Mode: Remote

Key Responsibilities:

  • Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
  • Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
  • Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
  • Ensure AI/ML solutions align with business goals, performance, and compliance requirements
  • Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap

Required Skills:

  • Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
  • Proficiency in Python with ML libraries and frameworks
  • MLOps: CI/CD/CT pipelines for ML deployment with Azure
  • Experience with OpenAI/Generative AI solutions
  • Cloud-native services: Azure ML, Snowflake
  • 8+ years in data science with at least 2 years in solution architecture role
  • Experience with large-scale model deployment and performance tuning

Good-to-Have:

  • Strong background in Computer Science or Data Science
  • Azure certifications
  • Experience in data governance and compliance


Read more
Healthcare Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹25L - ₹30L / yr
MLOps
Generative AI
skill iconPython
Natural Language Processing (NLP)
skill iconMachine Learning (ML)
+22 more

JOB DETAILS:

* Job Title: Principal Data Scientist

* Industry: Healthcare

* Salary: Best in Industry

* Experience: 6-10 years

* Location: Bengaluru

 

Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps

 

Criteria:

  1. Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
  2. Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
  3. Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
  4. Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
  5. Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.

 

Job Description

Principal Data Scientist

(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)

 

Job Details

  • Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
  • Location: Hebbal Ring Road, Bengaluru
  • Work Mode: Work from Office
  • Shift: Day Shift
  • Reporting To: SVP
  • Compensation: Best in the industry (for suitable candidates)

 

Educational Qualifications

  • Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
  • Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage

 

Experience Required

  • 7+ years of experience solving real-world problems using:
    • Natural Language Processing (NLP)
    • Automatic Speech Recognition (ASR)
    • Large Language Models (LLMs)
    • Machine Learning (ML)
    • Preferably within the healthcare domain
  • Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable

Role Overview

This position is part of company, a healthcare division of Focus Group specializing in medical coding and scribing.

We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:

  • Reduce administrative burden in EMR data entry
  • Improve provider satisfaction and productivity
  • Enhance quality of care and patient outcomes

Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.

The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.

 

Key Responsibilities

AI Strategy & Solution Development

  • Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
  • Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
  • Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
  • Design scalable, reusable, and production-ready AI frameworks for speech and text analytics

Model Development & Optimization

  • Fine-tune, train, and optimize large-scale NLP and ASR models
  • Develop and optimize ML algorithms for speech, text, and structured healthcare data
  • Conduct rigorous testing and validation to ensure high clinical accuracy and performance
  • Continuously evaluate and enhance model efficiency and reliability

Cloud & MLOps Implementation

  • Architect and deploy AI models on AWS, Azure, or GCP
  • Deploy and manage models using containerization, Kubernetes, and serverless architectures
  • Design and implement robust MLOps strategies for lifecycle management

Integration & Compliance

  • Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
  • Integrate AI systems with EHR/EMR platforms
  • Implement ethical AI practices, regulatory compliance, and bias mitigation techniques

Collaboration & Leadership

  • Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
  • Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
  • Mentor and lead junior data scientists and engineers
  • Contribute to AI research, publications, patents, and long-term AI strategy
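The vector-database and RAG items above reduce, at their core, to embedding similarity search: embed the query, rank stored document embeddings by similarity, and pass the top hits to the LLM. A minimal dependency-free sketch of that retrieval step (the document names and 3-d embeddings are invented toy values, not a real clinical embedding model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    """Return the top_k documents ranked by similarity to the query.

    `index` is a list of (doc_id, embedding) pairs, standing in for what
    a vector database (FAISS, Pinecone, Weaviate) does at scale.
    """
    scored = [(doc_id, cosine_similarity(query_vec, emb)) for doc_id, emb in index]
    scored.sort(key=lambda p: p[1], reverse=True)
    return scored[:top_k]

# Toy 3-d embeddings for three hypothetical documents.
index = [
    ("discharge_note", [0.9, 0.1, 0.0]),
    ("lab_report",     [0.1, 0.9, 0.2]),
    ("billing_memo",   [0.0, 0.2, 0.9]),
]
top = retrieve([0.8, 0.2, 0.1], index, top_k=1)
print(top[0][0])  # the closest document id
```

In a production RAG pipeline this ranking is delegated to an approximate nearest-neighbor index, but the relevance contract is the same.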

 

Required Skills & Competencies

  • Expertise in Machine Learning, Deep Learning, and Generative AI
  • Strong Python programming skills
  • Hands-on experience with PyTorch and TensorFlow
  • Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
  • Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
  • Experience with text embeddings and vector databases
  • Proficiency in cloud platforms (AWS, Azure, GCP)
  • Experience with LangChain, OpenAI APIs, and RAG architectures
  • Knowledge of agentic AI frameworks and reinforcement learning
  • Familiarity with Docker, Kubernetes, and MLOps best practices
  • Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
  • Strong communication, collaboration, and mentoring skills

 

 

Read more
A robotics company working on Industrial Robotics

A robotics company working on Industrial Robotics

Agency job
via RedString by Kaushik Reddyshetty
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 10 yrs
₹10L - ₹20L / yr
ASNT
ISO 9712
PCN
PAUT
NDT
+10 more

Position: NDT Applications Engineer – PAUT (Corrosion Mapping / Weld Inspection / PWI / Advanced FMC-TFM)

Location: Noida

Job Type: Full-time

Experience Level: Mid-Level / Senior-Level

Industry: Non-Destructive Testing (NDT), PAUT.


We are specifically looking for candidates with hands-on experience as an NDT Senior Engineer; only those with relevant NDT expertise should apply.



Job Summary

We are seeking a highly skilled NDT Engineer with expertise in Ultrasonic Testing (UT) and Phased Array Ultrasonic Testing (PAUT) for robotic integration. This role involves coordinating with software and control system teams to integrate UT & PAUT (Corrosion Mapping / Weld Inspection) into robotic NDT systems, ensuring optimal inspection performance.

The engineer will focus on sensor selection, ultrasonic parameter optimization, calibration, and data interpretation, while the software team handles control algorithms and motion planning. The ideal candidate should have strong experience in NDT automation, probe and frequency selection, phased array data acquisition, and defect characterization.


Key Responsibilities

1. NDT Inspection & Signal Optimization

• Optimize probe selection, wedge design, and beam focusing to achieve high-resolution imaging.

• Define scanning techniques (sectorial, linear, and compound scans) to detect various defect types.

• Analyse UT & PAUT signals, ensuring accurate defect detection, sizing, and characterization.

• Implement Time-of-Flight Diffraction (TOFD) and Full Matrix Capture (FMC) techniques to enhance detection capabilities.

• Address electromagnetic interference (EMI) and signal noise issues affecting robotic UT/PAUT.

• Develop procedures for coupling enhancement, including the use of water column, dry coupling, and adaptive surface-following mechanisms for robotic probes.

• Evaluate attenuation, beam divergence, and wave mode conversion for different material types.

• Work with AI-based defect recognition systems to automate data processing and anomaly detection.

• Test different scanning configurations for challenging surfaces, curved geometries, and weld seams.

• Optimize gain, pulse repetition frequency (PRF), and filtering settings to ensure the highest signal clarity.

• Implement phased array data interpretation techniques to differentiate between false indications and real defects.

• Develop and refine automated thickness gauging algorithms for robotic NDT systems.

• Ensure the compatibility of PAUT imaging with robotic motion constraints to avoid signal distortion.


2. NDT-Integration for Robotics (UT & PAUT)

• Select, integrate, and optimize ultrasonic transducers and phased array probes for robotic inspection systems.

• Define NDT scanning parameters (frequency, angle, probe type, and scanning speed) for robotic UT/PAUT applications.

• Ensure seamless coordination with control system and software teams for planning and automation.

• Work with robotic hardware teams to mount, position, and align UT/PAUT probes accurately.

• Conduct system calibration and validate UT/PAUT performance on robotic platforms.


3. Data Analysis & Reporting

• Interpret PAUT sectorial scans, full matrix capture (FMC), and total focusing method (TFM) data.

• Assist the software team in processing PAUT data for defect characterization and AI-based analysis.

• Validate robotic UT/PAUT inspection results and generate detailed technical reports.

• Ensure compliance with NDT standards (ASME, ISO 9712, ASTM, API 510/570) for ultrasonic inspections.


4. Coordination with Software & Control System Teams

• Work closely with the software team to define scan path strategies and automation logic.

• Collaborate with control engineers to ensure precise probe movement and stability.

• Provide technical input on robotic payload capacity, motion constraints, and scanning efficiency.

• Assist in integration of AI-driven defect recognition for automated data interpretation.


5. Field Deployment & Validation

• Supervise robotic UT/PAUT system trials in real-world inspection environments.

• Ensure compliance with safety regulations and industry best practices.

• Support on-site troubleshooting and optimization of robotic NDT performance.

• Train operators on robot-assisted ultrasonic testing procedures.


Required Qualifications & Skills

1. Educational Background

• Master’s Degree in Metallurgy/NDT/Mechanical.

• ASNT Level II/III, ISO 9712, PCN, AWS CWI, or API 510/570 certifications in UT & PAUT preferred.


2. Technical Skills & Experience

• 3-10 years of experience in Ultrasonic Testing (UT) and Phased Array Ultrasonic Testing (PAUT).

• Strong understanding of probe selection, frequency tuning, and phased array beamforming.

• Experience with NDT software.

• Knowledge of electromagnetic shielding, signal integrity, and noise reduction techniques in ultrasonic systems.

• Ability to collaborate with software and control teams for robotic NDT development.


3. Soft Skills

• Strong problem-solving and analytical abilities.

• Excellent technical communication and coordination skills.

• Ability to work in cross-functional teams with robotics, software, and NDT specialists.

• Willingness to travel for on-site robotic NDT deployments.


Work Conditions

• Lab – Hands-on testing and robotic system deployment.

• Flexible Work Hours – Based on project requirements.


Benefits & Perks

• Competitive salary & performance incentives.

• Exposure to cutting-edge robotic and AI-driven NDT innovations.

• Training & certification support for career growth.

• Opportunities to work on pioneering robotic NDT projects.





Read more
Chennai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
skill iconData Science
skill iconPython
Forecasting
skill iconMachine Learning (ML)

Hi,


Greetings from Ampera!


We are looking for a Data Scientist with strong Python and forecasting experience.


Title: Data Scientist – Python & Forecasting

Experience: 4 to 7 Years

Location: Chennai / Bengaluru

Type of Hire: PWD and Non-PWD

Employment Type: Full Time

Notice Period: Immediate Joiner

Working Hours: 09:00 a.m. to 06:00 p.m.

Workdays: Mon - Fri

 

 

Job Description:

 

We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.


Key Responsibilities

  • Develop and implement forecasting models (time-series and machine learning based).
  • Perform exploratory data analysis (EDA), feature engineering, and model validation.
  • Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
  • Design, train, validate, and optimize machine learning models for real-world business use cases.
  • Apply appropriate ML algorithms based on business problems and data characteristics.
  • Write clean, modular, and production-ready Python code.
  • Work extensively with Python Packages & libraries for data processing and modelling.
  • Collaborate with Data Engineers and stakeholders to deploy models into production.
  • Monitor model performance and improve accuracy through continuous tuning.
  • Document methodologies, assumptions, and results clearly for business teams.

 

Technical Skills Required:

Programming

  • Strong proficiency in Python
  • Experience with Pandas, NumPy, Scikit-learn

Forecasting & Modelling

  • Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
  • Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
  • Understanding of seasonality, trend decomposition, and statistical modeling
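As a reference point for the forecasting skills listed above, the simplest baseline that any ARIMA/SARIMA/Prophet model should beat is the seasonal-naive forecast: repeat the last full season. A dependency-free sketch with invented quarterly demand numbers:

```python
def seasonal_naive_forecast(history, season_length, horizon):
    """Forecast `horizon` steps ahead by repeating the last full season."""
    if len(history) < season_length:
        raise ValueError("need at least one full season of history")
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast-accuracy metric."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual) * 100

# Two years of toy quarterly demand (season_length = 4).
history = [100, 120, 140, 160, 110, 130, 150, 170]
forecast = seasonal_naive_forecast(history, season_length=4, horizon=4)
print(forecast)  # repeats the last observed season
```

Reporting a candidate model's MAPE alongside this baseline is a quick way to show that seasonality and trend are actually being learned rather than memorized.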

Data & Deployment

  • Experience handling structured and large datasets
  • SQL proficiency
  • Exposure to model deployment (API-based deployment preferred)
  • Knowledge of MLOps concepts is an added advantage

Tools (Preferred)

  • TensorFlow / PyTorch (optional)
  • Airflow / MLflow
  • Cloud platforms (AWS / Azure / GCP)


Educational Qualification

  • Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.


Key Competencies

  • Strong analytical and problem-solving skills
  • Ability to communicate insights to technical and non-technical stakeholders
  • Experience working in agile or fast-paced environments


Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Read more
KG Agile
Hiring HR
Posted by Hiring HR
Coimbatore
0 - 15 yrs
₹3.6L - ₹6.8L / yr
skill iconPython
skill iconJava
skill iconC++
Data Structures
skill iconMachine Learning (ML)
+5 more

Position: Assistant Professor


Department: CSE / IT

Experience: 0 – 15 Years

Joining: Immediate / Within 1 Month

Salary: As per norms and experience


🎓 Qualification:


ME / M.Tech in Computer Science Engineering / Information Technology


Ph.D. (Preferred but not mandatory)


First Class in UG & PG as per AICTE norms


🔍 Roles & Responsibilities:


Deliver high-quality lectures for UG / PG programs


Prepare lesson plans, course materials, and academic content


Guide student projects and internships


Participate in curriculum development and academic planning


Conduct internal assessments, evaluations, and result analysis


Mentor students for academic and career growth


Participate in departmental research activities


Publish research papers in reputed journals (Scopus/SCI preferred)


Attend Faculty Development Programs (FDPs), workshops, and conferences


Contribute to NAAC / NBA accreditation processes


Support institutional administrative responsibilities


💡 Required Skills:


Strong subject knowledge in CSE / IT domains


Programming proficiency (Python, Java, C++, Data Structures, AI/ML, Cloud, etc.)


Excellent communication and presentation skills


Research orientation and academic enthusiasm


Team collaboration and mentoring ability

Read more
AdTech Industry

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
7 - 12 yrs
₹40L - ₹80L / yr
skill iconMachine Learning (ML)
Apache Spark
Apache Airflow
skill iconPython
skill iconAmazon Web Services (AWS)
+23 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including in recent roles
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV attachment is mandatory
  • Please provide CTC breakup (Fixed + Variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
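To make the data-drift monitoring responsibility concrete: one widely used statistic is the Population Stability Index (PSI) between a training-time feature distribution and a live window, with values above roughly 0.2 commonly treated as drift worth alerting on. A stdlib-only sketch; the bin proportions and the 0.2 threshold are illustrative conventions, not fixed standards:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are lists of bin proportions that each sum to 1.
    PSI near 0 means no shift; > 0.2 is a common 'investigate drift' threshold.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

train_bins = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
live_same  = [0.24, 0.26, 0.25, 0.25]   # live traffic, essentially unchanged
live_drift = [0.05, 0.10, 0.25, 0.60]   # live traffic, heavily shifted

print(round(psi(train_bins, live_same), 4))
print(round(psi(train_bins, live_drift), 4))  # much larger: would fire an alert
```

In practice a check like this would run as a scheduled Airflow task, with the PSI value exported to CloudWatch or Prometheus for alerting.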

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
      • Compute/Orchestration: EKS, ECS, EC2, Lambda
      • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
      • Workflow: MWAA/Airflow, Step Functions
      • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.


Read more
ProductNova
Vidhya Vijay
Posted by Vidhya Vijay
Bengaluru (Bangalore)
10 - 12 yrs
₹28L - ₹32L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
skill iconPython
skill iconNodeJS (Node.js)
skill icon.NET
+9 more

ROLE - TECH LEAD/ARCHITECT with AI Expertise

 

Experience: 10–15 Years

Location: Bangalore (Onsite)

Company Type: Product-based | AI B2B SaaS

 

About ProductNova

ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.

Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.

 

Product Development

We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.

Our end-to-end product development approach covers the full lifecycle:

1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement

From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.

 

Growth & Scale

For early-stage companies and startups, we act as product partners: shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.

For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.

 

 

 

Role Overview

We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.

 

Key Responsibilities

• Lead the end-to-end architecture and development of AI-driven B2B SaaS products

• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems

• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure

• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms

• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices

• Ensure data privacy, security, compliance, and governance (SOC 2, GDPR, ISO, etc.) across the product

• Take ownership of application security, access controls, and compliance requirements

• Actively contribute hands-on through coding, code reviews, complex feature development, and architectural POCs

• Mentor and guide engineering teams, setting best practices for coding, testing, and system design

• Work closely with Product Management and Leadership to translate business requirements into technical solutions

 

Qualifications:

• 10–15 years of overall experience in software engineering and product development

• Strong experience building B2B SaaS products at scale

• Proven expertise in system architecture, design patterns, and distributed systems

• Hands-on experience with cloud platforms (Azure, AWS/GCP)

• Solid background in backend technologies (Python / .NET / Node.js / Java) and modern frontend frameworks (React, etc.)

• Experience working with AI/ML teams to deploy and tune ML models in production environments

• Strong understanding of data security, privacy, and compliance frameworks

• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures

• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code

• Excellent communication and leadership skills with the ability to work cross-functionally

• Experience in AI-first or data-intensive SaaS platforms

• Exposure to MLOps frameworks and model lifecycle management

• Experience with multi-tenant SaaS security models

• Prior experience in product-based companies or startups

 

Why Join Us

• Build cutting-edge AI-powered B2B SaaS products

• Own architecture and technology decisions end-to-end

• Work with highly skilled ML and Full Stack teams

• Be part of a fast-growing, innovation-driven product organization

 

If you are a results-driven Technical Lead with a passion for building innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.

 

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Pune
3 - 8 yrs
₹12L - ₹25L / yr
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Software Testing (QA)
+9 more

Job Title: QA Lead (AI/ML Products)

Employment Type: Full Time

Experience: 4 to 8 Years

Location: On-site

Mandatory Skills: Strong hands-on experience in testing AI/ML (LLM, RAG) applications with deep expertise in API testing, SQL/NoSQL database validation, and advanced backend functional testing.


Role Overview :

We are looking for an experienced QA Lead who can own end-to-end quality for AI-influenced products and backend-heavy systems. This role requires strong expertise in advanced functional testing, API validation, database verification, and AI model behavior testing in non-deterministic environments.


Key Responsibilities :

  • Define and implement comprehensive test strategies aligned with business and regulatory goals.
  • Validate AI/ML and LLM-driven applications, including RAG pipelines, hallucination checks, prompt injection scenarios, and model response validation.
  • Perform deep API testing using Postman/cURL and validate JSON/XML payloads.
  • Execute complex SQL queries (MySQL/PostgreSQL) and work with MongoDB for backend and data integrity validation.
  • Analyze server logs and transactional flows to debug issues and ensure system reliability.
  • Conduct risk analysis and report key QA metrics such as defect leakage and release readiness.
  • Establish and refine QA processes, templates, standards, and agile testing practices.
  • Identify performance bottlenecks and basic security vulnerabilities (e.g., IDOR, data exposure).
  • Collaborate closely with developers, product managers, and domain experts to translate business requirements into testable scenarios.
  • Own feature quality independently from conception to release.

Required Skills & Experience :

  • 4+ years of hands-on experience in software testing and QA.
  • Strong understanding of testing AI/ML products, LLM validation, and non-deterministic behavior testing.
  • Expertise in API Testing, server log analysis, and backend validation.
  • Proficiency in SQL (MySQL/PostgreSQL) and MongoDB.
  • Deep knowledge of SDLC and Bug Life Cycle.
  • Strong problem-solving ability and structured approach to ambiguous scenarios.
  • Awareness of performance testing and basic security testing practices.
  • Excellent communication skills to articulate defects and QA strategies.

What We’re Looking For :

A proactive QA professional who can go beyond UI testing, understands backend systems deeply, and can confidently test modern AI-driven applications while driving quality standards across the team.

Read more
Remote only
0 - 5 yrs
₹1.5L - ₹3L / yr
AWS SageMaker
skill iconMachine Learning (ML)
skill iconPython
skill iconAmazon Web Services (AWS)

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.


We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.


Responsibilities:

- Build ML models for fraud detection and anomaly detection

- Work with transactional and behavioral data

- Deploy models on AWS (S3, SageMaker, EC2/Lambda)

- Build data pipelines and inference workflows

- Integrate ML models with backend APIs
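As a minimal illustration of the transaction risk-scoring idea (a production system would use trained models such as isolation forests or gradient-boosted trees; the amounts below are invented):

```python
import math

def zscore_flags(amounts, threshold=3.0):
    """Flag transactions whose amount lies more than `threshold` standard
    deviations from the mean — a crude stand-in for a trained risk model."""
    n = len(amounts)
    mean = sum(amounts) / n
    var = sum((x - mean) ** 2 for x in amounts) / n
    std = math.sqrt(var)
    if std == 0:
        return [False] * n  # no variation, nothing to flag
    return [abs(x - mean) / std > threshold for x in amounts]

txns = [120.0, 95.0, 130.0, 110.0, 105.0, 9_800.0]  # one obvious outlier
flags = zscore_flags(txns, threshold=2.0)
print(flags)
```

In the actual platform the scoring step would sit behind an inference endpoint (e.g. SageMaker) and feed its risk scores to the backend APIs mentioned above.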


Requirements:

- Strong Python and Machine Learning experience

- Hands-on AWS experience

- Experience deploying ML models in production

- Ability to work independently in a remote setup


Job Type: Contract / Freelance  

Duration: 3–6 months (extendable)  

Location: Remote (India)


Read more
Bengaluru (Bangalore), Chennai
5 - 15 yrs
Best in industry
skill iconData Science
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Survival analysis

Hi,


PFB the Job Description for Data Science with ML

 


Type of Hire: PWD and Non-PWD

Employment Type: Full Time

Notice Period: Immediate Joiner

Work Days: Mon - Fri

 

 


About Ampera:

Ampera Technologies is a purpose-driven Digital IT Services company with a primary focus on supporting our clients with their Data, AI/ML, Accessibility, and other Digital IT needs. We also ensure that equal opportunities are provided to talent with disabilities. Ampera Technologies has its Global Headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our Tech Delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) hybrid and remote work options, 2) the opportunity to work directly with our Global Enterprise Clients, 3) the opportunity to learn and implement evolving technologies, 4) comprehensive healthcare, and 5) a conducive environment for talent with disabilities, meeting physical and digital accessibility standards.


About the Role

 

We are looking for a skilled Data Scientist with strong Machine Learning experience to design, develop, and deploy data-driven solutions. The role involves working with large datasets, building predictive and ML models, and collaborating with cross-functional teams to translate business problems into analytical solutions.

 

Key Responsibilities

 

  • Analyze large, structured and unstructured datasets to derive actionable insights.
  • Design, build, validate, and deploy Machine Learning models for prediction, classification, recommendation, and optimization.
  • Apply statistical analysis, feature engineering, and model evaluation techniques.
  • Work closely with business stakeholders to understand requirements and convert them into data science solutions.
  • Develop end-to-end ML pipelines including data preprocessing, model training, testing, and deployment.
  • Monitor model performance and retrain models as required.
  • Document assumptions, methodologies, and results clearly.
  • Collaborate with data engineers and software teams to integrate models into production systems.
  • Stay updated with the latest advancements in data science and machine learning.

 

Required Skills & Qualifications

 

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
  • 5+ years of hands-on experience in Data Science and Machine Learning.
  • Strong proficiency in Python (NumPy, Pandas, Scikit-learn).
  • Experience with ML algorithms:
      • Regression, Classification, Clustering
      • Decision Trees, Random Forest, Gradient Boosting
      • SVM, KNN, Naïve Bayes
  • Solid understanding of statistics, probability, and linear algebra.
  • Experience with data visualization tools (Matplotlib, Seaborn, Power BI, Tableau – preferred).
  • Experience working with SQL and relational databases.
  • Knowledge of model evaluation metrics and optimization techniques.
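The model-evaluation bullet above can be made concrete with standard confusion-matrix arithmetic; a stdlib-only sketch with toy binary labels (a real workflow would use scikit-learn's metrics module):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))
```

Choosing between precision and recall (and hence the decision threshold) is exactly the kind of business-driven trade-off the role description asks candidates to reason about.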


Preferred / Good to Have

 

  • Experience with Deep Learning frameworks (TensorFlow, PyTorch, Keras).
  • Exposure to NLP, Computer Vision, or Time Series forecasting.
  • Experience with big data technologies (Spark, Hadoop).
  • Familiarity with cloud platforms (AWS, Azure, GCP).
  • Experience with MLOps, CI/CD pipelines, and model deployment.

 

Soft Skills

 

  • Strong analytical and problem-solving abilities.
  • Excellent communication and stakeholder interaction skills.
  • Ability to work independently and in cross-functional teams.
  • Curiosity and willingness to learn new tools and techniques.


 

Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.



Read more