50+ Machine Learning (ML) Jobs in India
Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for clients ranging from startups to Fortune 500 companies. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where an idea is only as good as the returns it fetches for our clients.
Position Overview:
Senior backend engineering role focused on building and operating ML-backed backend systems powering a large-scale AI product. This is a core foundation/platform role with end-to-end system ownership in a fast-moving, ambiguous environment, within a high-intent foundation engineering pod of 10 engineers.
Key Responsibilities:
● Design, build, and operate ML-backed backend systems at scale
● Own runtime orchestration, session/state management, and retrieval/memory pipelines (chunking, embeddings, indexing, vector search, re-ranking, caching, freshness & deletion)
● Productionize ML workflows: feature/metadata services, model integration contracts, offline/online parity, and evaluation instrumentation
● Drive performance, reliability, and cost efficiency across latency, throughput, infra usage, and token economics
● Build observability-first systems with tracing, metrics, logs, guardrails, and fallback paths
● Partner closely with applied ML teams on prompt/tool schemas, routing, evaluation datasets, and safe releases
● Ship independently and own systems end-to-end
Required Skills:
● 6+ years of backend/platform engineering experience
● Strong experience building distributed, production-grade systems
● Hands-on exposure to ML-adjacent systems (serving, retrieval, orchestration, inference pipelines)
● Proven ownership of reliability, performance, and cost optimization in production
● Must be based in Mumbai or Bangalore
● Ability to work in-office (mandatory)
Preferred (Bonus) Skills:
● Experience with greenfield AI platform development
● Already based in Mumbai
● Experience working with US enterprise clients
● Foundation/platform engineering background
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development, with at least 2 years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Lead AI Engineer
Location: Bengaluru, Hybrid | Type: Full-time
About Newpage Solutions
Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organisations—which include pharmaceutical, biotech, and healthcare leaders—to build transformative AI and data-driven technologies addressing real-world health challenges.
From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.
We are proud to be a ‘Great Place to Work®’ certified company for the last three consecutive years. We also hold a top Glassdoor rating and are named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.
As an organisation, we foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.
Your Mission
We’re seeking a highly experienced, technically exceptional Lead AI Engineer to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation, from model selection and orchestration design to scalable backend implementation, all while collaborating with cross-functional teams to transform AI research into production-ready solutions.
This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.
What You’ll Do
Develop AI Applications & Agentic Systems
- Architect, build, and optimise production-grade Generative AI and agentic applications using frameworks such as LangChain, LangGraph, LlamaIndex, Semantic Kernel, n8n, Pydantic AI, or custom orchestration layers, integrating with LLMs such as GPT, Claude, and Gemini as well as self-hosted LLMs, along with MCP integrations.
- Implement Retrieval-Augmented Generation (RAG) techniques leveraging vector databases (Pinecone, ChromaDB, Weaviate, pgvector, etc.) and search engines such as Elasticsearch / Solr, using both TF-IDF/BM25-based full-text search and similarity-search techniques.
- Implement guardrails and observability, and fine-tune and train models for industry- or domain-specific use cases.
- Build multi-modal workflows using text, image, voice, and video.
- Design robust prompt & context engineering frameworks to improve accuracy, repeatability, quality, cost, and latency.
- Build supporting microservices and modular backends using Python, JavaScript, or Java, aligned with domain-driven design, SOLID principles, OOP, and clean architecture, using various databases (relational, document, key-value, graph) and event-driven systems (Kafka / MSK, SQS, etc.).
- Deploy cloud-native applications in hyper-scalers such as AWS / GCP / Azure using containerisation and orchestration with Docker / Kubernetes or serverless architecture.
- Apply industry best engineering practices: TDD, well-structured and clean code with linting, domain-driven design, security-first design (secrets management, rotation, SAST, DAST), comprehensive observability (structured logging, metrics, tracing), containerisation & orchestration (Docker, Kubernetes), automated CI/CD pipelines (GitHub Actions, Jenkins).
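The RAG responsibilities above reduce to three steps: embed text, rank stored chunks by similarity to the query, and return the top hits. The sketch below illustrates just that core loop and is entirely illustrative — a toy character-frequency vector stands in for a real embedding model, and the documents are invented:

```python
import math

def embed(text):
    # Toy embedding: a character-frequency vector. A real system would
    # call an embedding model; this stand-in keeps the sketch runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    # Rank stored chunks by similarity to the query embedding —
    # the job a vector database performs at scale.
    q = embed(query)
    scored = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:top_k]

docs = [
    "patient onboarding workflow for clinics",
    "gpu cluster autoscaling policies",
    "clinic appointment scheduling guide",
]
print(retrieve("how do clinics schedule appointments", docs, top_k=1))
```

A production pipeline replaces `embed` with a model call, the sorted list with an approximate-nearest-neighbour index, and typically adds a re-ranking stage over the top candidates.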
AI-Assisted Development, Context Engineering & Innovation
- Use AI-assisted development tools such as Claude Code, GitHub Copilot, Codex, Roo Code, Cursor to accelerate development while maintaining code quality and maintainability.
- Utilise coding assistant tools with native instructions, templates, guides, workflows, sub-agents, and more to create developer workflows that improve development velocity, standardisation, and reliability across AI teams.
- Ensure industry best practices to develop well-structured code that is testable, maintainable, performant, scalable, and secure.
- Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
- Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
- Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.
What You Bring
- 7–12 years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
- Experience with cloud-native deployments and services in AWS / GCP / Azure, with the ability to architect distributed systems.
- A ‘no-compromise’ attitude toward engineering best practices such as clean code, TDD, containerisation, security, CI/CD, scalability, performance, and cost optimisation.
- Active user of AI-assisted development tools (Claude Code, GitHub Copilot, Cursor) with demonstrable experience using structured workflows and sub-agents.
- A deep understanding of LLMs, context engineering approaches, and best practices, with the ability to optimise accuracy, latency, and cost.
- Python or JavaScript experience with strong grasp of OOP, SOLID principles, 12-factor application development, and scalable microservice architecture.
- Proven track record developing and deploying GenAI/LLM-based systems in production.
- Advanced understanding of context engineering, prompt construction, optimisation, and evaluation techniques.
- End-to-end implementation experience using vector databases and retrieval pipelines.
- Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
- Obsession with clean code, system scalability, and performance optimisation.
- Ability to balance rapid prototyping with long-term maintainability.
- Ability to work independently while collaborating effectively across teams.
- A habit of staying ahead of the curve on new AI models, frameworks, and best practices.
- A founder’s mindset and a love of solving ambiguous, high-impact technical challenges.
- Bachelor’s or Master’s degree in Computer Science, Machine Learning, or a related technical discipline.
Bonus Skills / Experience
- Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
- Experience building streaming + batch data ingestion and transformation pipelines (Spark / Airflow / Beam).
- Mobile and front-end web application development experience.
What We Offer
- A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
- Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
- Balance that lasts – We respect your time and support a healthy integration of work and life.
- Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
- Meaningful rewards – Competitive compensation that recognises both contribution and potential.
We are looking for an AI Engineer (Computer Vision) to design and deploy intelligent video analytics solutions using CCTV feeds. The role focuses on analyzing real-time and recorded video to extract insights such as attention levels, engagement, movement patterns, posture, and overall group behavior. You will work closely with data scientists, backend teams, and product managers to build scalable, privacy-aware AI systems.
Key Responsibilities
- Develop and deploy computer vision models for CCTV-based video analytics
- Analyze gaze, posture, facial expressions, movement, and crowd behavior
- Build real-time and batch video processing pipelines
- Train, fine-tune, and optimize deep learning models for production
- Convert visual signals into actionable insights & dashboards
- Ensure privacy, security, and ethical AI compliance
- Improve model accuracy, latency, and scalability
- Collaborate with engineering teams for end-to-end deployment
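The simplest signal such video pipelines start from is per-frame change between consecutive frames. The toy sketch below shows that idea with no dependencies — plain nested lists stand in for OpenCV/NumPy image arrays, and the thresholds are arbitrary illustrative values:

```python
def frame_diff_ratio(prev, curr, threshold=25):
    # Fraction of pixels whose intensity changed by more than
    # `threshold` between two grayscale frames — a crude motion signal.
    changed = 0
    total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > threshold:
                changed += 1
    return changed / total if total else 0.0

def detect_motion(frames, ratio_cutoff=0.1):
    # Label each consecutive frame pair as motion / no motion.
    events = []
    for i in range(1, len(frames)):
        r = frame_diff_ratio(frames[i - 1], frames[i])
        events.append(r > ratio_cutoff)
    return events

# Two identical 4x4 frames, then a frame where half the pixels change.
still = [[10] * 4 for _ in range(4)]
moved = [[10] * 4 for _ in range(2)] + [[200] * 4 for _ in range(2)]
print(detect_motion([still, still, moved]))  # → [False, True]
```

Real systems build on the same loop but read frames from RTSP/CCTV streams and feed detections into tracking, pose-estimation, and behaviour models rather than a simple ratio test.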
Required Skills
- Strong experience in Computer Vision & Deep Learning
- Proficiency in Python
- Hands-on experience with OpenCV, TensorFlow, PyTorch
- Knowledge of CNNs, object detection, tracking, pose estimation
- Experience with video analytics & CCTV data
- Understanding of model optimization and deployment
Good to Have
- Experience with real-time video streaming (RTSP, CCTV feeds)
- Familiarity with edge AI or GPU optimization
- Exposure to education analytics or surveillance systems
- Knowledge of cloud deployment (AWS/GCP/Azure)

is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Skills: Gen AI, Machine Learning models, AWS/Azure, Redshift, Python, Apache Airflow, DevOps; minimum 4–5 years of experience as an Architect; should be from a Data Engineering background.
• 8+ years of experience in data engineering, data science, or architecture roles.
• Experience designing enterprise-grade AI platforms.
• Certification in major cloud platforms (AWS/Azure/GCP).
• Experience with governance tooling (Collibra, Alation) and lineage systems
• Strong hands-on background in data engineering, analytics, or data science.
• Expertise in building data platforms using:
o Cloud: AWS (Glue, S3, Redshift), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow).
o Compute: Spark, Databricks, Flink.
o Data modelling: dimensional, relational, NoSQL, graph.
• Proficiency with Python, SQL, and data pipeline orchestration tools.
• Understanding of ML frameworks and tools: TensorFlow, PyTorch, Scikit-learn, MLflow, etc.
• Experience implementing MLOps, model deployment, monitoring, logging, and versioning.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in the AWS cloud, including in recent roles
- Company background: product companies preferred; exceptions made for service-company candidates with strong MLOps and AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable).
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google Form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
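One concrete form the data-drift monitoring responsibility can take is the Population Stability Index (PSI) between a training-time and a live feature distribution. The sketch below is a minimal, dependency-free version; the bin count and the 0.1/0.2 cutoffs are conventional heuristics, not fixed standards:

```python
import math

def psi(expected, actual, bins=4):
    # Population Stability Index between a training (expected) and a
    # live (actual) sample of one feature. As a common heuristic,
    # PSI > 0.2 is treated as significant drift.
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values at/above the training max

    def frac(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(values)
        # Small floor avoids log(0) for empty bins.
        return [max(c / n, 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]               # uniform on [0, 9.9]
live_ok = [0.1 * i + 0.01 for i in range(100)]      # nearly identical
live_drift = [5.0 + 0.05 * i for i in range(100)]   # shifted upward
print(psi(train, live_ok) < 0.1, psi(train, live_drift) > 0.2)
```

In practice a job like this runs on a schedule (e.g. an Airflow DAG), computes PSI per feature over a recent window, and pushes the values to CloudWatch/Grafana with alerts on the threshold.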
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models
- Develop features to improve model accuracy and outcomes
- Deploy models into production using Docker, Kubernetes, and cloud services
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets
- Hands-on experience with cloud AI/ML services
- Exposure to RAG architecture
Senior Machine Learning Engineer
About the Role
We are looking for a Senior Machine Learning Engineer who can take business problems, design appropriate machine learning solutions, and make them work reliably in production environments.
This role is ideal for someone who not only understands machine learning models, but also knows when and how ML should be applied, what trade-offs to make, and how to take ownership from problem understanding to production deployment.
Beyond technical skills, we need someone who can lead a team of ML Engineers, design end-to-end ML solutions, and clearly communicate decisions and outcomes to both engineering teams and business stakeholders. If you enjoy solving real problems, making pragmatic decisions, and owning outcomes from idea to deployment, this role is for you.
What You’ll Be Doing
Building and Deploying ML Models
- Design, build, evaluate, deploy, and monitor machine learning models for real production use cases.
- Take ownership of how a problem is approached, including deciding whether ML is the right solution and what type of ML approach fits the problem.
- Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
- Work with data engineers to design and validate data pipelines that feed ML systems.
- Optimize solutions for accuracy, performance, cost, and maintainability, not just model metrics.
Leading and Architecting ML Solutions
- Lead a team of ML Engineers, providing technical direction, mentorship, and review of ML approaches.
- Architect ML solutions that integrate seamlessly with business applications and existing systems.
- Ensure models and solutions are explainable, auditable, and aligned with business goals.
- Drive best practices in MLOps, including CI/CD, model monitoring, retraining strategies, and operational readiness.
- Set clear standards for how ML problems are framed, solved, and delivered within the team.
Collaborating and Communicating
- Work closely with business stakeholders to understand problem statements, constraints, and success criteria.
- Translate business problems into clear ML objectives, inputs, and expected outputs.
- Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML solutions into production systems.
- Present ML decisions, trade-offs, and outcomes to non-technical stakeholders in a simple and understandable way.
What We’re Looking For
Machine Learning Expertise
- Strong understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
- Experience choosing appropriate modeling approaches based on the problem, available data, and business constraints.
- Experience training, fine-tuning, and deploying ML and LLM models for real-world use cases.
- Proficiency in common ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.
Production and Cloud Deployment
- Hands-on experience deploying and running ML systems in production environments on AWS, GCP, or Azure.
- Good understanding of MLOps practices, including CI/CD for ML models, monitoring, and retraining workflows.
- Experience with Docker, Kubernetes, or serverless architectures is a plus.
- Ability to think beyond deployment and consider operational reliability and long-term maintenance.
Data Handling
- Strong programming skills in Python.
- Proficiency in SQL and working with large-scale datasets.
- Ability to reason about data quality, data limitations, and how they impact ML outcomes.
- Familiarity with distributed computing frameworks like Spark or Dask is a plus.
Leadership and Communication
- Ability to lead and mentor ML Engineers and work effectively across teams.
- Strong communication skills to explain ML concepts, decisions, and limitations to business teams.
- Comfortable taking ownership and making decisions in ambiguous problem spaces.
- Passion for staying updated with advancements in ML and AI, with a practical mindset toward adoption.
Experience Needed
- 6+ years of experience in machine learning engineering or related roles.
- Proven experience designing, selecting, and deploying ML solutions used in production.
- Experience managing ML systems after deployment, including monitoring and iteration.
- Proven track record of working in cross-functional teams and leading ML initiatives.
Job Description:
Exp Range - [6y to 10y]
Qualifications:
- Minimum Bachelor's degree in Engineering, Computer Applications, or AI/Data Science
- Experience at product companies/startups developing, validating, and productionizing AI models in recent projects within the last 3 years
- Prior experience with Python, NumPy, scikit-learn, Pandas, ETL/SQL, and BI tools in previous roles preferred
Require Skills:
- Must Have – Direct hands-on experience working in Python for scripting, automation, analysis, and orchestration
- Must Have – Experience working with ML libraries such as scikit-learn, TensorFlow, PyTorch, Pandas, NumPy, etc.
- Must Have – Experience working with models such as Random Forest, k-means clustering, BERT, etc.
- Should Have – Exposure to querying warehouses and APIs
- Should Have – Experience writing moderate to complex SQL queries
- Should Have – Experience analyzing and presenting data with BI tools or Excel
- Must Have – Very strong communication skills to work with technical and non-technical stakeholders in a global environment
Roles and Responsibilities:
- Work with Business stakeholders, Business Analysts, Data Analysts to understand various data flows and usage.
- Analyse and present insights about the data and processes to Business Stakeholders
- Validate and test appropriate AI/ML models based on the prioritization and insights developed while working with the Business Stakeholders
- Develop and deploy customized models on Production data sets to generate analytical insights and predictions
- Participate in cross-functional team meetings and provide estimates of work as well as progress on assigned tasks.
- Highlight risks and challenges to the relevant stakeholders so that work is delivered in a timely manner.
- Share knowledge and best practices with broader teams to make everyone aware and more productive.
We are seeking an experienced AI Architect to design, build, and scale production-ready AI voice conversation agents deployed locally (on-prem / edge / private cloud) and optimized for GPU-accelerated, high-throughput environments.
You will own the end-to-end architecture of real-time voice systems, including speech recognition, LLM orchestration, dialog management, speech synthesis, and low-latency streaming pipelines—designed for reliability, scalability, and cost efficiency.
This role is highly hands-on and strategic, bridging research, engineering, and production infrastructure.
Key Responsibilities
Architecture & System Design
- Design low-latency, real-time voice agent architectures for local/on-prem deployment
- Define scalable architectures for ASR → LLM → TTS pipelines
- Optimize systems for GPU utilization, concurrency, and throughput
- Architect fault-tolerant, production-grade voice systems (HA, monitoring, recovery)
Voice & Conversational AI
- Design and integrate:
- Automatic Speech Recognition (ASR)
- Natural Language Understanding / LLMs
- Dialogue management & conversation state
- Text-to-Speech (TTS)
- Build streaming voice pipelines with sub-second response times
- Enable multi-turn, interruptible, natural conversations
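Schematically, one conversational turn in such an ASR → LLM → TTS pipeline looks like the sketch below. All three stages are trivial stand-ins for real streaming models, and the names, replies, and frame format are invented for illustration:

```python
def asr(audio_chunks):
    # Stand-in ASR: each "chunk" is already a word. A real system
    # would stream partial transcripts from a speech model.
    for chunk in audio_chunks:
        yield chunk

def llm_reply(transcript):
    # Stand-in LLM: a canned intent-based reply instead of a model call.
    if "hours" in transcript:
        return "We are open nine to five."
    return "Could you repeat that?"

def tts(text):
    # Stand-in TTS: returns per-word "audio frames" as strings.
    return [f"<frame:{w}>" for w in text.split()]

def handle_turn(audio_chunks):
    # One conversational turn: consume streaming ASR output, call the
    # LLM once the utterance completes, stream synthesized frames back.
    transcript = " ".join(asr(audio_chunks))
    reply = llm_reply(transcript)
    return transcript, tts(reply)

transcript, frames = handle_turn(["what", "are", "your", "hours"])
print(transcript)   # → what are your hours
print(frames[0])    # → <frame:We>
```

The hard engineering lives in what this sketch elides: overlapping the three stages so TTS starts before the LLM finishes, detecting end-of-turn from audio rather than a fixed chunk list, and cancelling in-flight synthesis when the caller barges in.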
Model & Inference Engineering
- Deploy and optimize local LLMs and speech models (quantization, batching, caching)
- Select and fine-tune open-source models for voice use cases
- Implement efficient inference using TensorRT, ONNX, CUDA, vLLM, Triton, or similar
Infrastructure & Production
- Design GPU-based inference clusters (bare metal or Kubernetes)
- Implement autoscaling, load balancing, and GPU scheduling
- Establish monitoring, logging, and performance metrics for voice agents
- Ensure security, privacy, and data isolation for local deployments
Leadership & Collaboration
- Set architectural standards and best practices
- Mentor ML and platform engineers
- Collaborate with product, infra, and applied research teams
- Drive decisions from prototype → production → scale
Required Qualifications
Technical Skills
- 7+ years in software / ML systems engineering
- 3+ years designing production AI systems
- Strong experience with real-time voice or conversational AI systems
- Deep understanding of LLMs, ASR, and TTS pipelines
- Hands-on experience with GPU inference optimization
- Strong Python and/or C++ background
- Experience with Linux, Docker, Kubernetes
AI & ML Expertise
- Experience deploying open-source LLMs locally
- Knowledge of model optimization:
- Quantization
- Batching
- Streaming inference
- Familiarity with voice models (e.g., Whisper-like ASR, neural TTS)
Systems & Scaling
- Experience with high-QPS, low-latency systems
- Knowledge of distributed systems and microservices
- Understanding of edge or on-prem AI deployments
Preferred Qualifications
- Experience building AI voice agents or call automation systems
- Background in speech processing or audio ML
- Experience with telephony, WebRTC, SIP, or streaming audio
- Familiarity with Triton Inference Server / vLLM
- Prior experience as Tech Lead or Principal Engineer
What We Offer
- Opportunity to architect state-of-the-art AI voice systems
- Work on real-world, high-scale production deployments
- Competitive compensation and equity (if applicable)
- High ownership and technical influence
- Collaboration with top-tier AI and infrastructure talent
Company Description
VMax e-Solutions India Private Limited, based in Hyderabad, is a dynamic organization specializing in Open Source ERP Product Development and Mobility Solutions. As an ISO 9001:2015 and ISO 27001:2013 certified company, VMax is dedicated to delivering tailor-made and scalable products, with a strong focus on e-Governance projects across multiple states in India. The company's innovative technologies aim to solve real-life problems and enhance the daily services accessed by millions of citizens. With a culture of continuous learning and growth, VMax provides its team members opportunities to develop expertise, take ownership, and grow their careers through challenging and impactful work.
About the Role
We’re hiring a Senior Data Scientist with deep real-time voice AI experience and strong backend engineering skills.
You’ll own and scale our end-to-end voice agent pipeline that powers AI SDRs, customer support agents, and internal automation agents on calls. This is a hands-on, highly technical role where you’ll design and optimize low-latency, high-reliability voice systems.
You’ll work closely with our founders, product, and platform teams, with significant ownership over architecture and benchmarks.
What You’ll Do
1. Own the voice stack end-to-end – from telephony / WebRTC entrypoints to STT, turn-taking, LLM reasoning, and TTS back to the caller.
2. Design for real-time – architect and optimize streaming pipelines for sub-second latency, barge-in, interruptions, and graceful recovery on bad networks.
3. Integrate and tune models – evaluate, select, and integrate STT/TTS/LLM/VAD providers (and self-hosted models) for different use-cases, balancing quality, speed, and cost.
4. Build orchestration & tooling – implement agent orchestration logic, evaluation frameworks, call simulators, and dashboards for latency, quality, and reliability.
5. Harden for production – ensure high availability, observability, and robust fault-tolerance for thousands of concurrent calls in customer VPCs.
6. Shape the voice roadmap – influence how voice fits into our broader Agentic OS vision (simulation, analytics, multi-agent collaboration, etc.).
You’re a Great Fit If You Have
1. 6+ years of software engineering experience (backend or full-stack) in production systems.
2. Strong experience building real-time voice agents or similar systems using:
- STT / ASR (e.g. Whisper, Deepgram, AssemblyAI, AWS Transcribe, GCP Speech)
- TTS (e.g. ElevenLabs, PlayHT, AWS Polly, Azure Neural TTS)
- VAD / turn-taking and streaming audio pipelines
- LLMs (e.g. OpenAI, Anthropic, Gemini, local models)
3. Proven track record designing and operating low-latency, high-throughput streaming systems (WebRTC, gRPC, websockets, Kafka, etc.).
4. Hands-on experience integrating ML models into live, user-facing applications with real-time inference & monitoring.
5. Solid backend skills with Python and TypeScript/Node.js; strong fundamentals in distributed systems, concurrency, and performance optimization.
6. Experience with cloud infrastructure – especially AWS (EKS, ECS, Lambda, SQS/Kafka, API Gateway, load balancers).
7. Comfortable working in Kubernetes / Docker environments, including logging, metrics, and alerting.
8. Startup DNA – at least 2 years in an early or mid-stage startup where you shipped fast, owned outcomes, and worked close to the customer.
Nice to Have
1. Experience self-hosting AI models (ASR / TTS / LLMs) and optimizing them for latency, cost, and reliability.
2. Telephony integration experience (e.g. Twilio, Vonage, Aircall, SignalWire, or similar).
3. Experience with evaluation frameworks for conversational agents (call quality scoring, hallucination checks, compliance rules, etc.).
4. Background in speech processing, signal processing, or dialog systems.
5. Experience deploying into enterprise VPC / on-prem environments and working with security/compliance constraints.
Full-Stack Machine Learning Engineer
Role: Full-Time, Long-Term
Required: Python
Preferred: C++
OVERVIEW
We are seeking a versatile ML engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build sophisticated production systems and grow with a small, focused team. You will work across the entire stack—from data ingestion and feature engineering through model training, validation, and deployment.
The ideal candidate combines strong software engineering fundamentals with deep ML expertise, particularly in time series forecasting and quantitative applications. You should be comfortable operating independently, making architectural decisions, and owning systems end-to-end.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency writing clean, production-grade code—not just notebooks. Deep understanding of NumPy, Pandas, and their performance characteristics. You know when to use vectorized operations, understand memory management for large datasets, and can profile and optimize bottlenecks. Experience with async programming and multiprocessing is valuable.
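As a rough illustration of the vectorization point above, here is a minimal sketch (the array contents and function names are made up for the example) comparing a per-element Python loop with the equivalent NumPy expression:

```python
import numpy as np

def zscore_loop(xs):
    # Per-element Python loop: each operation goes through the interpreter.
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return [(x - m) / var ** 0.5 for x in xs]

def zscore_vectorized(a):
    # Same computation as whole-array NumPy reductions: each pass runs in C,
    # with no per-element Python overhead.
    return (a - a.mean()) / a.std()

a = np.array([1.0, 2.0, 3.0, 4.0])
out = zscore_vectorized(a)
```

Profiling (e.g. with `timeit`) on arrays of realistic size is what tells you whether a given loop is actually a bottleneck worth vectorizing.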
Machine Learning (Required): Hands-on experience building and deploying ML systems in production. This goes beyond training models—you understand the full lifecycle: data validation, feature engineering, model selection, hyperparameter optimization, validation strategies, monitoring, and maintenance.
Specific experience we value: gradient boosting frameworks (LightGBM, XGBoost, CatBoost), time series forecasting, probabilistic prediction and uncertainty quantification, feature selection and dimensionality reduction, cross-validation strategies for non-IID data, model calibration.
You should understand overfitting deeply—not just as a concept but as something you actively defend against through proper validation, regularization, and architectural choices.
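One concrete defense against the temporal leakage described above is an expanding-window (walk-forward) split, where every test block strictly follows its training block; random K-fold on time-ordered data would leak the future into training and overstate accuracy. A minimal sketch, with illustrative fold counts and sizes:

```python
def walk_forward_splits(n, n_folds, min_train):
    # Yields (train_idx, test_idx) pairs over n time-ordered samples.
    # Each training window expands; each test block follows it in time.
    fold = (n - min_train) // n_folds
    for k in range(n_folds):
        end_train = min_train + k * fold
        yield list(range(end_train)), list(range(end_train, end_train + fold))

splits = list(walk_forward_splits(n=10, n_folds=2, min_train=4))
```

Library equivalents exist (e.g. scikit-learn's `TimeSeriesSplit`); the point is that every train index precedes every test index within a fold.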
Data Pipelines (Required): Design and implement robust pipelines handling real-world messiness: missing data, late arrivals, schema changes, upstream failures. You understand idempotency, exactly-once semantics, and backfill strategies. Experience with workflow orchestration (Airflow, Prefect, Dagster) expected. Comfortable with ETL/ELT patterns, incremental vs full recomputation, data quality monitoring, database design and query optimization (PostgreSQL preferred), time series data at scale.
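The idempotency requirement above can be sketched as a delete-then-insert load keyed by partition; the dict-backed store here is a stand-in for a real warehouse table, and the record schema is hypothetical:

```python
# Stand-in for a warehouse table keyed by (partition_date, entity_id).
store = {}

def load_partition(date, rows):
    # Delete-then-insert by partition key: re-running a backfill for the
    # same date replaces the partition instead of duplicating rows.
    for key in [k for k in store if k[0] == date]:
        del store[key]
    for r in rows:
        store[(date, r["id"])] = r

load_partition("2024-01-01", [{"id": 1, "v": 10}])
load_partition("2024-01-01", [{"id": 1, "v": 10}])  # replay: no duplicates
```

The same pattern (partition overwrite, or a keyed upsert) is what makes backfills and retries safe in orchestrated pipelines.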
C++ (Preferred): Experience valuable for performance-critical components. Writing efficient C++ and interfacing with Python (pybind11, Cython) is a significant advantage.
HIGHLY DESIRABLE: MULTI-AGENT ORCHESTRATION
We are building systems leveraging LLM-based automation. Experience with multi-agent frameworks highly desirable: LangChain, LangGraph, or similar agent frameworks; designing reliable AI pipelines with error handling and fallbacks; prompt engineering and output parsing; managing context and state across agent interactions. You do not need to be an expert, but genuine interest and hands-on experience will set you apart.
DOMAIN EXPERIENCE: FINANCIAL DATA AND CRYPTO
Preference for candidates with experience in quantitative finance, algorithmic trading, or fintech; cryptocurrency markets and their unique characteristics; financial time series data and forecasting systems; market microstructure, volatility, and regime dynamics. This helps you understand why reproducibility is non-negotiable, why validation must account for temporal structure, and why production reliability cannot be an afterthought.
ENGINEERING STANDARDS
Code Quality: Readable, maintainable code others can modify. Proper version control (meaningful commits, branches, code review). Testing where appropriate. Documentation: docstrings, READMEs, decision records.
Production Mindset: Think about failure modes before they happen. Build in observability: logging, metrics, alerting. Design for reproducibility—same inputs produce same outputs.
Systems Thinking: Consider component interactions, not just isolated behavior. Understand tradeoffs: speed vs accuracy, flexibility vs simplicity. Zoom between architecture and implementation.
WHAT WE ARE LOOKING FOR
Self-Direction: Given a problem and context, you break it down, identify the path forward, and execute. You ask questions when genuinely blocked, not when you could find the answer yourself.
Long-Term Orientation: You think in years, not months. You make decisions considering future maintainability.
Intellectual Honesty: You acknowledge uncertainty and distinguish between what you know versus guess. When something fails, you dig into why.
Communication: You explain complex concepts clearly and document your reasoning.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Physics, Engineering. Equivalent demonstrated expertise through work also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a production ML system you built, (3) Links to relevant work if available, (4) Availability and timezone.
About Us
Mobileum is a leading provider of Telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence.
More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs.
Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore, and the UK, with a global headcount of more than 1,800.
Join Mobileum Team
At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!
Role: GenAI/LLM Engineer – Domain-Specific AI Solutions (Telecom)
About the Job
We are seeking a highly skilled GenAI/LLM Engineer to design, fine-tune, and operationalize Large Language Models (LLMs) for telecom business applications. This role will be instrumental in building domain-specific GenAI solutions, including the development of domain-specific LLMs, to transform telecom operational processes, customer interactions, and internal decision-making workflows.
Roles & Responsibility:
- Build domain-specific LLMs by curating domain-relevant datasets and training/fine-tuning LLMs tailored for telecom use cases.
- Fine-tune pre-trained LLMs (e.g., GPT, Llama, Mistral) using telecom-specific datasets to improve task accuracy and relevance.
- Design and implement prompt engineering frameworks, optimize prompt construction and context strategies for telco-specific queries and processes.
- Develop Retrieval-Augmented Generation (RAG) pipelines integrated with vector databases (e.g., FAISS, Pinecone) to enhance LLM performance on internal knowledge.
- Build multi-agent LLM pipelines using orchestration tools (LangChain, LlamaIndex) to support complex telecom workflows.
- Collaborate cross-functionally with data engineers, product teams, and domain experts to translate telecom business logic into GenAI workflows.
- Conduct systematic model evaluation focused on minimizing hallucinations, improving domain-specific accuracy, and tracking performance improvements on business KPIs.
- Contribute to the development of internal reusable GenAI modules, coding standards, and best practices documentation.
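For illustration, the retrieval step of such a RAG pipeline can be sketched as follows; plain NumPy cosine similarity stands in for a real vector database (FAISS, Pinecone), and the documents and embeddings are invented for the example:

```python
import numpy as np

# Toy corpus of telecom support snippets with made-up 2-D embeddings;
# a real pipeline would embed text with a sentence-embedding model.
docs = ["roaming charge dispute", "network outage log", "billing cycle FAQ"]
doc_vecs = np.array([[1.0, 0.1], [0.0, 1.0], [0.9, 0.3]])

def retrieve(query_vec, k=2):
    # Cosine similarity between the query and every document vector,
    # then take the top-k documents as context for the LLM prompt.
    v = np.asarray(query_vec, dtype=float)
    sims = doc_vecs @ v / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(v))
    top = np.argsort(-sims)[:k]
    return [docs[i] for i in top]

hits = retrieve([1.0, 0.2])
```

A vector database replaces the brute-force dot product with an approximate nearest-neighbor index so the same lookup stays fast at millions of documents.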
Desired Profile
- Familiarity with multi-modal LLMs (text + tabular/time-series).
- Experience with OpenAI function calling, LangGraph, or agent-based orchestration.
- Exposure to telecom datasets (e.g., call records, customer tickets, network logs).
- Experience with low-latency inference optimization (e.g., quantization, distillation).
Technical skills
- Hands-on experience in fine-tuning transformer models, prompt engineering, and RAG architecture design.
- Experience delivering production-ready AI solutions in enterprise environments; telecom exposure is a plus.
- Advanced knowledge of transformer architectures, fine-tuning techniques (LoRA, PEFT, adapters), and transfer learning.
- Proficiency in Python, with significant experience using PyTorch, Hugging Face Transformers, and related NLP libraries.
- Practical expertise in prompt engineering, RAG pipelines, and LLM orchestration tools (LangChain, LlamaIndex).
- Ability to build domain-adapted LLMs, from data preparation to final model deployment.
Work Experience
7+ years of professional experience in AI/ML, with at least 2 years of practical exposure to LLMs or GenAI deployments.
Educational Qualification
- Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, or a related field.
- Ph.D. preferred for foundational model work and advanced research focus.
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems.
- Excellent problem-solving, communication, and collaboration skills.
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
- Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
Machine Learning Engineer | 3+ Years | Mumbai (Onsite)
Location: Ghansoli, Mumbai
Work Mode: Onsite | 5 days working
Notice Period: Immediate to 30 Days preferred
About the Role
We are hiring a Machine Learning Engineer with 3+ years of experience to build and deploy prediction, classification, and recommendation models. You’ll work on end-to-end ML pipelines and production-grade AI systems.
Must-Have Skills
- 3+ years of hands-on ML experience
- Strong Python (Pandas, NumPy, Scikit-learn, TensorFlow / PyTorch)
- Experience with feature engineering, model training & evaluation
- Hands-on with Azure ML / Azure Storage / Azure Functions
- Knowledge of modern AI concepts (embeddings, transformers, LLMs)
Good to Have
- MLOps tools (MLflow, Docker, CI/CD)
- Time-series forecasting
- Model serving using FastAPI
Why Join Us?
- Work on real-world ML use cases
- Exposure to modern AI & LLM-based systems
- Collaborative engineering environment
- High ownership & learning opportunities
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of 2 use cases out of: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred: candidates from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
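A minimal sketch of one common drift signal for the monitoring responsibility above is the Population Stability Index over binned feature distributions; the bin counts and the 0.2 alert threshold here are illustrative conventions, not requirements of the role:

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    # PSI = sum over bins of (p_actual - p_expected) * ln(p_actual / p_expected),
    # comparing a live window against the training-time distribution.
    e_tot, a_tot = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        pe = max(e / e_tot, eps)  # eps guards empty bins
        pa = max(a / a_tot, eps)
        total += (pa - pe) * math.log(pa / pe)
    return total

drifted = psi([50, 30, 20], [20, 30, 50]) > 0.2  # distribution flipped
stable = psi([50, 30, 20], [51, 29, 20]) > 0.2   # nearly unchanged
```

In production the same check runs per feature and per prediction score, feeding the alerting guardrails around an A/B test.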
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
🎯 About Us
Stupa builds cutting-edge AI for real-time sports intelligence: automated commentary, player tracking, non-contact biomechanics, ball trajectory, LED graphics, and broadcast-grade stats. Your models will be seen live by millions across global events.
🌍 Global Travel
Work that literally travels the world. You’ll deploy systems at international tournaments across Asia, Europe, and the Middle East, working inside world-class stadiums, courts, and TV production rooms.
✨ What You’ll Build
- AI Language Products
- Automated live commentary (LLM + ASR + OCR), real-time subtitles, AI storytelling.
- Non-Contact Measurement (CV + Tracking + Pose Estimation)
- Player velocity, footwork, acceleration, shot recognition, 2D/3D reconstruction, real-time edge inference.
- End-to-End Streaming Pipelines
- Temporal segmentation, multi-modal fusion, low-latency edge + cloud deployment.
🧠 What You’ll Do
Train and optimise ML/CV/NLP models for live sports, build tracking & pose pipelines, create LLM/ASR-based commentary systems, deploy on edge/cloud, ship rapid POCs→production, manage datasets & accuracy, and collaborate with product, engineering, and broadcast teams.
🧩 Requirements
Core Skills:
- Strong ML fundamentals (NLP/CV/multimodal)
- PyTorch/TensorFlow, transformers, ASR or pose estimation
- Data pipelines, optimisation, evaluation
- Deployment (Docker, ONNX, TensorRT, FastAPI, K8s, edge GPU)
- Strong Python engineering
Bonus: Sports analytics, LLM fine-tuning, low-latency optimisation, prior production ML systems.
🌟 Why Join Us
- Your models go LIVE in global sports broadcasts
- International travel for tournaments
- High ownership, zero bureaucracy
- Build India’s most advanced AI × Sports product
- Cool, futuristic problems + freedom to innovate
- Up to ₹40LPA for exceptional talent
🔥 You Belong Here If You…
Build what the world hasn’t seen • Want impact on live sports • Thrive in fast-paced ownership-driven environments.
Job Description: Applied Scientist
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
About the Role
We are seeking a highly motivated Applied Scientist to join our Data Science team. This individual will play a key role in enhancing and scaling our existing forecasting and pricing systems and developing new capabilities that support our intelligent decision-making platform.
We are looking for team members who:
● Are deeply curious and passionate about applying machine learning to real-world problems.
● Demonstrate strong ownership and the ability to work independently.
● Excel in both technical execution and collaborative teamwork.
● Have a track record of shipping products in complex environments.
What You’ll Do
● Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization.
● Work with large-scale, noisy, and temporally complex datasets.
● Collaborate cross-functionally with engineering and product teams to move models from research to production.
● Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
● Contribute to the development of an AI-first platform that redefines hospitality revenue management.
Required Qualifications
● Master’s degree or PhD in Operations Research, Industrial/Systems Engineering, Computer Science, Applied Mathematics, or a related field.
● 3-5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.
● Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
● Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
● Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
● Understanding of machine learning and deep learning foundations.
● Ability to translate research into commercial solutions.
● Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
● Ability to work independently and manage projects end-to-end.
Preferred Experience
● Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
● Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
● Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
● Proven experience in collaborative product development environments, working closely with engineering and product teams.
Why LodgIQ?
● Join a fast-growing, mission-driven company transforming the future of hospitality.
● Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
● Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
● Competitive salary and performance bonuses.
● For more information, visit https://www.lodgiq.com
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development, and product creation, along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - 8 hours window between the 7:30 PM IST - 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for experienced analysts with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science
Artificial Intelligence Research Intern
We are looking for a passionate and skilled AI Intern to join our dynamic team for a 6-month full-time internship. This is an excellent opportunity to work on cutting-edge technologies in Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing (NLP), contributing to real-world projects that create a tangible impact.
Key Responsibilities:
• Research, design, develop, and implement AI and Deep Learning algorithms.
• Work on NLP systems and models for tasks such as text classification, sentiment analysis, and data extraction.
• Evaluate and optimize machine learning and deep learning models.
• Collect, process, and analyze large-scale datasets.
• Use advanced techniques for text representation and classification.
• Write clean, efficient, and testable code for production-ready applications.
• Perform web scraping and data extraction using Python (requests, BeautifulSoup, Selenium, APIs, etc.).
• Collaborate with cross-functional teams and clearly communicate technical concepts to both technical and non-technical audiences.
Required Skills and Experience:
• Theoretical and practical knowledge of AI, ML, and DL concepts.
• Good understanding of Python and libraries such as TensorFlow, PyTorch, Keras, scikit-learn, NumPy, Pandas, SciPy, and Matplotlib, plus NLP tools like NLTK, spaCy, etc.
• Strong understanding of Neural Network Architectures (CNNs, RNNs, LSTMs).
• Familiarity with data structures, data modeling, and software architecture.
• Understanding of text representation techniques (n-grams, BoW, TF-IDF, etc.).
• Comfortable working in Linux/UNIX environments.
• Basic knowledge of HTML, JavaScript, HTTP, and Networking.
• Strong communication skills and a collaborative mindset.
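As a toy illustration of one of the text representation techniques listed above, here is bare TF-IDF on a three-document corpus; production work would use a library implementation such as scikit-learn's TfidfVectorizer, whose smoothing differs from this raw formula:

```python
import math

# Tiny tokenized corpus, invented for the example.
docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]

def tf_idf(term, doc, corpus):
    # Term frequency within the document, times log inverse document
    # frequency across the corpus: rare terms get higher weight.
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df)

w_cat = tf_idf("cat", docs[0], docs)  # appears in 2 of 3 docs: lower weight
w_sat = tf_idf("sat", docs[0], docs)  # appears in 1 of 3 docs: higher weight
```

Stacking these weights over a fixed vocabulary yields the sparse document vectors used by classifiers downstream.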
Job Type: Full-Time Internship
Location: In-Office (Bhayander)
ML Intern
Hyperworks Imaging is a cutting-edge technology company based out of Bengaluru, India since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real world problems. We are rapidly growing, working with multiple companies around the world.
JOB OVERVIEW
We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future.
The ideal candidate will work on a complete ML pipeline, from data extraction, transformation, and analysis to developing novel ML algorithms. The candidate will implement the latest research papers and work closely with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.
RESPONSIBILITIES:
- Create AI agents using Model Context Protocols (MCPs), Claude Code, DSPy, etc.
- Develop custom evals for AI agents.
- Build and maintain ML pipelines
- Optimize and evaluate ML models to ensure accuracy and performance.
- Define system requirements and integrate ML algorithms into cloud based workflows.
- Write clean, well-documented, and maintainable code following best practices
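A custom eval for an AI agent can start as simply as a scored case set. The sketch below uses exact-match scoring and a hypothetical `toy_agent` stand-in for a real agent call; real evals usually add fuzzier graders.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalCase:
    prompt: str
    expected: str

def run_eval(agent: Callable[[str], str], cases: List[EvalCase]) -> float:
    """Score an agent by exact-match accuracy over a fixed case set."""
    passed = sum(1 for c in cases if agent(c.prompt).strip() == c.expected)
    return passed / len(cases)

# `toy_agent` is a stand-in for a real agent invocation (e.g. an MCP tool loop).
def toy_agent(prompt: str) -> str:
    return "4" if prompt == "2+2?" else "unknown"

cases = [EvalCase("2+2?", "4"), EvalCase("capital of France?", "Paris")]
print(run_eval(toy_agent, cases))  # 0.5
```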
REQUIREMENTS:
- 1-3+ years of experience in data science, machine learning, or a similar role.
- Demonstrated expertise with Python, PyTorch, and TensorFlow.
- Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Physics
- Coursework in Linear Algebra, Probability, Image Processing, Deep Learning, and Machine Learning.
- Demonstrated experience with Model Context Protocols (MCPs), DSPy, AI agents, MLOps, etc.
WHO CAN APPLY:
Only candidates who meet the following criteria will be considered:
- have relevant skills and interests
- can commit full time
- can show prior work and deployed projects
- can start immediately
Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.
SALARY DETAILS: Commensurate with experience.
JOINING DATE: Immediate
JOB TYPE: Full-time
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
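Drift detection of the kind listed above is often implemented with the Population Stability Index (PSI). A minimal sketch, assuming quantile bucketing on the baseline sample and the common rule-of-thumb threshold of 0.2 for significant drift:

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a baseline and a live sample.

    Values are bucketed on quantiles of the baseline; PSI > 0.2 is a
    common rule-of-thumb threshold for significant data drift.
    """
    expected, actual = sorted(expected), list(actual)
    # Quantile cut points derived from the baseline distribution.
    cuts = [expected[int(len(expected) * i / bins)] for i in range(1, bins)]

    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > c for c in cuts)  # which bucket v falls into
            counts[idx] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]
same = psi(baseline, baseline)                        # ~0: no drift
shifted = psi(baseline, [v + 0.5 for v in baseline])  # large: drift
print(same < 0.01 < shifted)  # True
```

In practice the PSI value would be emitted as a metric to CloudWatch or Prometheus and alerted on in Grafana.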
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Strong Senior Data Scientist (AI/ML/GenAI) Profile
Mandatory (Experience 1) – Must have a minimum of 5 years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
Mandatory (Experience 2) – Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
Mandatory (Experience 3) – Must have 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
Mandatory (Experience 4) – Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Mandatory (Note): Budget for up to 5 years of experience is 25 lakhs, up to 7 years is 35 lakhs, and up to 12 years is 45 lakhs; also, the client can pay a maximum 30-40% hike based on candidature
This opportunity through ClanX is for Parspec (direct payroll with Parspec)
Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.
Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.
Company Details:
Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.
Requirements:
- Bachelor’s or Master’s degree in Science or Engineering.
- 5-7 years of experience in ML and data science.
- Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
- Strong understanding of foundational models and transformers.
- Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
- Recent experience working with large datasets.
- Python experience with ML libraries like numpy, pandas, sklearn, matplotlib, nltk and others.
- Experience with frameworks like Hugging Face, spaCy, BERT, TensorFlow, PyTorch, OpenRouter or Modal.
Good to haves
- Experience building scalable AI pipelines for extracting structured data from unstructured sources.
- Experience with cloud platforms, containerization, and managed AI services.
- Knowledge of MLOps practices, CI/CD, monitoring, and governance.
- Experience with AWS or Django.
- Familiarity with databases and web application architecture.
- Experience with OCR or PDF tools.
Responsibilities:
- Design, develop, and deploy NLP, CV, and recommendation systems
- Train and implement deep learning models
- Research and explore novel ML architectures
- Build and maintain end-to-end ML pipelines
- Collaborate across product, design, and engineering teams
- Work closely with business stakeholders to shape product features
- Ensure high scalability and performance of AI solutions
- Uphold best practices in engineering and contribute to a culture of excellence
- Actively participate in R&D and innovation within the team
Interview Process
- Technical interview (coding, ML concepts, project walkthrough)
- System design and architecture round
- Culture fit and leadership interaction
- Final offer discussion
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable).
- Are you okay with an F2F round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods through deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period: 0 to 15 days only
Hybrid work mode: 3 days office, 2 days at home
We are looking for a Senior AI / ML Engineer to join our fast-growing team and help build AI-driven data platforms and intelligent solutions. If you are passionate about AI, data engineering, and building real-world GenAI systems, this role is for you!
🔧 Key Responsibilities
• Develop and deploy AI/ML models for real-world applications
• Build scalable pipelines for data processing, training, and evaluation
• Work on LLMs, RAG, embeddings, and agent workflows
• Collaborate with data engineers, product teams, and software developers
• Write clean, efficient Python code and ensure high-quality engineering practices
• Handle model monitoring, performance tuning, and documentation
Required Skills
• 2–5 years of experience in AI/ML engineering
• Strong knowledge of Python, TensorFlow/PyTorch
• Experience with LLMs, GenAI, RAG, or NLP
• Knowledge of Databricks, MLOps or cloud platforms (AWS/Azure/GCP)
• Good understanding of APIs, distributed systems, and data pipelines
🎯 Good to Have
• Experience in healthcare, SaaS, or big data
• Exposure to Databricks Mosaic AI
• Experience building AI agents
Role Overview
Join our core tech team to build the intelligence layer of Clink's platform. You'll architect AI agents, design prompts, build ML models, and create systems powering personalized offers for thousands of restaurants. High-growth opportunity working directly with founders, owning critical features from day one.
Why Clink?
Clink revolutionizes restaurant loyalty using AI-powered offer generation and customer analytics:
- ML-driven customer behavior analysis (Pattern detection)
- Personalized offers via LLMs and custom AI agents
- ROI prediction and forecasting models
- Instagram marketing rewards integration
Tech Stack:
- Python
- FastAPI
- PostgreSQL
- Redis
- Docker
- LLMs
You Will Work On:
AI Agents: Design and optimize AI agents
ML Models: Build redemption prediction, customer segmentation, ROI forecasting
Data & Analytics: Analyze data, build behavior pattern pipelines, create product bundling matrices
System Design: Architect scalable async AI pipelines, design feedback loops, implement A/B testing
Experimentation: Test different LLM approaches, explore hybrid LLM+ML architectures, prototype new capabilities
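The A/B testing mentioned above typically reduces to a two-proportion z-test on redemption rates. A minimal sketch, where the function name and the example numbers are illustrative:

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment on offer redemption.

    Returns the z statistic; |z| > 1.96 corresponds to p < 0.05
    (two-sided) under the normal approximation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B redeems at 12% vs 10% for A, over 5,000 users each.
z = ab_z_test(500, 5000, 600, 5000)
print(abs(z) > 1.96)  # True: the lift is significant at this sample size
```

Libraries such as statsmodels provide equivalent tests with exact p-values.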
Must-Have Skills
Technical: 0-2 years AI/ML experience (projects/internships count), strong Python, LLM API knowledge, ML fundamentals (supervised learning, clustering), Pandas/NumPy proficiency
Mindset: Extreme curiosity, logical problem-solving, builder mentality (side projects/hackathons), ownership mindset
Nice to Have: Pydantic, FastAPI, statistical forecasting, PostgreSQL/SQL, scikit-learn, food-tech/loyalty domain interest
We are looking for enthusiastic engineers passionate about building and maintaining solutioning platform components on cloud and Kubernetes infrastructure. The ideal candidate will go beyond traditional SRE responsibilities by collaborating with stakeholders, understanding the applications hosted on the platform, and designing automation solutions that enhance platform efficiency, reliability, and value.
[Technology and Sub-technology]
• ML Engineering / Modelling
• Python Programming
• GPU frameworks: TensorFlow, Keras, PyTorch, etc.
• Cloud-based ML development and deployment on AWS or Azure
[Qualifications]
• Bachelor’s Degree in Computer Science, Computer Engineering or equivalent technical degree
• Proficient programming knowledge in Python or Java and the ability to read and explain an open-source codebase.
• Good foundation of Operating Systems, Networking and Security Principles
• Exposure to DevOps tools, with experience integrating platform components into Sagemaker/ECR and AWS Cloud environments.
• 4-6 years of relevant experience working on AI/ML projects
[Primary Skills]:
• Excellent analytical & problem solving skills.
• Exposure to Machine Learning and GenAI technologies.
• Understanding and hands-on experience with AI/ML modeling, libraries, frameworks, and tools (TensorFlow, Keras, PyTorch, etc.)
• Strong knowledge of Python, SQL/NoSQL
• Cloud-based ML development and deployment on AWS or Azure
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
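The retrieval step of a RAG pipeline can be sketched as embedding plus similarity ranking. Here `embed` is a hypothetical stand-in for a real embedding model (e.g. one backing a Weaviate or PGVector index), used only to keep the example runnable:

```python
import math

def embed(text: str) -> list:
    """Hypothetical stand-in for a real embedding model: a crude
    bag-of-characters vector, just to make the retrieval step runnable."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    """Core RAG retrieval step: rank chunks by similarity to the query.
    The top-k chunks would then be placed into the LLM prompt."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = ["refund policy: 30 days", "shipping takes 5 days", "careers page"]
print(retrieve("what is the refund policy?", chunks, k=1))
# ['refund policy: 30 days']
```

A production pipeline would replace `embed` with a real model, store vectors in an index, and add chunking, re-ranking, and freshness handling.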
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Job Title: Sensor Expert – MLFF (Multi-Lane Free Flow)
Engagement Type: Consultant / External Associate
Organization: Bosch - MPIN
Location: Bangalore, India
Purpose of the Role:
To provide technical expertise in sensing technologies for MLFF (Multi-Lane Free Flow) and ITMS (Intelligent Traffic Management System) solutions. The role focuses on camera systems, AI/ML-based computer vision, and multi-sensor integration (camera, RFID, radar) to drive solution performance, optimization, and business success.
Key Responsibilities:
• Lead end-to-end sensor integration for MLFF and ITMS platforms.
• Manage camera systems, ANPR, and data packet processing.
• Apply AI/ML techniques for performance optimization in computer vision.
• Collaborate with System Integrators and internal teams on architecture and implementation.
• Support B2G proposals (smart city, mining, infrastructure projects) with domain expertise.
• Drive continuous improvement in deployed MLFF solutions.
Key Competencies:
• Deep understanding of camera and sensor technologies, AI/ML for vision systems, and system integration.
• Experience in PoC development and solution optimization.
• Strong analytical, problem-solving, and collaboration skills.
• Familiarity with B2G environments and public infrastructure tenders preferred.
Qualification & Experience:
• Bachelor’s/Master’s in Electronics, Electrical, or Computer Science.
• 8–10 years of experience in camera technology, AI/ML, and sensor integration.
• Proven track record in system design, implementation, and field optimization.
Role: Azure AI Tech Lead
Experience: 3.5–7 Years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period: 0 to 15 days only
- Hybrid work mode: 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: 2 technical rounds + 1 client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods through deep learning, including the use of LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
AI-based systems design and development: the entire pipeline from image/video ingest, metadata ingest, processing, and encoding through transmission.
Implementation and testing of advanced computer vision algorithms.
Dataset search, preparation, annotation, training, testing, and fine-tuning of vision CNN models. Multimodal AI, LLMs, hardware deployment, explainability.
Detailed analysis of results. Documentation, version control, client support, upgrades.
Experience- 6 to 8 years
Location- Bangalore
Job Description-
- Extensive experience with machine learning utilizing the latest analytical models in Python. (i.e., experience in generating data-driven insights that play a key role in rapid decision-making and driving business outcomes.)
- Extensive experience using Tableau, table design, PowerApps, Power BI, Power Automate, and cloud environments, or equivalent experience designing/implementing data analysis pipelines and visualization.
- Extensive experience using AI agent platforms. (AI = data analysis: a required skill for data analysts.)
• A statistics major or an equivalent ability to interpret statistical analysis results.
At iDreamCareer, we’re on a mission to democratize career guidance for millions of young learners across India and beyond. Technology is at the heart of this mission — and we’re looking for an Engineering Manager who thrives in high-ownership environments, thinks with an enterprising mindset, and gets excited about solving problems that genuinely change lives.
This is not just a management role. It’s a chance to shape the product, scale the platform, influence the engineering culture, and lead a team that builds with heart and hustle.
As a Director of Engineering here, you will:
- Lead a talented team of engineers while remaining hands-on with architecture and development.
- Champion the use of AI/ML, LLM-driven features, and intelligent systems to elevate learner experience.
- Inspire a culture of high performance, clear thinking, and thoughtful engineering.
- Partner closely with product, design, and content teams to deliver delightful, meaningful user experiences.
- Bring structure, clarity, and energy to complex problem-solving.
- This role is ideal for someone who loves building, mentoring, scaling, and thinking several steps ahead.
Key Responsibilities
Technical Leadership & Ownership
- Lead end-to-end development across backend, frontend, architecture, and infrastructure in partnership with product and design teams.
- Stay hands-on with the MERN stack, Python, and AI/ML technologies, while guiding and coaching a high-performance engineering team.
- Architect, develop, and maintain distributed microservices, event-driven systems, and robust APIs on AWS.
AI/ML Engineering
- Build and deploy AI-powered features, leveraging LLMs, RAG pipelines, embeddings, vector databases, and model evaluation frameworks.
- Drive prompt engineering, retrieval optimization, and continuous refinement of AI system performance.
- Champion the adoption of modern AI coding tools and emerging AI platforms to boost team productivity.
Cloud, Data, DevOps & Scaling
- Own deployments and auto-scaling on AWS (ECS, Lambda, CloudFront, SQS, SES, ELB, S3).
- Build and optimize real-time and batch data pipelines using BigQuery and other analytics tools.
- Implement CI/CD pipelines for Dockerized applications, ensuring strong observability through Prometheus, Loki, Grafana, CloudWatch.
- Enforce best practices around security, code quality, testing, and system performance.
Collaboration & Delivery Excellence
- Partner closely with product managers, designers, and QA to deliver features with clarity, speed, and reliability.
- Drive agile rituals, ensure engineering predictability, and foster a culture of ownership, innovation, and continuous improvement
Required Skills & Experience
- 8-15 years of experience in full-stack or backend engineering, with at least 5 years leading engineering teams.
- Strong hands-on expertise in the MERN stack and modern JavaScript/TypeScript ecosystems.
- 5+ years building and scaling production-grade applications and distributed systems.
- 2+ years building and deploying AI/ML products — including training, tuning, integrating, and monitoring AI models in production.
- Practical experience with SQL, NoSQL, vector databases, embeddings, and production-grade RAG systems.
- Strong understanding of LLM prompt optimization, evaluation frameworks, and AI-driven system design.
- Hands-on with AI developer tools, automation utilities, and emerging AI productivity platforms.
Preferred Skills
- Familiarity with LLM orchestration frameworks (LangChain, LlamaIndex, etc.) and advanced tool-calling workflows.
- Experience building async workflows, schedulers, background jobs, and offline processing systems.
- Exposure to modern frontend testing frameworks, QA automation, and performance testing.
Job description
Job Title: Python Trainer (Workshop Model Freelance / Part-time)
Location: Thrissur & Ernakulam
Program Duration: 30 or 60 Hours (Workshop Model)
Job Type: Freelance / Contract
About the Role:
We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model at our Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.
Key Responsibilities:
Conduct offline workshop-style Python training sessions (30 or 60 hours total).
Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.
Customize the curriculum based on learners' skill levels and project needs.
Guide students through mini-projects, assignments, and coding challenges.
Ensure effective knowledge transfer through practical, real-world examples.
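A workshop exercise of the kind described, combining OOP, file handling, and light data processing, might look like the sketch below. The `Student` class and the CSV layout are invented for illustration:

```python
import csv
import tempfile
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Student:
    name: str
    score: int

    def passed(self, cutoff: int = 40) -> bool:
        return self.score >= cutoff

def load_students(path):
    """Read name,score rows from a CSV file into Student objects."""
    with open(path, newline="") as f:
        return [Student(row["name"], int(row["score"])) for row in csv.DictReader(f)]

# Write a small sample file, then read it back.
sample = Path(tempfile.gettempdir()) / "students_demo.csv"
sample.write_text("name,score\nAsha,72\nRavi,35\n")
students = load_students(sample)
print([s.name for s in students if s.passed()])  # ['Asha']
```

A 30-hour track might stop at exercises like this; a 60-hour track could extend the same data into Pandas and basic visualization.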
Requirements:
Experience: 15 years of training or industry experience in Python programming.
Technical Skills: Strong knowledge of Python, including OOP concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.
Prior experience in academic or corporate training preferred.
Excellent communication and presentation skills.
Mode: Offline Workshop (Thrissur / Ernakulam)
Duration: Flexible – 30 Hours or 60 Hours Total
Organization: KGiSL Microcollege
Role: Other
Industry Type: Education / Training
Department: Other
Employment Type: Full Time, Permanent
Role Category: Other
Education
UG: Any Graduate
Key Skills
Data Science, Artificial Intelligence
We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
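As a minimal, dependency-free illustration of the train/evaluate loop that such an ML pipeline automates, here is a toy nearest-centroid classifier. The data, labels, and function names are invented; real work would use scikit-learn or a deep learning framework:

```python
from math import dist
from statistics import mean

def fit_centroids(X, y):
    """'Training' step: compute a per-class mean vector."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [mean(col) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: dist(centroids[label], x))

def accuracy(centroids, X, y):
    """Evaluation step: fraction of correct predictions."""
    return sum(predict(centroids, x) == lab for x, lab in zip(X, y)) / len(y)

X_train = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]
y_train = ["low", "low", "high", "high"]
model = fit_centroids(X_train, y_train)
print(predict(model, [4.8, 5.0]))         # 'high'
print(accuracy(model, X_train, y_train))  # 1.0
```

The same fit/predict/evaluate contract is what MLOps tooling (MLflow runs, Airflow DAG steps) wraps and versions at production scale.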
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
About the Company
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position Summary
We are seeking a highly experienced and visionary Senior Engineering Manager – Inference Services to lead and scale our team responsible for building high-performance inference systems that power cutting-edge AI/ML products. This role requires a blend of strong technical expertise, leadership skills, and product-oriented thinking to drive innovation, scalability, and reliability of our inference infrastructure.
Key Responsibilities
Leadership & Strategy
- Lead, mentor, and grow a team of engineers focused on inference platforms, services, and optimizations.
- Define the long-term vision and roadmap for inference services in alignment with product and business goals.
- Partner with cross-functional leaders in ML, Product, Data Science, and Infrastructure to deliver robust, low-latency, and scalable inference solutions.
Engineering Excellence
- Architect and oversee development of distributed, production-grade inference systems ensuring scalability, efficiency, and reliability.
- Drive adoption of best practices for model deployment, monitoring, and continuous improvement of inference pipelines.
- Ensure high availability, cost optimization, and performance tuning of inference workloads across cloud and on-prem environments.
Innovation & Delivery
- Evaluate emerging technologies, frameworks, and hardware accelerators (GPUs, TPUs, etc.) to continuously improve inference efficiency.
- Champion automation and standardization of model deployment and lifecycle management.
- Balance short-term delivery with long-term architectural evolution.
People & Culture
- Build a strong engineering culture focused on collaboration, innovation, and accountability.
- Provide coaching, feedback, and career development opportunities to team members.
- Foster a growth mindset and data-driven decision-making.
Basic Qualifications
Experience
- 12+ years of software engineering experience with at least 4–5 years in engineering leadership roles.
- Proven track record of managing high-performing teams delivering large-scale distributed systems or ML platforms.
- Experience in building and operating inference systems, ML serving platforms, or real-time data systems at scale.
Technical Expertise
- Strong understanding of machine learning model deployment, serving, and optimization (batch & real-time).
- Proficiency in cloud-native technologies (Kubernetes, Docker, microservices architecture).
- Hands-on knowledge of inference frameworks (TensorFlow Serving, Triton Inference Server, TorchServe, etc.) and hardware accelerators.
- Solid background in programming languages (Python, Java, C++ or Go) and performance optimization techniques.
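One serving optimization central to inference platforms like this is micro-batching: briefly coalescing concurrent requests so the model runs once on a batch instead of per request. A minimal sketch, with invented queue contents and timings (real servers such as Triton implement this natively):

```python
import time
from queue import Queue, Empty

def collect_batch(q, max_batch=8, max_wait_s=0.005):
    """Drain up to max_batch requests, waiting briefly so small bursts coalesce."""
    batch = []
    deadline = time.monotonic() + max_wait_s
    while len(batch) < max_batch:
        timeout = deadline - time.monotonic()
        if timeout <= 0:
            break
        try:
            batch.append(q.get(timeout=timeout))
        except Empty:
            break
    return batch

q = Queue()
for i in range(3):
    q.put({"id": i, "input": [float(i)]})
batch = collect_batch(q)
print(len(batch))  # 3
```

Tuning `max_batch` and `max_wait_s` is exactly the latency-vs-throughput trade-off this role would own.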
Preferred Qualifications
- Experience with MLOps platforms and end-to-end ML lifecycle management.
- Prior work in high-throughput, low-latency systems (ad-tech, search, recommendations, etc.).
- Knowledge of cost optimization strategies for large-scale inference workloads.
What We’re Looking For
- 3-5 years of Data Science & ML experience in consumer internet / B2C products.
- Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
- Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
- Statistical chops: finding meaningful insights in large data sets.
- Programming ninja: R, Python, SQL + hands-on with NumPy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
- Visualization skills: Redshift, Tableau, Looker, or similar.
- A strong problem-solver with curiosity hardwired into your DNA.
Brownie Points
- Experience with big data platforms: Hadoop, Spark, Hive, Pig.
- Extra love if you’ve played with BI tools like Tableau or Looker.
About Us: The Next Generation of WealthTech
We're Cambridge Wealth, an award-winning force in mutual fund distribution and Fintech. We're not just moving money; we're redefining wealth management for everyone from retail investors to ultra-HNIs (including the NRI segment). Our brand is synonymous with excellence, backed by accolades from the BSE and top Mutual Fund houses.
If you thrive on building high-performance, scalable systems that drive real-world financial impact, you'll feel right at home. Join us in Pune to build the future of finance.
[Learn more: www.cambridgewealth.in]
The Role: Engineering Meets Team Meets Customer
We're looking for an experienced, hands-on Tech Catalyst to accelerate our product innovation. This isn't just a coding job; it's a chance to blend deep backend expertise with product strategy. You will be the engine driving rapid, data-driven product experiments, leveraging AI and Machine Learning to create smart, personalized financial solutions. You'll lead by example, mentoring a small, dedicated team and ensuring technical excellence and rapid deployment in the high-stakes financial domain.
Key Impact Areas: Ship Fast, Break Ground
1. Backend & AI/ML Innovation
- Rapid Prototyping: Design and execute quick, iterative experiments to validate new features and market hypotheses, moving from concept to production in days, not months.
- AI-Powered Features: Build scalable Python-based backend services that integrate AI/ML models to enhance customer profiling, portfolio recommendation, and risk analysis.
- System Architecture: Own the performance, stability, and scalability of our core fintech platform, implementing best practices in modern backend development.
2. Product Leadership & Execution
- Agile Catalyst: Drive and optimize Agile sprints, ensuring clear technical milestones, efficient resource allocation, backlog grooming and maintaining a laser focus on preventing scope creep.
- Mentorship & Management: Provide technical guidance and mentorship to a team of developers, fostering a culture of high performance, code quality, and continuous learning.
- Domain Alignment: Translate complex financial requirements and market insights into precise, actionable technical specifications and seamless user stories.
- Problem Solver: Proactively identify and resolve technical and process bottlenecks, acting as the ultimate problem solver for the engineering and product teams.
3. Financial Domain Expertise
- High-Value Delivery: Apply deep knowledge of the mutual fund and broader fintech landscape to inform product decisions, ensuring our solutions are compliant, competitive, and truly valuable to our clients.
- Risk & Security: Proactively architect solutions with security and financial risk management baked in from the ground up, protecting client data and assets.
Your Tech Stack & Experience
The Must-Haves
- Mindset: A verifiable track record as a proactive First Principle Problem Solver with an intense Passion to Ship production-ready features frequently.
- Customer Empathy: Keeps the customer's experience in mind at all times.
- Team Leadership: Experience in leading, mentoring, or managing a small development team, driving technical excellence and project delivery.
- Systems Thinker: Diagnoses and solves problems by viewing the organization as an interconnected system to anticipate broad impacts and develop holistic, strategic solutions.
- Backend Powerhouse: 2+ years of professional experience with a strong focus on backend development.
- Python Guru: Expert proficiency in Python and related frameworks (e.g., Django, Flask) for building robust, scalable APIs and services.
- AI/ML Integration: Proven ability to leverage and integrate AI/ML models into production-level applications.
- Data Driven: Expert in SQL for complex data querying, analysis, and ETL processes.
- Financial Domain Acumen: Strong, demonstrable knowledge of financial products, especially mutual funds, wealth management, and key fintech metrics.
Nice-to-Haves
- Experience with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- Familiarity with Zoho Analytics, Zoho CRM and Zoho Deluge
- Familiarity with modern data analysis tools and visualization platforms (e.g., Mixpanel, Tableau, or custom dashboard tools).
- Understanding of Mutual Fund, AIF, PMS operations
Ready to Own the Backend and Shape Finance?
This is where your code meets the capital market. If you’re a Fintech-savvy Python expert ready to lead a team and build a scalable platform in Pune, we want to talk.
Apply now to join our award-winning, forward-thinking team.
Our High-Velocity Hiring Process:
- You Apply & Engage: Quick application and a few insightful questions. (5 min)
- Online Tech Challenge: Prove your tech mettle. (90 min)
- People Sync: A focused call to understand if there is cultural and value alignment. (30 min)
- Deep Dive Technical Interview: Discuss architecture and projects with our senior engineers. (1 hour)
- Founder's Vision Interview: Meet the leadership and discuss your impact. (1 hour)
- Offer & Onboarding: Reference and BGV check follow the successful offer.
What are you building right now that you're most proud of?
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments.
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
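The chunking step of such a document pipeline can be sketched as a sliding word window with overlap, so context spans chunk boundaries. Window sizes here are arbitrary; production IDP systems often chunk by tokens or document layout instead:

```python
def chunk_words(text, size=50, overlap=10):
    """Split text into word windows of `size`, each sharing `overlap` words with the previous."""
    words = text.split()
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

# Toy document of 120 placeholder words.
doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_words(doc, size=50, overlap=10)
print(len(chunks))  # 3
```

Each chunk would then be embedded and indexed for the RAG-based validation and question-answering tasks described above.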
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
IMP: Please read through before applying!
Nature of role: Full-time; On-site
Location: Thiruvanmiyur, Chennai
Responsibilities:
Build and manage automation workflows using n8n, Make (Integromat), Zapier, or custom APIs.
Integrate tools across JugaadX, WhatsApp, Shopify, Meta, Google Workspace, CRMs, and internal systems.
Develop and maintain scalable, modular automation systems with clear documentation.
Integrate and experiment with AI tools and APIs such as OpenAI, Gemini, Claude, HeyGen, Runway, etc.
Create intelligent workflows — from chatbots and lead scorers to content generators and auto-responders.
Manage cloud infrastructure (VPS, Docker, SSL, security) for automations and dashboards.
Identify repetitive tasks and convert them into reliable automated processes.
Build centralized dashboards and automated reports for teams and clients.
Stay up-to-date with the latest in AI, automation, and LLM technologies, and bring new ideas to life within Jugaad’s ecosystem.
Requirements:
Hands-on experience with n8n, Make, or Zapier (or similar tools).
Familiarity with OpenAI, Gemini, HuggingFace, ElevenLabs, HeyGen, and other AI platforms.
Working knowledge of JavaScript and basic Python for API scripting.
Strong understanding of REST APIs, webhooks, and authentication.
Experience with Docker, VPS (AWS/DigitalOcean), and server management.
Proficiency with Google Sheets, Airtable, JSON, and basic SQL.
Clear communication and documentation skills — able to explain technical systems simply.
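One recurring building block for the webhook and authentication work above is HMAC signature verification. A standard-library sketch, with an invented secret and payload (each real sender, e.g. Shopify or Meta, defines its own header name and signing scheme):

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    """Produce the hex signature a webhook sender would attach to the request."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    """Constant-time comparison avoids leaking the signature via timing."""
    return hmac.compare_digest(sign(secret, payload), signature)

secret = b"shared-secret"
body = b'{"event": "lead.created", "id": 42}'
sig = sign(secret, body)
print(verify(secret, body, sig))         # True
print(verify(secret, b"tampered", sig))  # False
```

The same pattern drops into an n8n or Make custom-code node to reject spoofed webhook calls before they trigger a workflow.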
Who You Are:
A self-starter who loves automation, optimization, and innovation.
Comfortable building end-to-end tech solutions independently.
Excited to collaborate across creative, marketing, and tech teams.
Always experimenting with new AI tools and smarter ways to work.
Obsessed with efficiency, scalability, and impact — you love saving time and getting more done with less.
What You Get:
A strategic and hands-on role at the intersection of AI, automation, and operations.
The chance to shape the tech backbone of Jugaad and influence how we work, scale, and innovate.
Freedom to experiment, build, and deploy your ideas fast.
A young, fast-moving team where your work directly drives impact and growth.
Role Overview
We are looking for a highly skilled and intellectually curious Senior Data Scientist with 7+ years of experience in applying advanced machine learning and AI techniques to solve complex business problems. The ideal candidate will have deep expertise in Classical Machine Learning, Deep Learning, Natural Language Processing (NLP), and Generative AI (GenAI), along with strong hands-on coding skills and a proven track record of delivering impactful data science solutions. This role requires a blend of technical excellence, business acumen, and collaborative mindset.
Key Responsibilities
- Design, develop, and deploy ML models using classical algorithms (e.g., regression, decision trees, ensemble methods) and deep learning architectures (CNNs, RNNs, Transformers).
- Build NLP solutions for tasks such as text classification, entity recognition, summarization, and conversational AI.
- Develop and fine-tune GenAI models for use cases like content generation, code synthesis, and personalization.
- Architect and implement Retrieval-Augmented Generation (RAG) systems for enhanced contextual AI applications.
- Collaborate with data engineers to build scalable data pipelines and feature stores.
- Perform advanced feature engineering and selection to improve model accuracy and robustness.
- Work with large-scale structured and unstructured datasets using distributed computing frameworks.
- Translate business problems into data science solutions and communicate findings to stakeholders.
- Present insights and recommendations through compelling storytelling and visualization.
- Mentor junior data scientists and contribute to internal knowledge sharing and innovation.
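Model evaluation of the kind referenced above ultimately rests on a few core metrics. A dependency-free sketch of precision, recall, and F1 for a binary task, with invented labels:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```

The same definitions extend to GenAI evaluation when generations are scored against labeled references.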
Required Qualifications
- 7+ years of experience in data science, machine learning, and AI.
- Strong academic background in Computer Science, Statistics, Mathematics, or related field (Master’s or PhD preferred).
- Proficiency in Python, SQL, and ML libraries (scikit-learn, TensorFlow, PyTorch, Hugging Face).
- Experience with NLP and GenAI tools (e.g., Azure AI Foundry, Azure AI Studio, GPT, LLaMA, LangChain).
- Hands-on experience with Retrieval-Augmented Generation (RAG) systems and vector databases.
- Familiarity with cloud platforms (Azure preferred, AWS/GCP acceptable) and MLOps tools (MLflow, Airflow, Kubeflow).
- Solid understanding of data structures, algorithms, and software engineering principles.
- Experience with Azure, Azure Copilot Studio, and Azure Cognitive Services.
- Experience with Azure AI Foundry would be a strong added advantage
Preferred Skills
- Exposure to LLM fine-tuning, prompt engineering, and GenAI safety frameworks.
- Experience in domains such as finance, healthcare, retail, or enterprise SaaS.
- Contributions to open-source projects, publications, or patents in AI/ML.
Soft Skills
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement abilities.
- Ability to work independently and collaboratively in cross-functional teams.
- Passion for continuous learning and innovation.
We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.
You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.
Responsibilities
• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)
• Develop backend services and APIs using Python (FastAPI/Flask)
• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma)
• Implement embeddings, prompt flows, and conversation logic
• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs
• Ensure system reliability, performance, and scalability
• Work directly with the founder in shaping the product and roadmap
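The conversation-logic piece can be sketched as a small class that keeps recent turns and folds retrieved context into the prompt sent to an LLM. Class and field names are invented; a real engine would also enforce token budgets and use the provider's structured message roles:

```python
from collections import deque

class Conversation:
    """Keep the last few turns and fold retrieved context into one prompt."""

    def __init__(self, system, max_turns=5):
        self.system = system
        self.turns = deque(maxlen=max_turns * 2)  # user + assistant messages

    def add(self, role, text):
        self.turns.append((role, text))

    def build_prompt(self, question, context_chunks):
        context = "\n".join(f"- {c}" for c in context_chunks)
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return (f"{self.system}\n\nContext:\n{context}\n\n"
                f"History:\n{history}\n\nuser: {question}")

conv = Conversation("You are a support assistant.")
conv.add("user", "Hi")
conv.add("assistant", "Hello! How can I help?")
prompt = conv.build_prompt("What is your refund policy?", ["Refunds within 30 days."])
print("Refunds within 30 days." in prompt)  # True
```

The `context_chunks` argument is where the RAG retrieval output plugs in.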
Requirements
• Strong experience with LLMs & Generative AI
• Excellent Python skills with FastAPI/Flask
• Hands-on experience with LangChain or RAG architectures
• Vector database experience (Pinecone/FAISS/Chroma)
• Strong understanding of REST APIs and backend development
• Ability to work independently, experiment fast, and deliver clean code
Nice to Have
• Experience with cloud (AWS/GCP)
• Node.js knowledge
• LangGraph, LlamaIndex
• MLOps or deployment experience
Full Stack Engineer
Position Description
Responsibilities
• Take design mockups provided by UX/UI designers and translate them into web pages or applications using HTML and CSS. Ensure that the design is faithfully replicated in the final product.
• Develop enabling frameworks and applications end-to-end, enhanced with data analytics and AI enablement.
• Ensure effective Design, Development, Validation and Support activities in line with the Customer needs, architectural requirements, and ABB Standards.
• Support ABB business units through consulting engagements.
• Develop and implement machine learning models to solve specific business problems, such as predictive analytics, classification, and recommendation systems
• Perform exploratory data analysis, clean and preprocess data, and identify trends and patterns.
• Evaluate the performance of machine learning models and fine-tune them for optimal results.
• Create informative and visually appealing data visualizations to communicate findings and insights to non-technical stakeholders.
• Conduct statistical analysis, hypothesis testing, and A/B testing to support decision-making processes.
• Define the solution and project plan, identify and allocate team members, and track the project; work with data engineers to integrate, transform, and store data from various sources.
• Collaborate with cross-functional teams, including business analysts, data engineers, and domain experts, to understand business objectives and develop data science solutions.
• Prepare clear and concise reports and documentation to communicate results and methodologies.
• Stay updated with the latest data science and machine learning trends and techniques.
• Familiarity with ML Model Deployment as REST APIs.
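The A/B testing mentioned above commonly reduces to a two-proportion z-test on conversion rates. A standard-library sketch, with invented conversion counts:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2))  # 2.97
```

Here p < 0.01, so variant B's lift would be judged significant at the usual 5% level.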
Background
• Engineering graduate / Master's degree from a reputed institution, with rich exposure to data science
• Create responsive web designs that adapt to different screen sizes and devices using media queries and responsive design techniques.
• Write and maintain JavaScript code to add interactivity and dynamic functionality to web pages. This may include user input handling, form validation, and basic animations.
• Familiarity with front-end JavaScript libraries and frameworks such as React, Angular, or Vue.js. Depending on the projects, you may be responsible for working within these frameworks
• At least 6 years of experience in AI/ML concepts and Python (preferred); knowledge of deep learning frameworks like PyTorch and TensorFlow preferred
• Domain knowledge of manufacturing / process industries, physics, and first-principles analysis
• Analytical thinking to translate data into meaningful insights that can be consumed by ML models for training and prediction
• Should be able to deploy Model using Cloud services like Azure Databricks or Azure ML Studio. Familiarity with technologies like Docker, Kubernetes and MLflow is good to have.
• Agile development of customer centric prototypes or ‘Proof of Concepts’ for focused digital solutions
• Good communication skills; must be able to discuss requirements effectively with client teams and with internal teams.
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
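Of the API concerns listed (caching, pagination, versioning), cursor-based pagination is a representative pattern. A minimal sketch using opaque base64 cursors; the field names are illustrative, and a real API would also sign or expire its cursors:

```python
import base64
import json

def encode_cursor(last_id: int) -> str:
    """Opaque cursor: clients can only continue from it, not fabricate offsets."""
    return base64.urlsafe_b64encode(json.dumps({"after": last_id}).encode()).decode()

def decode_cursor(cursor: str) -> int:
    return json.loads(base64.urlsafe_b64decode(cursor))["after"]

def page(items, cursor=None, limit=3):
    """items must be sorted by id; return one page plus the cursor for the next."""
    after = decode_cursor(cursor) if cursor else 0
    batch = [it for it in items if it["id"] > after][:limit]
    next_cursor = encode_cursor(batch[-1]["id"]) if len(batch) == limit else None
    return batch, next_cursor

items = [{"id": i} for i in range(1, 8)]
first, cur = page(items)
second, _ = page(items, cursor=cur)
print([it["id"] for it in first], [it["id"] for it in second])  # [1, 2, 3] [4, 5, 6]
```

Unlike offset pagination, cursors stay stable when rows are inserted between requests, which matters for a live dashboard.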
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
Review Criteria
- Strong DevOps / Cloud Engineer profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
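Monitoring and alerting scripts of the kind this role automates often start from something as small as an error-rate check over log lines. A toy sketch; the log format and alert threshold are invented (production would use Prometheus/Grafana rules instead):

```python
import re
from collections import Counter

# Matches lines like "2024-05-01 10:00:02 ERROR db timeout".
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) ")

def error_rate(lines):
    """Fraction of parseable log lines at ERROR level (a toy alerting signal)."""
    levels = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            levels[m.group("level")] += 1
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

logs = [
    "2024-05-01 10:00:01 INFO request ok",
    "2024-05-01 10:00:02 ERROR db timeout",
    "2024-05-01 10:00:03 INFO request ok",
    "2024-05-01 10:00:04 ERROR db timeout",
]
rate = error_rate(logs)
print(rate, "ALERT" if rate > 0.25 else "ok")  # 0.5 ALERT
```

The same logic, pointed at CloudWatch or Loki output, becomes a first-pass alert before a full observability stack is in place.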
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.