50+ Machine Learning (ML) Jobs in India
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Experience registering and deploying ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills – environment setup and dependency management, coding as per best practices, and knowledge of automatic code review tools like linting and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
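To make the model-monitoring requirement above concrete, here is a minimal, dependency-free sketch of one common drift-detection technique, the Population Stability Index (PSI). The bin count and the 0.1/0.25 thresholds are conventional heuristics, not a fixed standard:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Common heuristic: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(left <= x < right or (i == bins - 1 and x == hi) for x in sample)
        return max(n / len(sample), 1e-6)  # floor to avoid log(0)
    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1 * i for i in range(100)]        # training-time distribution
shifted = [0.1 * i + 3.0 for i in range(100)]   # drifted production data
assert psi(baseline, baseline) < 0.01
assert psi(baseline, shifted) > 0.25
```

In a real Azure ML setup this computation would run inside a scheduled monitoring pipeline comparing training data against recent scoring inputs.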
Responsibilities:
• End-to-end design, development, and deployment of enterprise-grade AI solutions leveraging Azure AI, Google Vertex AI, or comparable cloud platforms.
• Architect and implement advanced AI systems, including agentic workflows, LLM integrations, MCP-based solutions, RAG pipelines, and scalable microservices.
• Oversee the development of Python-based applications, RESTful APIs, data processing pipelines, and complex system integrations.
• Define and uphold engineering best practices, including CI/CD automation, testing frameworks, model evaluation procedures, observability, and operational monitoring.
• Partner closely with product owners and business stakeholders to translate requirements into actionable technical designs, delivery plans, and execution roadmaps.
• Provide hands-on technical leadership, conducting code reviews, offering architectural guidance, and ensuring adherence to security, governance, and compliance standards.
• Communicate technical decisions, delivery risks, and mitigation strategies effectively to senior leadership and cross-functional teams.
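As an illustration of the RAG pipelines named in the responsibilities above, here is a minimal, dependency-free sketch. The keyword-overlap retriever stands in for a real embedding/vector-database search, and the documents and prompt template are invented for the example:

```python
# Toy RAG pipeline: retrieve the most relevant documents, then assemble
# a grounded prompt for the LLM. Everything here is illustrative.
DOCS = {
    "refunds": "Refunds are processed within 5 business days of approval.",
    "shipping": "Standard shipping takes 3-7 business days.",
    "warranty": "All devices carry a 12-month limited warranty.",
}

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query (a stand-in for
    vector-similarity search against an embedding index)."""
    q = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(query, docs):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

prompt = build_prompt("How long do refunds take after approval?", DOCS)
assert "Refunds are processed" in prompt
```

A production version would swap the retriever for a vector store and send the assembled prompt to the hosted model endpoint.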

This role is for one of our reputed entertainment organisations.
Key Responsibilities
· Advanced ML & Deep Learning: Design, develop, and deploy end-to-end Machine Learning models for Content Recommendation Engines, Churn Prediction, and Customer Lifetime Value (CLV).
· Generative AI Implementation: Prototype and integrate GenAI solutions (using LLMs like Gemini/GPT) for automated Metadata Tagging, Script Summarization, or AI-driven Chatbots for viewer engagement.
· Develop and maintain high-scale video processing pipelines using Python, OpenCV, and FFmpeg to automate scene detection, ad-break identification, and visual feature extraction for content enrichment.
· Cloud Orchestration: Utilize GCP (Vertex AI, BigQuery, Dataflow) to build scalable data pipelines and manage the full ML lifecycle (MLOps).
· Business Intelligence & Storytelling: Create high-impact, automated dashboards to track KPIs for data-driven decision-making.
· Cross-functional Collaboration: Work closely with Product, Design, Engineering, Content, and Marketing teams to translate "viewership data" into "strategic growth."
Preferred Qualifications
· Experience in Media/OTT: Prior experience working with large scale data from broadcast channels, videos, streaming platforms or digital ad-tech.
· Education: Master’s/Bachelor’s degree in a quantitative field (Computer Science, Statistics, Mathematics, or Data Science).
· Product Mindset: Ability to not just build a model, but to understand the business implications of the solution.
· Communication: Exceptional ability to explain "Neural Network outputs" to a "Creative Content Producer" in simple terms.
About MyOperator
MyOperator is a Business AI Operator and category leader that unifies WhatsApp, Calls, and AI-powered chatbots & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement.
Role Summary
We’re hiring a Front Deployed Engineer (FDE)—a customer-facing, field-deployed engineer who owns the end-to-end delivery of AI bots/agents.
This role is “frontline”: you’ll work directly with customers (often onsite), translate business reality into bot workflows, do prompt engineering + knowledge grounding, ship deployments, and iterate until it works reliably in production.
Think: solutions engineer + implementation engineer + prompt engineer, with a strong bias for execution.
Responsibilities
Requirement Discovery & Stakeholder Interaction
- Join customer calls alongside Sales and Revenue teams.
- Ask targeted questions to understand business objectives, user journeys, automation expectations, and edge cases.
- Identify data sources (CRM, APIs, Excel, SharePoint, etc.) required for the solution.
- Act as the AI subject-matter expert during client discussions.
Use Case & Solution Documentation
- Convert discussions into clear, structured use case documents, including:
  - Problem statement & goals.
  - Current vs. proposed conversational flows.
  - Chatbot conversation logic, integrations, and dependencies.
  - Assumptions, limitations, and success criteria.
Customer Delivery Ownership
Own deployment of AI bots for customer use-cases (lead qualification, support, booking, etc.). Run workshops to capture processes, FAQs, edge cases, and success metrics. Drive the go-live process, from requirements through monitoring and improvement.
Prompt Engineering & Conversation Design
Craft prompts, tool instructions, guardrails, fallbacks, and escalation policies for stable behavior. Build structured conversational flows: intents, entities, routing, handoff, and compliant responses. Create reusable prompt patterns and "prompt packs."
Testing, Debugging & Iteration
Analyze logs to find failure modes (misclassification, hallucination, poor handling). Create test sets ("golden conversations"), run regressions, and measure improvements. Coordinate with Product/Engineering for platform needs.
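The "golden conversations" regression idea above can be sketched in a few lines. The utterances, intent labels, and the rule-based `classify()` stub are all invented placeholders for the deployed bot:

```python
# A tiny regression set: (utterance, expected intent) pairs scored
# against the bot's classifier. classify() is a rule-based stand-in.
GOLDEN = [
    ("I want to cancel my order", "cancellation"),
    ("Where is my package?", "order_status"),
    ("Can I talk to a human?", "escalation"),
    ("Where's my refund?", "refund"),
]

def classify(utterance):
    text = utterance.lower()
    if "cancel" in text:
        return "cancellation"
    if "refund" in text:
        return "refund"
    if "human" in text or "agent" in text:
        return "escalation"
    if "package" in text:
        return "order_status"
    return "fallback"

def run_regression(golden):
    failures = [(u, e, classify(u)) for u, e in golden if classify(u) != e]
    accuracy = 1 - len(failures) / len(golden)
    return accuracy, failures

accuracy, failures = run_regression(GOLDEN)
assert accuracy == 1.0 and failures == []
```

Re-running this set after every prompt or workflow change catches regressions before customers do.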
Integrations & Technical Coordination
Integrate bots with APIs/webhooks (CRM, ticketing, internal tools) to complete workflows. Troubleshoot production issues and coordinate fixes/root-cause analysis.
What Success Looks Like
- Customer bots go live quickly and show high containment + high task completion with low escalation.
- You can diagnose failures from transcripts/logs and fix them with prompt/workflow/knowledge changes.
- Customers trust you as the “AI delivery owner”—clear communication, realistic timelines, crisp execution.
Requirements (Must Have)
- 2–5 years in customer-facing delivery roles: implementation, solutions engineering, customer success engineering, or similar.
- Hands-on comfort with LLMs and prompt engineering (structured outputs, guardrails, tool use, iteration).
- Strong communication: workshops, requirement capture, crisp documentation, stakeholder management.
- Technical fluency: APIs/webhooks concepts, JSON, debugging logs, basic integration troubleshooting.
- Willingness to be front deployed (customer calls/visits as needed).
Good to Have (Nice to Have)
- Experience with chatbots/voicebots, IVR, WhatsApp automation, or conversational AI platforms, with at least a couple of delivered projects.
- Understanding of metrics like containment, resolution rate, response latency, CSAT drivers.
- Prior SaaS onboarding/delivery experience in mid-market or enterprises.
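The metrics listed above are straightforward to compute from conversation logs. A minimal sketch, assuming a hypothetical log schema with per-conversation flags:

```python
# Two bot-quality metrics from conversation logs: containment
# (conversations fully handled by the bot, no human handoff) and
# resolution rate (conversations ending in a resolved state).
logs = [
    {"escalated": False, "resolved": True},
    {"escalated": False, "resolved": True},
    {"escalated": True,  "resolved": True},   # human handoff, then resolved
    {"escalated": False, "resolved": False},  # bot kept it but failed
]

containment = sum(not c["escalated"] for c in logs) / len(logs)
resolution_rate = sum(c["resolved"] for c in logs) / len(logs)

assert containment == 0.75
assert resolution_rate == 0.75
```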
Working Style & Traits We Value
- High agency: you don’t wait for perfect specs—you create clarity and ship.
- Customer empathy + engineering discipline.
- Strong bias for iteration: deploy → learn → improve.
- Calm under ambiguity (real customer environments are chaotic by default).
We are looking for a Machine Learning Engineer to design, build, and operate production-grade ML systems powering scalable, data-driven applications.
You will be responsible for developing end-to-end machine learning pipelines, ensuring seamless consistency between development and production environments while building reliable and scalable ML infrastructure.
This role focuses on production ML engineering, not experimentation-only data science. You will work closely with backend, data, and product teams to deploy and operate predictive systems at scale.
Requirements
- Strong coding skills in Python, with the ability to build reliable, production-quality systems.
- Experience developing end-to-end machine learning pipelines, ensuring consistency between development, training, and production environments.
- Ability to design and implement scalable ML architectures tailored to site traffic, system scale, and predictive feature complexity.
- Familiarity with model and data versioning, resource allocation, system scaling, and structured logging practices.
- Experience building systems that monitor, detect, and respond to failures across infrastructure resources, data pipelines, and model predictions.
- Hands-on expertise with MLOps tools and workflows for scalable, production-level model deployment and lifecycle management.
- Strong problem-solving abilities and comfort working in a fast-paced, high-ownership environment.
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
We are looking for a passionate AI/ML Intern with hands-on exposure to Large Language Models (LLMs), fine-tuning techniques like LoRA, and strong fundamentals in Data Structures & Algorithms (DSA). This role is ideal for someone eager to work on real-world AI applications, experiment with open-source models, and contribute to production-ready AI systems.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completion of the internship period, there is a chance of a full-time role as an AI/ML Engineer (6-8 LPA).
Compensation:
- Stipend: base of INR 8,000/-, which can increase up to INR 20,000/- depending on performance.
Key Responsibilities
- Work on Large Language Models (LLMs) for real-world AI applications.
- Implement and experiment with LoRA (Low-Rank Adaptation) and other parameter-efficient fine-tuning techniques.
- Perform model fine-tuning, evaluation, and optimization.
- Engage in prompt engineering to improve model outputs and performance.
- Develop backend services using Python for AI-powered applications.
- Utilize GitHub for version control, including managing branches, pull requests, and code reviews.
- Work with AI platforms such as Hugging Face and OpenAI to deploy and test models.
- Collaborate with the team to build scalable and efficient AI solutions.
Must-Have Skills
- Strong proficiency in Python.
- Hands-on experience with LLMs (open-source or API-based).
- Practical knowledge of LoRA or other parameter-efficient fine-tuning techniques.
- Solid understanding of Data Structures & Algorithms (DSA).
- Experience with GitHub and version control workflows.
- Familiarity with Hugging Face Transformers and/or OpenAI APIs.
- Basic understanding of Deep Learning and NLP concepts.
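For candidates new to LoRA, the core idea fits in a few lines: instead of updating a frozen weight matrix W (d_out × d_in), train two small matrices B (d_out × r) and A (r × d_in) with r much smaller than the full dimensions; the effective weight is W + (alpha / r) · B·A. The tiny matrices below are invented to keep the sketch dependency-free:

```python
# LoRA in miniature with pure-Python matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d_out, d_in, r, alpha = 4, 4, 1, 2.0
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]  # frozen
B = [[1.0], [0.0], [0.0], [0.0]]   # d_out x r, trainable
A = [[0.0, 0.5, 0.0, 0.0]]         # r x d_in, trainable

delta = matmul(B, A)               # rank-1 update
scale = alpha / r
W_eff = [[W[i][j] + scale * delta[i][j] for j in range(d_in)] for i in range(d_out)]

# Only d_out*r + r*d_in = 8 trainable values vs 16 in W itself.
assert W_eff[0][1] == 1.0   # 0 + 2.0 * (1.0 * 0.5)
assert W_eff[2][2] == 1.0   # diagonal entry untouched by the update
```

In practice this is what libraries like Hugging Face PEFT manage for you, with `r` and `alpha` as the key configuration knobs.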
🤖 Robotics Engineer
Company: Pentabay Softwares
Location: Anna Salai (Mount Road), Chennai
Employment Type: Full-Time
🔹 Job Summary
Pentabay Softwares is seeking a highly skilled and innovative Robotics Engineer to design, develop, test, and implement robotic systems and automation solutions. The ideal candidate will have strong technical expertise in robotics programming, control systems, and hardware integration, along with a passion for building intelligent and efficient systems.
🔹 Key Responsibilities
Design, develop, and test robotic systems and automation solutions
Develop and implement control algorithms and motion planning systems
Integrate sensors, actuators, and embedded systems
Program robots using languages such as Python, C++, or ROS
Troubleshoot, debug, and optimize robotic applications
Collaborate with cross-functional teams including software, hardware, and AI engineers
Ensure compliance with safety and quality standards
Document system architecture, processes, and technical specifications
🔹 Required Qualifications
Bachelor’s or Master’s degree in Robotics, Mechatronics, Mechanical, Electronics, or related field
2+ years of experience in robotics development (preferred)
Strong knowledge of robotics frameworks (e.g., ROS)
Experience with microcontrollers, embedded systems, and sensor integration
Familiarity with AI/ML concepts is a plus
Strong analytical and problem-solving skills
🔹 Preferred Skills
Experience with computer vision systems
Knowledge of SLAM, kinematics, and motion planning
Experience with industrial automation or autonomous systems
Strong teamwork and communication skills
🌟 Why Join Pentabay Softwares?
Work on innovative and future-focused technologies
Collaborative and growth-oriented work culture
Opportunities for skill development and career advancement
Exposure to cutting-edge automation and AI-driven projects
In this role, you'll be responsible for building machine-learning-based systems and conducting data analysis that improve the quality of our large geospatial data. You'll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of these models at scale, which requires a good combination of data science and software development.
Responsibilities
- Development of machine learning models
- Building and maintaining software development solutions
- Provide insights by applying data science methods
- Take ownership of delivering features and improvements on time
Must-have Qualifications
- 4+ years' experience
- Senior data scientist, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers and open-source models from HuggingFace
- Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
- Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills
Extra Merit Qualifications
- Knowledge in at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and building predictive models
- Programming skills and experience with Scala and/or Java
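A minimal sketch of the anomaly-detection work described above: flag values more than k standard deviations from the mean. Real pipelines would use robust or model-based detectors (IsolationForest and the like); the readings below are invented, and the threshold k=2 is deliberately modest for such a small sample:

```python
import statistics

def zscore_outliers(values, k=2.0):
    """Return values more than k population standard deviations from the mean."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [v for v in values if sigma and abs(v - mu) / sigma > k]

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 55.0]  # one corrupted record
assert zscore_outliers(readings, k=2.0) == [55.0]
```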
Job Description -
Profile: AI/ML
Experience: 4-8 Years
Mode: Remote
Mandatory Skills - AI/ML, LLM, RAG, Agentic AI, Traditional ML, GCP
Must-Have:
● Proven experience as an AI/ML engineer, with a focus on Generative AI and Large Language Models (LLMs) in production.
● Deep expertise in building Agentic Workflows using frameworks like LangChain, LangGraph, or AutoGen.
● Strong proficiency in designing RAG (Retrieval-Augmented Generation) pipelines.
● Experience with Function Calling/Tool Use in LLMs to connect AI models with external APIs (REST/gRPC) for transactional tasks.
● Hands-on experience with Google Cloud Platform (GCP), specifically Vertex AI, Model Garden, and deploying models on GPUs.
● Proficiency in Python and deep learning frameworks (PyTorch or TensorFlow).
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
Position: NDT Applications Engineer – PAUT (Corrosion Mapping / Weld Inspection / PWI / Advanced FMC-TFM)
Location: Noida
Job Type: Full-time
Experience Level: Mid-Level / Senior-Level
Industry: Non-Destructive Testing (NDT), PAUT.
We are specifically looking for candidates with hands-on experience as an NDT Senior Engineer; only those with relevant NDT expertise should apply.
Job Summary
We are seeking a highly skilled NDT-Engineer with expertise in Ultrasonic Testing (UT) and Phased Array Ultrasonic Testing (PAUT) for robotic integration. This role involves coordinating with software and control system teams to integrate UT & PAUT (Corrosion Mapping/Weld Inspection) into robotic NDT systems, ensuring optimal inspection performance.
The engineer will focus on sensor selection, ultrasonic parameter optimization, calibration, and data interpretation, while the software team handles control algorithms and motion planning. The ideal candidate should have strong experience in NDT automation, probe and frequency selection, phased array data acquisition, and defect characterization.
Key Responsibilities
1. NDT Inspection & Signal Optimization
• Optimize probe selection, wedge design, and beam focusing to achieve high-resolution imaging.
• Define scanning techniques (sectorial, linear, and compound scans) to detect various defect types.
• Analyse UT & PAUT signals, ensuring accurate defect detection, sizing, and characterization.
• Implement Time-of-Flight Diffraction (TOFD) and Full Matrix Capture (FMC) techniques to enhance detection capabilities.
• Address electromagnetic interference (EMI) and signal noise issues affecting robotic UT/PAUT.
• Develop procedures for coupling enhancement, including the use of water column, dry coupling, and adaptive surface-following mechanisms for robotic probes.
• Evaluate attenuation, beam divergence, and wave mode conversion for different material types.
• Work with AI-based defect recognition systems to automate data processing and anomaly detection.
• Test different scanning configurations for challenging surfaces, curved geometries, and weld seams.
• Optimize gain, pulse repetition frequency (PRF), and filtering settings to ensure the highest signal clarity.
• Implement phased array data interpretation techniques to differentiate between false indications and real defects.
• Develop and refine automated thickness gauging algorithms for robotic NDT systems.
• Ensure the compatibility of PAUT imaging with robotic motion constraints to avoid signal distortion.
2. NDT-Integration for Robotics (UT & PAUT)
• Select, integrate, and optimize ultrasonic transducers and phased array probes for robotic inspection systems.
• Define NDT scanning parameters (frequency, angle, probe type, and scanning speed) for robotic UT/PAUT applications.
• Ensure seamless coordination with control system and software teams for planning and automation.
• Work with robotic hardware teams to mount, position, and align UT/PAUT probes accurately.
• Conduct system calibration and validate UT/PAUT performance on robotic platforms.
3. Data Analysis & Reporting
• Interpret PAUT sectorial scans, full matrix capture (FMC), and total focusing method (TFM) data.
• Assist the software team in processing PAUT data for defect characterization and AI-based analysis.
• Validate robotic UT/PAUT inspection results and generate detailed technical reports.
• Ensure compliance with NDT standards (ASME, ISO 9712, ASTM, API 510/570) for ultrasonic inspections.
4. Coordination with Software & Control System Teams
• Work closely with the software team to define scan path strategies and automation logic.
• Collaborate with control engineers to ensure precise probe movement and stability.
• Provide technical input on robotic payload capacity, motion constraints, and scanning efficiency.
• Assist in integration of AI-driven defect recognition for automated data interpretation.
5. Field Deployment & Validation
• Supervise robotic UT/PAUT system trials in real-world inspection environments.
• Ensure compliance with safety regulations and industry best practices.
• Support on-site troubleshooting and optimization of robotic NDT performance.
• Train operators on robot-assisted ultrasonic testing procedures.
Required Qualifications & Skills
1. Educational Background
• Master’s Degree in Metallurgy/NDT/Mechanical.
• ASNT-Level II/III, ISO 9712, PCN, AWS CWI, or API 510/570 certifications in UT & PAUT preferred.
2. Technical Skills & Experience
• 3-10 years of experience in Ultrasonic Testing (UT) and Phased Array Ultrasonic Testing (PAUT).
• Strong understanding of probe selection, frequency tuning, and phased array beamforming.
• Experience with NDT software.
• Knowledge of electromagnetic shielding, signal integrity, and noise reduction techniques in ultrasonic systems.
• Ability to collaborate with software and control teams for robotic NDT development.
3. Soft Skills
• Strong problem-solving and analytical abilities.
• Excellent technical communication and coordination skills.
• Ability to work in cross-functional teams with robotics, software, and NDT specialists.
• Willingness to travel for on-site robotic NDT deployments.
Work Conditions
• Lab – Hands-on testing and robotic system deployment.
• Flexible Work Hours – Based on project requirements.
Benefits & Perks
• Competitive salary & performance incentives.
• Exposure to cutting-edge robotic and AI-driven NDT innovations.
• Training & certification support for career growth.
• Opportunities to work on pioneering robotic NDT projects.
Hi,
Greetings from Ampera!
We are looking for a Data Scientist with strong Python and forecasting experience.
Title: Data Scientist – Python & Forecasting
Experience: 4 to 7 Yrs
Location: Chennai/Bengaluru
Type of hire: PWD and Non-PWD
Employment Type: Full Time
Notice Period: Immediate Joiner
Working hours: 09:00 a.m. to 06:00 p.m.
Workdays: Mon - Fri
Job Description:
We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.
Key Responsibilities
- Develop and implement forecasting models (time-series and machine learning based).
- Perform exploratory data analysis (EDA), feature engineering, and model validation.
- Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
- Design, train, validate, and optimize machine learning models for real-world business use cases.
- Apply appropriate ML algorithms based on business problems and data characteristics.
- Write clean, modular, and production-ready Python code.
- Work extensively with Python Packages & libraries for data processing and modelling.
- Collaborate with Data Engineers and stakeholders to deploy models into production.
- Monitor model performance and improve accuracy through continuous tuning.
- Document methodologies, assumptions, and results clearly for business teams.
Technical Skills Required:
Programming
- Strong proficiency in Python
- Experience with Pandas, NumPy, Scikit-learn
Forecasting & Modelling
- Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
- Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
- Understanding of seasonality, trend decomposition, and statistical modeling
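As a baseline for the forecasting families listed above, single exponential smoothing is the simplest relative of ARIMA-style models: the smoothed level follows level_t = α·y_t + (1−α)·level_{t−1}, and the one-step forecast is the last level. The demand series here is invented; real work would reach for statsmodels or Prophet:

```python
# Single exponential smoothing, stdlib only.
def ses_forecast(series, alpha=0.5):
    """One-step-ahead forecast via single exponential smoothing."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [100, 102, 98, 101, 99, 100]
forecast = ses_forecast(demand, alpha=0.5)
assert 95 < forecast < 105   # smoothed level stays near the series mean
```

The smoothing factor α trades responsiveness to recent observations against noise suppression, the same trade-off tuned (in richer form) in ARIMA and Prophet.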
Data & Deployment
- Experience handling structured and large datasets
- SQL proficiency
- Exposure to model deployment (API-based deployment preferred)
- Knowledge of MLOps concepts is an added advantage
Tools (Preferred)
- TensorFlow / PyTorch (optional)
- Airflow / MLflow
- Cloud platforms (AWS / Azure / GCP)
Educational Qualification
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.
Key Competencies
- Strong analytical and problem-solving skills
- Ability to communicate insights to technical and non-technical stakeholders
- Experience working in agile or fast-paced environments
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Position: Assistant Professor
Department: CSE / IT
Experience: 0 – 15 Years
Joining: Immediate / Within 1 Month
Salary: As per norms and experience
🎓 Qualification:
ME / M.Tech in Computer Science Engineering / Information Technology
Ph.D. (Preferred but not mandatory)
First Class in UG & PG as per AICTE norms
🔍 Roles & Responsibilities:
Deliver high-quality lectures for UG / PG programs
Prepare lesson plans, course materials, and academic content
Guide student projects and internships
Participate in curriculum development and academic planning
Conduct internal assessments, evaluations, and result analysis
Mentor students for academic and career growth
Participate in departmental research activities
Publish research papers in reputed journals (Scopus/SCI preferred)
Attend Faculty Development Programs (FDPs), workshops, and conferences
Contribute to NAAC / NBA accreditation processes
Support institutional administrative responsibilities
💡 Required Skills:
Strong subject knowledge in CSE / IT domains
Programming proficiency (Python, Java, C++, Data Structures, AI/ML, Cloud, etc.)
Excellent communication and presentation skills
Research orientation and academic enthusiasm
Team collaboration and mentoring ability
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable).
- Are you open to a face-to-face (F2F) interview round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
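The drift-detection responsibility above is often implemented with the Population Stability Index (PSI); here is a minimal numpy sketch on synthetic feature samples (the 0.1/0.2 thresholds are common rules of thumb, not universal standards):

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so out-of-range live values still land in a bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) in empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # training-time feature distribution
stable = rng.normal(0.0, 1.0, 10_000)     # live traffic, no drift
shifted = rng.normal(0.5, 1.0, 10_000)    # live traffic, mean shift
```

In practice a check like this would run per feature on a schedule (e.g., an Airflow task) and push the score to CloudWatch or Prometheus for alerting.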
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or related field.
ROLE - TECH LEAD/ARCHITECT with AI Expertise
Experience: 10–15 Years
Location: Bangalore (Onsite)
Company Type: Product-based | AI B2B SaaS
About ProductNova
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.
Key Responsibilities
• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC2, GDPR, ISO, etc.) across the product
• Take ownership of application security, access controls, and compliance requirements
• Actively contribute hands-on through coding, code reviews, complex feature development, and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and system design
• Work closely with Product Management and Leadership to translate business requirements into technical solutions
Qualifications:
• 10–15 years of overall experience in software engineering and product development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and modern frontend frameworks (React, etc.)
• Experience working with AI/ML teams to deploy and tune ML models in production environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
Job Title : QA Lead (AI/ML Products)
Employment Type : Full Time
Experience : 4 to 8 Years
Location : On-site
Mandatory Skills : Strong hands-on experience in testing AI/ML (LLM, RAG) applications with deep expertise in API testing, SQL/NoSQL database validation, and advanced backend functional testing.
Role Overview :
We are looking for an experienced QA Lead who can own end-to-end quality for AI-influenced products and backend-heavy systems. This role requires strong expertise in advanced functional testing, API validation, database verification, and AI model behavior testing in non-deterministic environments.
Key Responsibilities :
- Define and implement comprehensive test strategies aligned with business and regulatory goals.
- Validate AI/ML and LLM-driven applications, including RAG pipelines, hallucination checks, prompt injection scenarios, and model response validation.
- Perform deep API testing using Postman/cURL and validate JSON/XML payloads.
- Execute complex SQL queries (MySQL/PostgreSQL) and work with MongoDB for backend and data integrity validation.
- Analyze server logs and transactional flows to debug issues and ensure system reliability.
- Conduct risk analysis and report key QA metrics such as defect leakage and release readiness.
- Establish and refine QA processes, templates, standards, and agile testing practices.
- Identify performance bottlenecks and basic security vulnerabilities (e.g., IDOR, data exposure).
- Collaborate closely with developers, product managers, and domain experts to translate business requirements into testable scenarios.
- Own feature quality independently from conception to release.
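Backend payload validation of the kind described above can also be scripted beyond Postman; here is a minimal stdlib sketch checking a hypothetical JSON response (the field names are illustrative, not from a real API):

```python
import json

# Hypothetical API response body (field names and values are made up).
raw = '{"txn_id": "T1001", "amount": 249.99, "status": "SETTLED"}'

def validate_payload(body, required):
    """Parse a JSON body and report required fields that are missing or mistyped."""
    data = json.loads(body)
    errors = [f"missing or wrong type: {k}"
              for k, typ in required.items()
              if not isinstance(data.get(k), typ)]
    return data, errors

data, errors = validate_payload(raw, {"txn_id": str, "amount": float, "status": str})
```

Checks like this slot naturally into automated API regression suites alongside status-code and server-log assertions.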
Required Skills & Experience :
- 4+ years of hands-on experience in software testing and QA.
- Strong understanding of testing AI/ML products, LLM validation, and non-deterministic behavior testing.
- Expertise in API Testing, server log analysis, and backend validation.
- Proficiency in SQL (MySQL/PostgreSQL) and MongoDB.
- Deep knowledge of SDLC and Bug Life Cycle.
- Strong problem-solving ability and structured approach to ambiguous scenarios.
- Awareness of performance testing and basic security testing practices.
- Excellent communication skills to articulate defects and QA strategies.
What We’re Looking For :
A proactive QA professional who can go beyond UI testing, understands backend systems deeply, and can confidently test modern AI-driven applications while driving quality standards across the team.

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
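As a sketch of the anomaly-detection idea above: a robust z-score flagger over hypothetical transaction amounts (real fraud systems use far richer features and models such as Isolation Forests; this only illustrates the statistical core):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical transaction amounts: mostly routine, plus two extreme outliers.
amounts = np.concatenate([rng.normal(500, 100, 1000), [5000.0, 7500.0]])

def flag_anomalies(x, threshold=4.0):
    """Flag points whose robust z-score (median/MAD based) exceeds threshold."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826  # scale MAD to std for normal data
    return np.abs(x - med) / mad > threshold

flags = flag_anomalies(amounts)
```

Median and MAD are used instead of mean and standard deviation so that the outliers themselves do not inflate the baseline they are judged against.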
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
Hi,
PFB the Job Description for Data Science with ML
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Work Days : Mon - Fri
About Ampera:
Ampera Technologies is a purpose-driven Digital IT Services company with a primary focus on supporting our clients with their Data, AI/ML, Accessibility, and other Digital IT needs. We also ensure that equal opportunities are provided to Persons with Disabilities talent. Ampera Technologies has its Global Headquarters in Chicago, USA, and its Global Delivery Center is based out of Chennai, India. We are actively expanding our Tech Delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) Hybrid and remote work options, 2) Opportunity to work directly with our Global Enterprise Clients, 3) Opportunity to learn and implement evolving technologies, 4) Comprehensive healthcare, and 5) A conducive environment for Persons with Disability talent, meeting physical and digital accessibility standards.
About the Role
We are looking for a skilled Data Scientist with strong Machine Learning experience to design, develop, and deploy data-driven solutions. The role involves working with large datasets, building predictive and ML models, and collaborating with cross-functional teams to translate business problems into analytical solutions.
Key Responsibilities
- Analyze large, structured and unstructured datasets to derive actionable insights.
- Design, build, validate, and deploy Machine Learning models for prediction, classification, recommendation, and optimization.
- Apply statistical analysis, feature engineering, and model evaluation techniques.
- Work closely with business stakeholders to understand requirements and convert them into data science solutions.
- Develop end-to-end ML pipelines including data preprocessing, model training, testing, and deployment.
- Monitor model performance and retrain models as required.
- Document assumptions, methodologies, and results clearly.
- Collaborate with data engineers and software teams to integrate models into production systems.
- Stay updated with the latest advancements in data science and machine learning.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
- 5+ years of hands-on experience in Data Science and Machine Learning.
- Strong proficiency in Python (NumPy, Pandas, Scikit-learn).
- Experience with ML algorithms:
- Regression, Classification, Clustering
- Decision Trees, Random Forest, Gradient Boosting
- SVM, KNN, Naïve Bayes
- Solid understanding of statistics, probability, and linear algebra.
- Experience with data visualization tools (Matplotlib, Seaborn, Power BI, Tableau – preferred).
- Experience working with SQL and relational databases.
- Knowledge of model evaluation metrics and optimization techniques.
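One of the listed algorithms, KNN, is compact enough to sketch from scratch; here is a minimal numpy implementation on toy data (scikit-learn's `KNeighborsClassifier` is the production route):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query point by majority vote of its k nearest neighbours."""
    preds = []
    for q in np.atleast_2d(X_query):
        dists = np.linalg.norm(X_train - q, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(dists)[:k]]      # labels of k closest points
        values, counts = np.unique(nearest, return_counts=True)
        preds.append(values[np.argmax(counts)])       # majority label
    return np.array(preds)

# Two well-separated toy clusters, labelled 0 and 1.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, [[0.5, 0.5], [5.5, 5.5]])
```

The brute-force distance scan is O(n) per query; at scale one would switch to KD-trees or approximate nearest-neighbour indexes.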
Preferred / Good to Have
- Experience with Deep Learning frameworks (TensorFlow, PyTorch, Keras).
- Exposure to NLP, Computer Vision, or Time Series forecasting.
- Experience with big data technologies (Spark, Hadoop).
- Familiarity with cloud platforms (AWS, Azure, GCP).
- Experience with MLOps, CI/CD pipelines, and model deployment.
Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder interaction skills.
- Ability to work independently and in cross-functional teams.
- Curiosity and willingness to learn new tools and techniques.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
We are looking for an AI/ML Engineer to build AI-powered applications for prediction, analytics, and intelligent reporting.
You will work with structured databases and unstructured data (PDFs, documents, logs).
Design and implement data ingestion, preprocessing, and feature pipelines.
Build ML models for prediction, trend analysis, and pattern detection.
Enable chat-based insights using LLMs for querying data and generating reports.
Implement role-based access control (RBAC) and secure AI workflows.
Integrate AI models into web/mobile applications via APIs.
Optimize model performance, accuracy, and scalability.
Work with vector databases, embeddings, and semantic search.
Collaborate with product and engineering teams on AI architecture.
Ensure data security, privacy, and compliance best practices.
Stay updated with latest AI/ML tools and frameworks
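The embeddings-and-semantic-search item above can be illustrated with plain cosine similarity; this toy numpy sketch uses made-up 3-dimensional "embeddings" (real systems use model-generated vectors and a vector database):

```python
import numpy as np

# Toy corpus with hand-written embedding vectors (illustrative only;
# in practice these come from an embedding model).
docs = ["invoice overdue", "server outage report", "payment received"]
emb = np.array([[0.9, 0.1, 0.2],
                [0.1, 0.95, 0.1],
                [0.8, 0.05, 0.4]])

def top_k(query_vec, matrix, k=2):
    """Rank rows of `matrix` by cosine similarity to `query_vec`."""
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    scores = m @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

# A query vector pointing along the "billing" direction of the toy space.
idx, scores = top_k(np.array([1.0, 0.0, 0.3]), emb)
best_docs = [docs[i] for i in idx]
```

Vector databases perform exactly this ranking, but over millions of vectors with approximate-nearest-neighbour indexes instead of a full matrix product.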
JOB DETAILS:
- Job Title: Lead II - Software Engineering- React Native - React Native, Mobile App Architecture, Performance Optimization & Scalability
- Industry: Global digital transformation solutions provider
- Experience: 7-9 years
- Working Days: 5 days/week
- Job Location: Mumbai
- CTC Range: Best in Industry
Job Description
Job Title
Lead React Native Developer (6–8 Years Experience)
Position Overview
We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.
Must-Have Skills
- 6–8 years of experience in mobile application development
- Extensive hands-on experience leading React Native projects
- Expert-level understanding of React Native architecture and internals
- Strong knowledge of mobile app architecture patterns
- Proven experience with performance optimization and scalability
- Experience in technical leadership, team management, and mentorship
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Proficiency in modern React Native development practices
- Experience with Expo toolkit and libraries
- Strong understanding of custom hooks development
- Focus on writing clean, maintainable, and scalable code
- Understanding of mobile app lifecycle
- Knowledge of cross-platform design consistency
Good-to-Have Skills
- Experience with microservices architecture
- Knowledge of cloud platforms such as AWS, Firebase, etc.
- Understanding of DevOps practices and CI/CD pipelines
- Experience with A/B testing and feature flag implementation
- Familiarity with machine learning integration in mobile applications
- Exposure to innovation-driven technical decision-making
Skills: React Native, mobile app development, DevOps, machine learning
******
Notice period - 0 to 15 days only (Need Feb Joiners)
Location: Navi Mumbai, Belapur
About PGAGI
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
We are excited to announce openings for AI/ML Interns who are enthusiastic about artificial intelligence, machine learning, and data-driven technologies. This internship is designed for individuals who want to apply their knowledge of algorithms, data analysis, and model development to solve real-world problems. Interns will work closely with our AI engineering teams to develop, train, and deploy machine learning models, contributing to innovative solutions across various domains. This is an excellent opportunity to gain hands-on experience with cutting-edge tools and frameworks in a collaborative, research-oriented environment.
Key Responsibilities:
- Experience working with Python, LLMs, Deep Learning, NLP, etc.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Compensation
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Requirements:
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
Only students currently in their final year of a Bachelor's degree in Computer Science, Engineering, or related fields, or recent graduates, are eligible to apply.
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
How to Apply
Interested candidates are invited to submit their resume and complete the assignment using
Shortlisted candidates will be contacted for an interview.
Selection Process
- Initial Screening: We'll review your application for evidence of your skills, experience, and a strong foundation in AI.
- Task Assignment: Candidates need to submit the assignment attached on the careers page, designed to assess your practical skills.
- Performance Review: Our experts will evaluate your task submission, with excellence in this stage being crucial for further consideration.
- Interview: Impressive task performers will be invited for an interview to discuss their potential contribution to our team.
- Onboarding: Successful candidates will join our team, with exciting projects ahead
Apply now to embark on a transformative career journey with PGAGI, where innovation and talent converge!
#artificialintelligence #Machinelearning #AI #AIML #LLM #FastAPI #NLP #openAI #AImodels #AIMLInternship #AIintern #Internship #aimlgraduate #TTS #Voice #Speech
About E2M:
E2M Solutions works as a trusted white-label partner for digital agencies. We support agencies with consistent and reliable delivery through services such as website design, web development, eCommerce, SEO, AI SEO, PPC, AI automation, and content writing. Founded on strong business ethics, we are an equal opportunity organization powered by 300+ experienced professionals, partnering with 400+ digital agencies across the US, UK, Canada, Europe, and Australia. At E2M, we value ownership, consistency, and people who are committed to doing meaningful work and growing together. If you’re someone who dreams big and has the gumption to make those dreams come true, E2M has a place for you.
Role Overview:
We are seeking a highly skilled and client-centric AI Consultant/AI Adoption Specialist to join our growing team. In this pivotal role, you'll serve as a vital link between our clients' strategic objectives and the transformative power of AI. You'll primarily focus on understanding their needs, scoping opportunities, and architecting actionable AI roadmaps.
Key Responsibilities:
- Collaborate closely with clients to understand their challenges and identify opportunities to apply AI.
- Assess client requirements and prepare solution strategies using AI tools and methodologies.
- Work with internal teams to design, propose, and help execute AI-powered solutions.
- Provide AI-based recommendations that align with the client’s business objectives.
- Communicate technical possibilities in a business-friendly manner to decision-makers.
- Take ownership of the client journey from discovery to implementation and support.
- Stay updated with AI trends, tools, and real-world use cases that can benefit clients.
Required Skills & Qualifications:
- Minimum 2+ years of hands-on experience in custom AI development.
- Minimum 3+ years of experience in roles like Project Manager, Customer Success Manager, or Account Manager, preferably in a service-based company or digital agency.
- Strong understanding of AI concepts, trends, and tools (e.g., NLP, ML, Chatbots, Automation, native cloud technologies).
- Some hands-on experience in AI projects – either through execution, coordination, or implementation.
- Ability to manage multiple client engagements and communicate effectively with both technical and non-technical stakeholders.
- Strong problem-solving mindset with the ability to translate business needs into AI opportunities.
- Flexible to work with international clients, especially in the US time zone as needed.
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems.
- Excellent problem-solving, communication, and collaboration skills.
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts and frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
- Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
This role is at Glance
What you will be doing
We are looking for a Data Scientist who can operate at the intersection of classical machine learning, large-scale recommendation systems, and modern agentic AI systems.
You will design, build, and deploy intelligent systems that power Glance’s personalized lock screen and live entertainment experiences. This role blends deep ML craftsmanship with forward-looking innovation in autonomous/agentic systems.
Your responsibilities will include:
Classical ML & Recommendation Systems
- Design and develop large-scale recommendation systems using advanced ML, statistical modeling, ranking algorithms, and deep learning.
- Build and operate machine learning models on diverse, high-volume data sources for personalization, prediction, and content understanding.
- Develop rapid experimentation workflows to validate hypotheses and measure real-world business impact.
- Own data preparation, model training, evaluation, and deployment pipelines in collaboration with engineering counterparts.
- Monitor ML model performance using statistical techniques; identify drifts, failure modes, and improvement opportunities.
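A minimal sketch of the item-based collaborative filtering mentioned above, on a toy user-item rating matrix (production recommenders add rating normalization, implicit-feedback handling, and ANN retrieval):

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def item_similarity(R):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    X = R / np.clip(norms, 1e-9, None)
    return X.T @ X

def score_items(R, sim, user):
    """Score unrated items for a user via similarity-weighted ratings."""
    seen = R[user] > 0
    scores = sim @ R[user]
    scores[seen] = -np.inf  # never re-recommend already-rated items
    return scores

sim = item_similarity(R)
best = int(np.argmax(score_items(R, sim, user=0)))
```

For user 0, who has rated everything except item 2, the only candidate is item 2; with more users and items the same similarity-weighted scoring ranks many candidates at once.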
Agentic Systems & Next-Gen AI
- Build and experiment with agentic AI systems that autonomously observe model performance, trigger experiments, tune hyperparameters, improve ranking policies, or orchestrate ML workflows with minimal human intervention.
- Apply LLMs, embeddings, retrieval-augmented architectures, and multimodal generative models for semantic understanding, content classification, and user preference modeling.
- Design intelligent agents that can automate repetitive decision-making tasks—e.g., candidate generation tuning, feature selection, or context-aware content curation.
- Explore reinforcement learning, contextual bandits, and self-improving systems to power next-generation personalization.
Cross-functional impact
- Collaborate with Designers, UX Researchers, Product Managers, and Software Engineers to integrate ML and GenAI-driven features into Glance’s consumer experiences.
- Contribute to Glance’s ML/AI thought leadership—blogs, case studies, internal tech talks, and industry conferences.
- Thrive in a multi-functional, highly collaborative team environment with engineering, product, business, and creative teams.
- Plus: Interface with stakeholders across Product, Business, Data, and Infrastructure to align ML initiatives with strategic priorities.
The experience we need
We are seeking candidates with deep expertise in ML, recommendation systems, and a strong appetite for building agentic AI systems.
You should have experience with:
- Large-scale ML and recommendation systems (collaborative filtering, ranking models, content-based approaches, embeddings).
- Classical ML and deep learning techniques across NLP, sequence modeling, RL, clustering, and time series.
- Experience deploying ML workflows/models in production systems.
- Big data processing (Spark, distributed data systems) and cloud computing.
- Designing end-to-end ML solutions—from prototype to production.
- Plus: Building or experimenting with LLMs, generative models, and agentic AI workflows (e.g., autonomous evaluators, self-improving pipelines, automated experiment agents).
We value curiosity, problem-solving ability, and a strong bias toward experimentation and production impact.
Qualifications
- Bachelor’s/Master’s in Computer Science, Statistics, Mathematics, Electrical Engineering, Operations Research, Economics, Analytics, or related fields. PhD is a plus.
- 4+ years of industry experience in ML/Data Science, ideally in large-scale recommendation systems or personalization.
- Experience with LLMs, retrieval systems, generative models, or agentic/autonomous ML systems is highly desirable.
- Expertise with algorithms in NLP, Reinforcement Learning, Time Series, and Deep Learning, applied on real-world datasets.
- Proficient in Python and comfortable with statistical tools (R, NumPy, SciPy, PyTorch/TensorFlow, etc.).
- Strong experience with the big data ecosystem (Spark, Hadoop) and cloud platforms (Azure, AWS, GCP/Vertex AI).
- Comfortable working in cross-functional teams.
- Familiarity with privacy-preserving ML and identity-less ecosystems (especially on iOS and Android).
- Excellent communication skills with the ability to simplify complex technical concepts.
Required Skills & Qualifications
● Strong hands-on experience with LLM frameworks and models, including LangChain, OpenAI (GPT-4), and LLaMA
● Proven experience in LLM orchestration, workflow management, and multi-agent system design using frameworks such as LangGraph
● Strong problem-solving skills with the ability to propose end-to-end solutions and contribute at an architectural/system-design level
● Experience building scalable AI-backed backend services using FastAPI and asynchronous programming patterns
● Solid experience with cloud infrastructure on AWS, including EC2, S3, and Load Balancers
● Hands-on experience with Docker and containerization for deploying and managing AI/ML applications
● Good understanding of Transformer-based architectures and how modern LLMs work internally
● Strong skills in data processing and analysis using NumPy and Pandas
● Experience with data visualization tools such as Matplotlib and Seaborn for analysis and insights
● Hands-on experience with Retrieval-Augmented Generation (RAG), including document ingestion, embeddings, and vector search pipelines
● Experience in model optimization and training techniques, including fine-tuning, LoRA, and QLoRA
Nice to Have / Preferred
● Experience designing and operating production-grade AI systems
● Familiarity with cost optimization, observability, and performance tuning for LLM-based applications
● Exposure to multi-cloud or large-scale AI platforms
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
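One common drift check behind the "data drift / model drift" requirement above is the Population Stability Index (PSI) over a feature or score distribution. A minimal sketch, assuming NumPy is available; bin edges, thresholds, and the 0.1/0.25 rule of thumb are conventional choices, not a standard API:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (e.g. training
    data) and live data. Rule of thumb: < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 major drift."""
    # Decile edges from the baseline; digitize both samples into those bins.
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e = np.bincount(np.digitize(expected, cuts), minlength=bins) / len(expected)
    a = np.bincount(np.digitize(actual, cuts), minlength=bins) / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)      # baseline distribution
same = rng.normal(0, 1, 10_000)       # fresh sample, no drift
shifted = rng.normal(1.0, 1, 10_000)  # mean shifted by one sigma
```

In practice a job like this runs on a schedule (e.g. via Airflow) and pushes the PSI value to CloudWatch/Prometheus, where an alert fires when it crosses the chosen threshold.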
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC breakup (Fixed + Variable).
- Are you okay for F2F round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
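The "CI/CT/CD pipelines integrating model validation" responsibility usually reduces to a promotion gate that runs after automated retraining. A minimal sketch under assumed metric names and thresholds (`auc`, `min_auc`, `max_regression` are illustrative, not a standard interface):

```python
def promote(candidate, production, min_auc=0.75, max_regression=0.01):
    """Gate run in the CT pipeline after retraining: promote the candidate
    model only if it clears an absolute quality floor and does not regress
    against the live model beyond a tolerance."""
    if candidate["auc"] < min_auc:
        return False, "below absolute AUC floor"
    if candidate["auc"] < production["auc"] - max_regression:
        return False, "regresses vs production model"
    return True, "promoted"

decision, reason = promote({"auc": 0.82}, {"auc": 0.80})
blocked, why = promote({"auc": 0.70}, {"auc": 0.80})
```

In a real pipeline this check sits between the evaluation step and the deployment step (e.g. as a GitHub Actions job or an Airflow task), with the metrics read from the experiment tracker rather than passed in literals.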
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is looking for enthusiastic Technology Interns from Tier 1 colleges who are eager to build hands-on experience across web technologies, cloud platforms, and emerging technologies such as AI/ML. This role is ideal for final-year students (2026 pass-outs) or fresh graduates seeking real-world exposure in a fast-growing, technology-driven organization.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final-semester students pursuing graduation (2026 pass-outs) or recent graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills
- Basic understanding of AI / Machine Learning concepts
- Exposure to AWS (deployment or cloud fundamentals)
- PHP development
- WordPress development and customization
- JavaScript (ES5 / ES6+)
- jQuery
- AJAX calls and asynchronous handling
- Event handling
- HTML5 & CSS3
- Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
Senior Full Stack Developer – Analytics Dashboard
Job Summary
We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.
The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.
Key Responsibilities
- Design and develop a full-stack web application using modern technologies.
- Build scalable backend APIs to handle data ingestion, processing, and storage.
- Develop interactive dashboards and data visualisations for business reporting.
- Implement secure user authentication and role-based access.
- Integrate with third-party APIs using OAuth and REST protocols.
- Design efficient database schemas for analytical workloads.
- Implement background jobs and scheduled tasks for data syncing.
- Ensure performance, scalability, and reliability of the system.
- Write clean, maintainable, and well-documented code.
- Collaborate with product and design teams to translate requirements into features.
Required Technical Skills
Frontend
- Strong experience with React.js
- Experience with Next.js
- Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
- Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)
Backend
- Strong experience with Node.js (Express or NestJS)
- REST and/or GraphQL API development
- Background job systems (cron, queues, schedulers)
- Experience with OAuth-based integrations
Database
- Strong experience with PostgreSQL
- Data modelling and performance optimisation
- Writing complex analytical SQL queries
DevOps / Infrastructure
- Cloud platforms (AWS)
- Docker and basic containerisation
- CI/CD pipelines
- Git-based workflows
Experience & Qualifications
- 5+ years of professional full stack development experience.
- Proven experience building production-grade web applications.
- Prior experience with analytics, dashboards, or data platforms is highly preferred.
- Strong problem-solving and system design skills.
- Comfortable working in a fast-paced, product-oriented environment.
Nice to Have (Bonus Skills)
- Experience with data pipelines or ETL systems.
- Knowledge of Redis or caching systems.
- Experience with SaaS products or B2B platforms.
- Basic understanding of data science or machine learning concepts.
- Familiarity with time-series data and reporting systems.
- Familiarity with meta ads/Google ads API
Soft Skills
- Strong communication skills.
- Ability to work independently and take ownership.
- Attention to detail and focus on code quality.
- Comfortable working with ambiguous requirements.
Ideal Candidate Profile (Summary)
A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.
About Voiceoc
Voiceoc is a Delhi-based health tech startup founded with a vision to help healthcare companies around the globe by leveraging Voice & Text AI. We started our operations in August 2020, and today leading healthcare companies across the US, India, the Middle East & Africa leverage Voiceoc as a channel to communicate with thousands of patients on a daily basis.
Website: https://www.voiceoc.com/
Responsibilities Include (but not limited to):
We’re looking for a hands-on Chief Technology Officer (CTO) to lead all technology initiatives for Voiceoc’s US business.
This role is ideal for someone who combines strong engineering leadership with deep AI product-building experience — someone who can code, lead, and innovate at the same time.
The CTO will manage the engineering team, guide AI development, interface with clients for technical requirements, and ensure scalable, reliable delivery of all Voiceoc platforms.
Technical Leadership
- Own end-to-end architecture, development, and deployment of Voiceoc’s AI-driven Voice & Text platforms.
- Work closely with the Founder to define the technology roadmap, ensuring alignment with business priorities and client needs.
- Oversee AI/ML feature development — including LLM integrations, automation workflows, and backend systems.
- Ensure system scalability, data security, uptime, and performance across all active deployments (US Projects).
- Collaborate with the AI/ML engineers to guide RAG pipelines, voicebot logic, and LLM prompt optimization.
Hands-On Contribution
- Actively contribute to the core codebase (preferably Python/FastAPI/Node).
- Lead by example in code reviews, architecture design, and debugging.
- Experiment with LLM frameworks (OpenAI, Gemini, Mistral, etc.) and explore their applications in healthcare automation.
Product & Delivery Management
- Translate client requirements into clear technical specifications and deliverables.
- Oversee product versioning, release management, QA, and DevOps pipelines.
- Collaborate with client success and operations teams to handle technical escalations, performance issues, and integration requests.
- Drive AI feature innovation — identify opportunities for automation, personalization, and predictive insights.
Team Management
- Manage and mentor an 8–10 member engineering team.
- Conduct weekly sprint reviews, define coding standards, and ensure timely, high-quality delivery.
- Hire and train new engineers to expand Voiceoc’s technical capability.
- Foster a culture of accountability, speed, and innovation.
Client-Facing & Operational Ownership
- Join client calls (US-based hospitals) to understand technical requirements or resolve issues directly.
- Collaborate with the founder on technical presentations and proof-of-concept discussions.
- Handle A–Z of tech operations for the US business — infrastructure, integrations, uptime, and client satisfaction.
Technical Requirements
Must-Have:
- 5-7 years of experience in software engineering with at least 2+ years in a leadership capacity.
- Strong proficiency in Python (FastAPI, Flask, or Django).
- Experience integrating OpenAI / Gemini / Mistral / Whisper / LangChain.
- Solid experience with AI/ML model integration, LLMs, and RAG pipelines.
- Proven expertise in cloud deployment (AWS / GCP), Docker, and CI/CD.
- Strong understanding of backend architecture, API integrations, and system design.
- Experience building scalable, production-grade SaaS or conversational AI systems.
- Excellent communication and leadership skills — capable of interfacing with both engineers and clients.
Good to Have (Optional):
- Familiarity with telephony & voice tech stacks (Twilio, Exotel, Asterisk etc.).
What We Offer
- Opportunity to lead the entire technology vertical for a growing global healthtech startup.
- Direct collaboration with the Founder/CEO on strategy and innovation.
- Competitive compensation — salary + meaningful equity stake.
- Dynamic and fast-paced work culture with tangible impact on global healthcare.
Other Details
- Work Mode: Hybrid - Noida (Office) + Home
- Work Timing: US Hours
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Designation:- AI Architect
Location: Bangalore/Mysore/Remote
Job Type: Full-time
Department: Software R&D
About the Role
We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.
As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation.
Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques.
Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.
Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
Required Qualifications
- 8-14 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About our Benefits:
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.
Responsibilities
- Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
- Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
- Automate the training, testing and deployment processes for machine learning models
- Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability
- Implement best practices for version control, model reproducibility and governance
- Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
- Troubleshoot and resolve issues related to model deployment and performance
- Ensure compliance with security and data privacy standards in all MLOps activities
- Keep up to date with the latest MLOps tools, technologies and trends
- Provide support and guidance to other team members on MLOps practices
Required Skills And Experience
- 3-10 years of experience in MLOps, DevOps or a related field
- Bachelor's degree in Computer Science, Data Science, or a related field
- Strong understanding of machine learning principles and model lifecycle management
- Experience in Jenkins pipeline development
- Experience in automation scripting
Technical Trainer at the Pollachi location.
Willing to travel around a 30km radius from Pollachi.
Job Description: Technical Trainer
Expertise: HTML, CSS, JavaScript, Python, Artificial Intelligence (AI), and Machine Learning (ML), IoT, and Robotics (Optional).
Work Location: Flexible (Work from Home & Office available)
Target Audience: School students and teachers
Employment Type: Full-time.
Key Responsibilities:
* Develop and deliver content in an easy-to-understand format suitable for varying audience levels.
* Prepare training materials, exercises, and assessments to evaluate participant progress and measure their learning outcomes. Adapt teaching methods to suit both in-person (office) and virtual (work-from-home) formats.
* Stay updated with the latest trends and tools in technology to ensure high-quality training delivery.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least 2 use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
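The classical-ML bar above (linear regression, logistic regression, trees, gradient boosting) can be illustrated with logistic regression implemented directly via gradient descent. A minimal NumPy sketch on synthetic data — in practice one would reach for `sklearn.linear_model.LogisticRegression`:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Logistic regression by batch gradient descent on the log-loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the logits
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient w.r.t. weights
        b -= lr * float(np.mean(p - y))         # gradient w.r.t. bias
    return w, b

# Synthetic, linearly separable data: label is sign of x0 + x1.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
acc = float((pred == y).mean())
```

The learned weight vector ends up roughly proportional to (1, 1), the true separating direction, so training accuracy is high on this toy dataset.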
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
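The "A/B tests with guardrails" responsibility typically starts with a two-proportion z-test on conversion rates. A stdlib-only sketch (the 5% significance level and the sample numbers are illustrative):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of arms A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 5.0% vs 6.0% conversion with 10k users per arm.
z, p = two_proportion_ztest(500, 10_000, 600, 10_000)
ship = p < 0.05 and z > 0   # guardrail: ship only a significant positive lift
```

Real guardrails add more than significance: minimum sample size before peeking, and checks that secondary metrics (latency, revenue per user) have not regressed.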
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
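NDCG, named above as a ranking metric for recommender work, is short enough to write out. A minimal sketch with graded relevance labels (the example relevance lists are illustrative):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_rels, k):
    """DCG of the model's ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(ranked_rels, reverse=True)
    idcg = dcg(ideal[:k])
    return dcg(ranked_rels[:k]) / idcg if idcg else 0.0

perfect = ndcg_at_k([3, 2, 1, 0], k=4)    # items already in ideal order
inverted = ndcg_at_k([0, 1, 2, 3], k=4)   # worst possible order
```

A perfect ranking scores 1.0 by construction; any misordering scores strictly less, which is what makes NDCG comparable across queries.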
About the Company
SimplyFI Softech India Pvt. Ltd. is a product-led company working across AI, Blockchain, and Cloud. The team builds intelligent platforms for fintech, SaaS, and enterprise use cases, focused on solving real business problems with production-grade systems.
Role Overview
This role is for someone who enjoys working hands-on with data and machine learning models. You’ll support real-world AI use cases end to end, from data prep to model integration, while learning how AI systems are built and deployed in production.
Key Responsibilities
- Design, develop, and deploy machine learning models with guidance from senior engineers
- Work with structured and unstructured datasets for cleaning, preprocessing, and feature engineering
- Implement ML algorithms using Python and standard ML libraries
- Train, test, and evaluate models and track performance metrics
- Assist in integrating AI/ML models into applications and APIs
- Perform basic data analysis and visualization to extract insights
- Participate in code reviews, documentation, and team discussions
- Stay updated on ML, AI, and Generative AI trends
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, AI, Data Science, or a related field
- Strong foundation in Python
- Clear understanding of core ML concepts: supervised and unsupervised learning
- Hands-on exposure to NumPy, Pandas, and Scikit-learn
- Basic familiarity with TensorFlow or PyTorch
- Understanding of data structures, algorithms, and statistics
- Good analytical thinking and problem-solving skills
- Comfortable working in a fast-moving product environment
Good to Have
- Exposure to NLP, Computer Vision, or Generative AI
- Experience with Jupyter Notebook or Google Colab
- Basic knowledge of SQL or NoSQL databases
- Understanding of REST APIs and model deployment concepts
- Familiarity with Git/GitHub
- AI/ML internships or academic projects
Role Overview:
We are looking for a PHP & Angular Developer to build and maintain scalable full-stack web applications. The role requires strong backend expertise in PHP, solid frontend development using Angular, and exposure or interest in AI/ML-powered features.
Key Responsibilities:
- Develop and maintain applications using PHP and Angular
- Build and consume RESTful APIs
- Create reusable Angular components using TypeScript
- Work with MySQL/PostgreSQL databases
- Collaborate with Product, QA, and AI/ML teams
- Integrate AI/ML APIs where applicable
- Ensure performance, security, and scalability
- Debug and resolve production issues
Required Skills:
- 5–7 years experience in PHP development
- Strong hands-on with Laravel / CodeIgniter
- Experience with Angular (v10+)
- HTML, CSS, JavaScript, TypeScript
- REST APIs, JSON
- MySQL / PostgreSQL
- Git, MVC architecture
Good to Have:
- Exposure to AI/ML concepts or API integrations
- Python-based ML services (basic)
- Cloud platforms (AWS / Azure / GCP)
- Docker, CI/CD
- Agile/Scrum experience
- Product/start-up background
About Role
We are looking for a hands-on Python Engineer with strong experience in backend development, AI-driven systems, and cloud infrastructure. The ideal candidate should be comfortable working across Python services, AI/ML pipelines, and cloud-native environments, and capable of building production-grade, scalable systems.
This role offers high ownership, exposure to real-world AI systems, and long-term growth, making it ideal for engineers who want to build meaningful products rather than just features.
Key Responsibilities
- Design, develop, and maintain scalable backend services using Python
- Build APIs and services using FastAPI, Flask, or Django
- Ensure performance, reliability, and scalability of backend systems
- Integrate AI/ML models into production systems (model inference, automation)
- Build and maintain AI pipelines for data processing and inference
- Deploy and manage applications on AWS, with exposure to GCP and Azure
- Implement CI/CD pipelines, containerization, and cloud deployments
- Collaborate with product, frontend, and AI teams on end-to-end delivery
- Optimize cloud infrastructure for cost, performance, and reliability
- Follow best practices for security, monitoring, and logging
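The backend pattern implied above — serving AI inference behind async APIs — usually comes down to fanning out concurrent awaits instead of sequential calls. A stdlib-only sketch where `infer` is a hypothetical stand-in for an async call to a model service (in FastAPI the same `gather` pattern sits inside an `async def` route handler):

```python
import asyncio

async def infer(model_name, payload):
    """Hypothetical async model call; the sleep simulates network latency."""
    await asyncio.sleep(0.01)
    return {"model": model_name, "score": len(payload) % 10}

async def handle_request(payload):
    """Fan out to several models concurrently rather than one after another,
    so total latency is max(calls), not sum(calls)."""
    results = await asyncio.gather(
        infer("fraud", payload),
        infer("propensity", payload),
    )
    return {r["model"]: r["score"] for r in results}

out = asyncio.run(handle_request("user-123"))
```

The payoff grows with the number of downstream calls: ten 50 ms model calls complete in roughly 50 ms instead of 500 ms.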
Required Qualifications
- 2–4 years of professional experience in Python development
- Strong understanding of backend frameworks: FastAPI, Flask, Django
- Hands-on experience integrating AI/ML systems into applications
- Solid experience with AWS (EC2, S3, Lambda, RDS, IAM)
- Exposure to Google Cloud Platform (GCP) and Microsoft Azure
- Experience with Docker and CI/CD workflows
- Understanding of scalable system design principles
- Strong problem-solving and debugging skills
- Ability to work collaboratively in a product-driven environment
Perks and Benefits
- Work in Nikhil Kamath funded startup
- ₹3 – ₹4.6 LPA with ESOPs linked to performance and tenure
- Opportunity to build long-term wealth through ESOP participation
- Work on production-scale AI systems used in real-world applications
- Hands-on experience with AWS, GCP, and Azure architectures
- Work with a team that values clean engineering, experimentation, and execution
- Exposure to modern backend frameworks, AI pipelines, and DevOps practices
- High autonomy, fast decision-making, and real ownership of features and systems
Job Title: AI/ML Engineer – Voice (2–3 Years)
Location: Bengaluru (On-site)
Employment Type: Full-time
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI/ML Engineer – Voice with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents.
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization).
- Implement voice understanding modules, speech intent extraction, and context tracking.
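The STT → LLM → TTS flow described above can be sketched as a thin orchestration layer. Everything below is a stub for illustration; a real system would plug actual ASR, LLM, and TTS engines behind the same callables:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoicePipeline:
    stt: Callable[[bytes], str]   # audio in -> transcript
    llm: Callable[[str], str]     # transcript -> reply text
    tts: Callable[[str], bytes]   # reply text -> audio out

    def handle_turn(self, audio_in: bytes) -> bytes:
        transcript = self.stt(audio_in)
        reply = self.llm(transcript)
        return self.tts(reply)

# Stub implementations, just to exercise the flow end to end.
pipeline = VoicePipeline(
    stt=lambda audio: audio.decode("utf-8"),   # pretend-transcribe
    llm=lambda text: f"You said: {text}",      # pretend-reason
    tts=lambda text: text.encode("utf-8"),     # pretend-synthesize
)

print(pipeline.handle_turn(b"hello"))  # b'You said: hello'
```

The value of this shape in production is that each stage can be swapped (streaming vs. batch, local vs. hosted) without changing the orchestration code.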
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text + voice workflows.
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation.
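As a flavor of the retrieval step in a RAG pipeline, here is a deliberately tiny sketch that swaps real embeddings for bag-of-words vectors; the documents and query are made up:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Loan EMI is calculated from principal, rate, and tenure.",
    "KYC requires identity and address proof.",
]
context = retrieve("how is my EMI calculated", docs)[0]
prompt = f"Answer using only this context:\n{context}\nQuestion: how is my EMI calculated"
```

In a production pipeline the embedding would come from a trained model and the documents from a vector database; the retrieve-then-prompt structure stays the same.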
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Awareness of financial compliance standards (plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Realtime STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
Strong Data Scientist / Machine Learning / AI Engineer Profile
Mandatory (Experience 1) – Must have 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
Mandatory (Experience 2) – Must have strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
Mandatory (Experience 3) – Must have hands-on experience in a minimum of 2 use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
Mandatory (Experience 4) – Must have strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
Mandatory (Experience 5) – Must have experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
Mandatory (Company) – Must be from product companies; avoid candidates from financial domains (e.g., JPMorgan, banks, fintech)
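The classical algorithms listed above (linear regression, logistic regression, etc.) can be illustrated with a minimal from-scratch sketch — here, logistic regression fitted by plain gradient descent on a made-up 1-D dataset:

```python
import math

# Toy dataset: label is 1 when x > 0.
X = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and bias b by gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(w * xi + b) - yi   # prediction error
        gw += err * xi
        gb += err
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

preds = [1 if sigmoid(w * xi + b) >= 0.5 else 0 for xi in X]
```

In practice one would of course reach for scikit-learn's `LogisticRegression`; the sketch only shows the optimization loop the mandate refers to.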
Review Criteria
- Strong AI/ML Test Engineer
- 5+ years of overall experience in Testing/QA
- 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
- Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
- Must have strong experience to design AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
- Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
- Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
- Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
- Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
- Candidate must be based within Delhi NCR (100 km radius)
Preferred
- Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.
Job Specific Criteria
- CV Attachment is mandatory
- Have you worked with large datasets for AI/ML testing?
- Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
- Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
- Are you willing to relocate to Delhi and why (if not from Delhi)?
- Are you available for a face-to-face round?
Role & Responsibilities
- 5+ years' experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models
- Proven expertise in validating AI models for accuracy, bias, explainability, and performance to ensure decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent
- Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules
- Strong skills in data science, equipped to design test strategies for AI systems including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
- Proficient in data science for defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
- Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
- Education & Certifications: Bachelor's/Master's in Engineering, CS/IT, Data Science, or equivalent
- Preferred Certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials
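The accuracy, boundary, and adversarial-testing responsibilities above can be sketched as PyTest-style checks. The `score` model, labelled examples, and thresholds below are all hypothetical stand-ins for a real scoring model under test:

```python
# Stub scoring model standing in for a real fraud/bid-scoring model.
def score(amount: float) -> float:
    # Hypothetical rule: risk grows with amount, clamped to [0, 1].
    return max(0.0, min(1.0, amount / 10_000))

def test_accuracy_threshold():
    # Validate against a small labelled set with a minimum accuracy bar.
    labelled = [(100, 0), (9_900, 1), (200, 0), (9_500, 1)]
    correct = sum((score(a) >= 0.5) == bool(y) for a, y in labelled)
    assert correct / len(labelled) >= 0.9

def test_boundary_inputs():
    # Scores must stay in range even at extreme or malformed amounts.
    for amount in (0.0, -1.0, 1e12):
        assert 0.0 <= score(amount) <= 1.0

def test_adversarial_perturbation():
    # A tiny input perturbation should not flip the decision
    # (a basic manipulation / gaming check).
    base = score(9_500)
    assert abs(score(9_500 * 1.001) - base) < 0.05

test_accuracy_threshold()
test_boundary_inputs()
test_adversarial_perturbation()
```

Under PyTest these functions would be collected automatically and run in CI; calling them directly here just demonstrates that the checks pass for the stub.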
Role: Senior AI Engineer
Work Location: TechGenzi Coimbatore Office (ODC for Tiramai.ai)
Employment Type: Full-time
Experience: 2–5 years (Full-stack development with AI exposure)
About the Role & Work Location
The selected candidate will be employed by Tiramai.ai and will work exclusively on Tiramai.ai projects. The role will be based out of TechGenzi’s Coimbatore office, which functions as an Offshore Development Center (ODC) supporting Tiramai.ai’s product and engineering initiatives.
Primary Focus
As an AI Engineer at our enterprise SaaS and AI-native organization, you will play a pivotal role in building secure, scalable, and intelligent digital solutions. This role combines full-stack development expertise with applied AI skills to create next-generation platforms that empower enterprises to modernize and act smarter with AI. You will work on AI-driven features, APIs, and cloud-native applications that are production-ready, compliance-conscious, and aligned with our mission of delivering responsible AI innovation.
Key Responsibilities
- Design, develop, and maintain full-stack applications using Python (backend) and React/Angular (frontend).
- Build and integrate AI-driven modules, leveraging GenAI, ML models, and AI-native tools into enterprise-grade SaaS products.
- Develop scalable REST APIs and microservices with security, compliance, and performance in mind.
- Collaborate with architects, product managers, and cross-functional teams to translate requirements into production-ready features.
- Ensure adherence to secure coding standards, data privacy regulations, and human-in-the-loop AI principles.
- Participate in code reviews, system design discussions, and continuous integration/continuous deployment (CI/CD) practices.
- Contribute to reusable libraries, frameworks, and best practices to accelerate AI platform development.
Skills Required
- Strong proficiency in Python for backend development.
- Frontend expertise in React.js or Angular with 2+ years of experience.
- Hands-on experience in full SDLC development (design, build, test, deploy, maintain).
- Familiarity with AI/ML frameworks (e.g., TensorFlow, PyTorch) or GenAI tools (LangChain, vector DBs, OpenAI APIs).
- Knowledge of cloud-native development (AWS/Azure/GCP), Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of REST APIs, microservices, and enterprise-grade security standards.
- Ability to work collaboratively in fast-paced, cross-functional teams with strong problem-solving and analytical skills.
- Exposure to responsible AI principles (explainability, bias mitigation, compliance) is a plus.
Growth Path
- AI Engineer (2–4 years) focus on full-stack + AI integration, delivering production-ready features.
- Senior AI Engineer (4–6 years) lead modules, mentor juniors, and drive AI feature development at scale.
- Lead AI Engineer (6–8 years) own solution architecture for AI features, ensure security/compliance, collaborate closely with product/tech leaders.
- AI Architect / Engineering Manager (8+ years) shape AI platform strategy, guide large-scale deployments, and influence product/technology roadmap.
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
We are looking for an AI Engineer (Computer Vision) to design and deploy intelligent video analytics solutions using CCTV feeds. The role focuses on analyzing real-time and recorded video to extract insights such as attention levels, engagement, movement patterns, posture, and overall group behavior. You will work closely with data scientists, backend teams, and product managers to build scalable, privacy-aware AI systems.
Key Responsibilities
- Develop and deploy computer vision models for CCTV-based video analytics
- Analyze gaze, posture, facial expressions, movement, and crowd behavior
- Build real-time and batch video processing pipelines
- Train, fine-tune, and optimize deep learning models for production
- Convert visual signals into actionable insights & dashboards
- Ensure privacy, security, and ethical AI compliance
- Improve model accuracy, latency, and scalability
- Collaborate with engineering teams for end-to-end deployment
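As a flavor of the CCTV analytics described above, frame differencing — the simplest motion cue — can be sketched without OpenCV on toy grayscale "frames" (the pixel values are illustrative; a real pipeline would operate on decoded video frames):

```python
def motion_score(prev: list[list[int]], curr: list[list[int]], thresh: int = 30) -> float:
    """Fraction of pixels whose intensity changed by more than `thresh`."""
    changed = total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            changed += abs(p - c) > thresh   # bool counts as 0/1
    return changed / total

still = [[10, 10], [10, 10]]
moved = [[10, 200], [10, 200]]   # right column brightened: something moved
score_still = motion_score(still, still)   # 0.0 — no change
score_moved = motion_score(still, moved)   # 0.5 — half the pixels changed
```

Production systems replace this with background subtraction, object detection, and tracking, but thresholded frame deltas remain a common cheap pre-filter before heavier models run.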
Required Skills
- Strong experience in Computer Vision & Deep Learning
- Proficiency in Python
- Hands-on experience with OpenCV, TensorFlow, PyTorch
- Knowledge of CNNs, object detection, tracking, pose estimation
- Experience with video analytics & CCTV data
- Understanding of model optimization and deployment
Good to Have
- Experience with real-time video streaming (RTSP, CCTV feeds)
- Familiarity with edge AI or GPU optimization
- Exposure to education analytics or surveillance systems
- Knowledge of cloud deployment (AWS/GCP/Azure)

is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Skills: GenAI, machine learning models, AWS/Azure, Redshift, Python, Apache Airflow, DevOps; minimum 4–5 years of experience as an Architect; should be from a Data Engineering background.
• 8+ years of experience in data engineering, data science, or architecture roles.
• Experience designing enterprise-grade AI platforms.
• Certification in major cloud platforms (AWS/Azure/GCP).
• Experience with governance tooling (Collibra, Alation) and lineage systems
• Strong hands-on background in data engineering, analytics, or data science.
• Expertise in building data platforms using:
o Cloud: AWS (Glue, S3, Redshift), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow).
o Compute: Spark, Databricks, Flink.
o Data modelling: dimensional, relational, NoSQL, graph.
• Proficiency with Python, SQL, and data pipeline orchestration tools.
• Understanding of ML frameworks and tools: TensorFlow, PyTorch, Scikit-learn, MLflow, etc.
• Experience implementing MLOps, model deployment, monitoring, logging, and versioning.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of AWS cloud experience, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay with a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
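Data-drift monitoring, as listed above, is often implemented with the Population Stability Index (PSI). A minimal framework-free sketch, with made-up baseline and live samples (in production the baseline would come from training data and the live sample from inference logs):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(bins - 1, max(0, int((x - lo) / width)))
            counts[i] += 1
        # Smooth empty bins to keep the log defined.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]
    p, q = hist(expected), hist(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]             # uniform on [0, 1)
live_ok = [i / 100 for i in range(100)]              # same distribution
live_drifted = [0.9 + i / 1000 for i in range(100)]  # shifted upward

# Common rule of thumb: PSI > 0.2 signals significant drift worth alerting on.
```

An Airflow task would typically compute this per feature on a schedule and push the value to CloudWatch or Prometheus for threshold-based alerting.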
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models
- Develop features to improve model accuracy and outcomes.
- Deploy models into production using Docker, Kubernetes, and cloud services.
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets.
- Hands-on experience with cloud AI/ML services.
- Exposure to RAG architecture
Senior Machine Learning Engineer
About the Role
We are looking for a Senior Machine Learning Engineer who can take business problems, design appropriate machine learning solutions, and make them work reliably in production environments.
This role is ideal for someone who not only understands machine learning models, but also knows when and how ML should be applied, what trade-offs to make, and how to take ownership from problem understanding to production deployment.
Beyond technical skills, we need someone who can lead a team of ML Engineers, design end-to-end ML solutions, and clearly communicate decisions and outcomes to both engineering teams and business stakeholders. If you enjoy solving real problems, making pragmatic decisions, and owning outcomes from idea to deployment, this role is for you.
What You’ll Be Doing
Building and Deploying ML Models
- Design, build, evaluate, deploy, and monitor machine learning models for real production use cases.
- Take ownership of how a problem is approached, including deciding whether ML is the right solution and what type of ML approach fits the problem.
- Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
- Work with data engineers to design and validate data pipelines that feed ML systems.
- Optimize solutions for accuracy, performance, cost, and maintainability, not just model metrics.
Leading and Architecting ML Solutions
- Lead a team of ML Engineers, providing technical direction, mentorship, and review of ML approaches.
- Architect ML solutions that integrate seamlessly with business applications and existing systems.
- Ensure models and solutions are explainable, auditable, and aligned with business goals.
- Drive best practices in MLOps, including CI/CD, model monitoring, retraining strategies, and operational readiness.
- Set clear standards for how ML problems are framed, solved, and delivered within the team.
Collaborating and Communicating
- Work closely with business stakeholders to understand problem statements, constraints, and success criteria.
- Translate business problems into clear ML objectives, inputs, and expected outputs.
- Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML solutions into production systems.
- Present ML decisions, trade-offs, and outcomes to non-technical stakeholders in a simple and understandable way.
What We’re Looking For
Machine Learning Expertise
- Strong understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
- Experience choosing appropriate modeling approaches based on the problem, available data, and business constraints.
- Experience training, fine-tuning, and deploying ML and LLM models for real-world use cases.
- Proficiency in common ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.
Production and Cloud Deployment
- Hands-on experience deploying and running ML systems in production environments on AWS, GCP, or Azure.
- Good understanding of MLOps practices, including CI/CD for ML models, monitoring, and retraining workflows.
- Experience with Docker, Kubernetes, or serverless architectures is a plus.
- Ability to think beyond deployment and consider operational reliability and long-term maintenance.
Data Handling
- Strong programming skills in Python.
- Proficiency in SQL and working with large-scale datasets.
- Ability to reason about data quality, data limitations, and how they impact ML outcomes.
- Familiarity with distributed computing frameworks like Spark or Dask is a plus.
Leadership and Communication
- Ability to lead and mentor ML Engineers and work effectively across teams.
- Strong communication skills to explain ML concepts, decisions, and limitations to business teams.
- Comfortable taking ownership and making decisions in ambiguous problem spaces.
- Passion for staying updated with advancements in ML and AI, with a practical mindset toward adoption.
Experience Needed
- 6+ years of experience in machine learning engineering or related roles.
- Proven experience designing, selecting, and deploying ML solutions used in production.
- Experience managing ML systems after deployment, including monitoring and iteration.
- Proven track record of working in cross-functional teams and leading ML initiatives.
Job Description:
Exp Range - [6y to 10y]
Qualifications:
- Minimum Bachelor’s Degree in Engineering, Computer Applications, or AI/Data Science
- Experience working in product companies/startups developing, validating, and productionizing AI models in recent projects within the last 3 years.
- Prior experience with Python, NumPy, scikit-learn, Pandas, ETL/SQL, and BI tools in previous roles preferred
Require Skills:
- Must Have – Direct hands-on experience working in Python for scripting, automation, analysis, and orchestration
- Must Have – Experience working with ML libraries such as scikit-learn, TensorFlow, PyTorch, Pandas, NumPy, etc.
- Must Have – Experience working with models such as Random Forest, K-means clustering, BERT, etc.
- Should Have – Exposure to querying warehouses and APIs
- Should Have – Experience with writing moderate to complex SQL queries
- Should Have – Experience analyzing and presenting data with BI tools or Excel
- Must Have – Very strong communication skills to work with technical and non-technical stakeholders in a global environment
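One of the models named above, K-means, fits in a short from-scratch sketch (1-D data with made-up values) showing the assign-then-update loop that libraries like scikit-learn implement at scale:

```python
def kmeans(xs: list[float], centers: list[float], iters: int = 20) -> list[float]:
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers = kmeans(data, centers=[0.0, 10.0])
# Two clear groups -> centers converge near 1.0 and 9.0.
```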
Roles and Responsibilities:
- Work with Business stakeholders, Business Analysts, Data Analysts to understand various data flows and usage.
- Analyse and present insights about the data and processes to Business Stakeholders
- Validate and test appropriate AI/ML models based on the prioritization and insights developed while working with the Business Stakeholders
- Develop and deploy customized models on Production data sets to generate analytical insights and predictions
- Participate in cross functional team meetings and provide estimates of work as well as progress in assigned tasks.
- Highlight risks and challenges to the relevant stakeholders so that work is delivered in a timely manner.
- Share knowledge and best practices with broader teams to make everyone aware and more productive.
We are seeking an experienced AI Architect to design, build, and scale production-ready AI voice conversation agents deployed locally (on-prem / edge / private cloud) and optimized for GPU-accelerated, high-throughput environments.
You will own the end-to-end architecture of real-time voice systems, including speech recognition, LLM orchestration, dialog management, speech synthesis, and low-latency streaming pipelines—designed for reliability, scalability, and cost efficiency.
This role is highly hands-on and strategic, bridging research, engineering, and production infrastructure.
Key Responsibilities
Architecture & System Design
- Design low-latency, real-time voice agent architectures for local/on-prem deployment
- Define scalable architectures for ASR → LLM → TTS pipelines
- Optimize systems for GPU utilization, concurrency, and throughput
- Architect fault-tolerant, production-grade voice systems (HA, monitoring, recovery)
Voice & Conversational AI
- Design and integrate:
- Automatic Speech Recognition (ASR)
- Natural Language Understanding / LLMs
- Dialogue management & conversation state
- Text-to-Speech (TTS)
- Build streaming voice pipelines with sub-second response times
- Enable multi-turn, interruptible, natural conversations
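Barge-in (interruptible playback) from the list above can be sketched with asyncio. The chunking and timings are illustrative, not a production implementation; a real system would stream actual TTS audio and trigger the event from VAD on the caller's line:

```python
import asyncio

async def speak(chunks: list[str], barge_in: asyncio.Event) -> list[str]:
    """Stream TTS chunks until the caller barges in."""
    played = []
    for chunk in chunks:
        if barge_in.is_set():
            break                      # caller interrupted; stop speaking
        played.append(chunk)
        await asyncio.sleep(0.01)      # simulate streaming one audio chunk
    return played

async def demo() -> list[str]:
    barge_in = asyncio.Event()
    task = asyncio.create_task(speak(["one", "two", "three", "four"], barge_in))
    await asyncio.sleep(0.025)         # caller starts talking mid-utterance
    barge_in.set()
    return await task

played = asyncio.run(demo())           # playback stops partway through
```

Checking the interrupt flag between chunks (rather than only between utterances) is what keeps perceived interruption latency within one chunk's duration.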
Model & Inference Engineering
- Deploy and optimize local LLMs and speech models (quantization, batching, caching)
- Select and fine-tune open-source models for voice use cases
- Implement efficient inference using TensorRT, ONNX, CUDA, vLLM, Triton, or similar
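The quantization technique mentioned above can be illustrated with a framework-free sketch of symmetric int8 weight quantization (the weight values are arbitrary; real toolchains such as TensorRT or ONNX Runtime handle this per-tensor or per-channel):

```python
def quantize(weights: list[float]) -> tuple[list[int], float]:
    # Symmetric scheme: map the largest-magnitude weight to +/-127.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [qi * scale for qi in q]

weights = [0.313, -1.27, 0.051, 0.92]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Round-trip error is bounded by half the quantization step (scale / 2),
# which is why int8 usually costs little accuracy but quarters memory
# versus float32.
```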
Infrastructure & Production
- Design GPU-based inference clusters (bare metal or Kubernetes)
- Implement autoscaling, load balancing, and GPU scheduling
- Establish monitoring, logging, and performance metrics for voice agents
- Ensure security, privacy, and data isolation for local deployments
Leadership & Collaboration
- Set architectural standards and best practices
- Mentor ML and platform engineers
- Collaborate with product, infra, and applied research teams
- Drive decisions from prototype → production → scale
Required Qualifications
Technical Skills
- 7+ years in software / ML systems engineering
- 3+ years designing production AI systems
- Strong experience with real-time voice or conversational AI systems
- Deep understanding of LLMs, ASR, and TTS pipelines
- Hands-on experience with GPU inference optimization
- Strong Python and/or C++ background
- Experience with Linux, Docker, Kubernetes
AI & ML Expertise
- Experience deploying open-source LLMs locally
- Knowledge of model optimization:
- Quantization
- Batching
- Streaming inference
- Familiarity with voice models (e.g., Whisper-like ASR, neural TTS)
Systems & Scaling
- Experience with high-QPS, low-latency systems
- Knowledge of distributed systems and microservices
- Understanding of edge or on-prem AI deployments
Preferred Qualifications
- Experience building AI voice agents or call automation systems
- Background in speech processing or audio ML
- Experience with telephony, WebRTC, SIP, or streaming audio
- Familiarity with Triton Inference Server / vLLM
- Prior experience as Tech Lead or Principal Engineer
What We Offer
- Opportunity to architect state-of-the-art AI voice systems
- Work on real-world, high-scale production deployments
- Competitive compensation and equity (if applicable)
- High ownership and technical influence
- Collaboration with top-tier AI and infrastructure talent
Company Description
VMax e-Solutions India Private Limited, based in Hyderabad, is a dynamic organization specializing in Open Source ERP Product Development and Mobility Solutions. As an ISO 9001:2015 and ISO 27001:2013 certified company, VMax is dedicated to delivering tailor-made and scalable products, with a strong focus on e-Governance projects across multiple states in India. The company's innovative technologies aim to solve real-life problems and enhance the daily services accessed by millions of citizens. With a culture of continuous learning and growth, VMax provides its team members opportunities to develop expertise, take ownership, and grow their careers through challenging and impactful work.
About the Role
We’re hiring a Senior Data Scientist with deep real-time voice AI experience and strong backend engineering skills.
You’ll own and scale our end-to-end voice agent pipeline that powers AI SDRs, customer support agents, and internal automation agents on calls. This is a hands-on, highly technical role where you’ll design and optimize low-latency, high-reliability voice systems.
You’ll work closely with our founders, product, and platform teams, with significant ownership over architecture and benchmarks.

What You’ll Do
1. Own the voice stack end-to-end – from telephony / WebRTC entrypoints to STT, turn-taking, LLM reasoning, and TTS back to the caller.
2. Design for real-time – architect and optimize streaming pipelines for sub-second latency, barge-in, interruptions, and graceful recovery on bad networks.
3. Integrate and tune models – evaluate, select, and integrate STT/TTS/LLM/VAD providers (and self-hosted models) for different use-cases, balancing quality, speed, and cost.
4. Build orchestration & tooling – implement agent orchestration logic, evaluation frameworks, call simulators, and dashboards for latency, quality, and reliability.
5. Harden for production – ensure high availability, observability, and robust fault-tolerance for thousands of concurrent calls in customer VPCs.
6. Shape the voice roadmap – influence how voice fits into our broader Agentic OS vision (simulation, analytics, multi-agent collaboration, etc.).
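The barge-in and turn-taking behavior described in points 1–3 can be modeled as a small state machine. This is a hedged sketch: the states and event names below are illustrative, not any specific framework's API, and a real system would also cancel in-flight TTS playback and LLM generation when a barge-in occurs.

```python
# Tiny turn-taking state machine for a voice agent. While the agent is
# SPEAKING, a VAD "speech" event from the caller interrupts playback
# (barge-in) and returns the turn to the caller.

LISTENING, THINKING, SPEAKING = "listening", "thinking", "speaking"

def step(state, event):
    """Advance the turn-taking state machine on one event."""
    if state == LISTENING and event == "end_of_utterance":
        return THINKING              # caller finished; generate a reply
    if state == THINKING and event == "reply_ready":
        return SPEAKING              # start streaming TTS to the caller
    if state == SPEAKING and event == "speech":
        return LISTENING             # barge-in: stop TTS immediately
    if state == SPEAKING and event == "playback_done":
        return LISTENING             # reply finished normally
    return state                     # ignore irrelevant events

state = LISTENING
for ev in ["end_of_utterance", "reply_ready", "speech"]:
    state = step(state, ev)
print(state)  # → listening  (caller barged in mid-reply)
```

Keeping this logic explicit makes graceful recovery easier to reason about: unexpected events (dropped frames, duplicate VAD triggers) simply leave the state unchanged instead of wedging the call.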
You’re a Great Fit If You Have
1. 6+ years of software engineering experience (backend or full-stack) in production systems.
2. Strong experience building real-time voice agents or similar systems using:
STT / ASR (e.g. Whisper, Deepgram, AssemblyAI, AWS Transcribe, GCP Speech)
TTS (e.g. ElevenLabs, PlayHT, AWS Polly, Azure Neural TTS)
VAD / turn-taking and streaming audio pipelines
LLMs (e.g. OpenAI, Anthropic, Gemini, local models)
3. Proven track record designing and operating low-latency, high-throughput streaming systems (WebRTC, gRPC, websockets, Kafka, etc.).
4. Hands-on experience integrating ML models into live, user-facing applications with real-time inference & monitoring.
5. Solid backend skills with Python and TypeScript/Node.js; strong fundamentals in distributed systems, concurrency, and performance optimization.
6. Experience with cloud infrastructure – especially AWS (EKS, ECS, Lambda, SQS/Kafka, API Gateway, load balancers).
7. Comfortable working in Kubernetes / Docker environments, including logging, metrics, and alerting.
8. Startup DNA – at least 2 years in an early or mid-stage startup where you shipped fast, owned outcomes, and worked close to the customer.
Nice to Have
1. Experience self-hosting AI models (ASR / TTS / LLMs) and optimizing them for latency, cost, and reliability.
2. Telephony integration experience (e.g. Twilio, Vonage, Aircall, SignalWire, or similar).
3. Experience with evaluation frameworks for conversational agents (call quality scoring, hallucination checks, compliance rules, etc.).
4. Background in speech processing, signal processing, or dialog systems.
5. Experience deploying into enterprise VPC / on-prem environments and working with security/compliance constraints.