50+ Machine Learning (ML) Jobs in India
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least two of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Candidates from product companies strongly preferred
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if the candidate is from outside Mumbai)?
- Reason for change (if the candidate has been at their current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
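The A/B-testing bullet above can be made concrete with a standard two-proportion z-test on conversion rates; this is an illustrative sketch only (the counts are hypothetical), not a methodology prescribed by the posting:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of control (a) vs. variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 2.0% vs. 2.6% conversion on 10k users each
z = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(z > 1.96)  # roughly: significant at the 5% level (two-sided)
```

In practice a guardrailed rollout would also check counter-metrics (latency, refunds) before shipping the variant.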
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
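As a small illustration of the ranking metrics named in the recommender bullet above, here is a minimal pure-Python NDCG@k sketch; the relevance values are hypothetical:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain of the top-k items in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalised by the ideal (descending-sorted) ranking's DCG."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# A ranked list where the most relevant item was placed third:
print(round(ndcg_at_k([0, 1, 3], k=3), 3))  # → 0.587
```

Production systems typically use a library implementation (e.g. `sklearn.metrics.ndcg_score`) and average over many query sessions.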
🎯 About Us
Stupa builds cutting-edge AI for real-time sports intelligence: automated commentary, player tracking, non-contact biomechanics, ball trajectory, LED graphics, and broadcast-grade stats. Your models will be seen live by millions across global events.
🌍 Global Travel
Work that literally travels the world. You’ll deploy systems at international tournaments across Asia, Europe, and the Middle East, working inside world-class stadiums, courts, and TV production rooms.
✨ What You’ll Build
- AI Language Products
- Automated live commentary (LLM + ASR + OCR), real-time subtitles, AI storytelling.
- Non-Contact Measurement (CV + Tracking + Pose Estimation)
- Player velocity, footwork, acceleration, shot recognition, 2D/3D reconstruction, real-time edge inference.
- End-to-End Streaming Pipelines
- Temporal segmentation, multi-modal fusion, low-latency edge + cloud deployment.
🧠 What You’ll Do
Train and optimise ML/CV/NLP models for live sports, build tracking & pose pipelines, create LLM/ASR-based commentary systems, deploy on edge/cloud, ship rapid POCs→production, manage datasets & accuracy, and collaborate with product, engineering, and broadcast teams.
🧩 Requirements
Core Skills:
- Strong ML fundamentals (NLP/CV/multimodal)
- PyTorch/TensorFlow, transformers, ASR or pose estimation
- Data pipelines, optimisation, evaluation
- Deployment (Docker, ONNX, TensorRT, FastAPI, K8s, edge GPU)
- Strong Python engineering
Bonus: Sports analytics, LLM fine-tuning, low-latency optimisation, prior production ML systems.
🌟 Why Join Us
- Your models go LIVE in global sports broadcasts
- International travel for tournaments
- High ownership, zero bureaucracy
- Build India’s most advanced AI × Sports product
- Cool, futuristic problems + freedom to innovate
- Up to ₹40LPA for exceptional talent
🔥 You Belong Here If You…
Build what the world hasn’t seen • Want impact on live sports • Thrive in fast-paced ownership-driven environments.
Job Description: Applied Scientist
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the
travel industry. By leveraging machine learning and artificial intelligence, we enable precise
forecasting and optimized pricing for hotel revenue management. Backed by Highgate
Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a
global presence.
About the Role
We are seeking a highly motivated Applied Scientist to join our Data Science team. This
individual will play a key role in enhancing and scaling our existing forecasting and pricing
systems and developing new capabilities that support our intelligent decision-making
platform.
We are looking for team members who:
- Are deeply curious and passionate about applying machine learning to real-world problems.
- Demonstrate strong ownership and the ability to work independently.
- Excel in both technical execution and collaborative teamwork.
- Have a track record of shipping products in complex environments.
What You’ll Do
- Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization.
- Work with large-scale, noisy, and temporally complex datasets.
- Collaborate cross-functionally with engineering and product teams to move models from research to production.
- Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
- Contribute to the development of an AI-first platform that redefines hospitality revenue management.
Required Qualifications
- Bachelor’s, Master’s, or PhD in Computer Science or a related field.
- 3–5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.
- Acceptable degrees: Master’s or PhD in Operations Research, Industrial/Systems Engineering, Computer Science, or Applied Mathematics.
- Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
- Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
- Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
- Understanding of machine learning and deep learning foundations.
- Ability to translate research into commercial solutions.
- Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
- Ability to work independently and manage projects end-to-end.
Preferred Experience
- Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
- Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
- Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
- Proven experience in collaborative product development environments, working closely with engineering and product teams.
Why LodgIQ?
- Join a fast-growing, mission-driven company transforming the future of hospitality.
- Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
- Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
- Competitive salary and performance bonuses.
- For more information, visit https://www.lodgiq.com
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development and product creation along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.
We are seeking a hands-on eCommerce Analytics & Insights Lead to help establish and scale our newly launched eCommerce business. The ideal candidate is highly data-savvy, understands eCommerce deeply, and can lead KPI definition, performance tracking, insights generation, and data-driven decision-making.
You will work closely with cross-functional teams—Buying, Marketing, Operations, and Technology—to build dashboards, uncover growth opportunities, and guide the evolution of our online channel.
Key Responsibilities
Define & Monitor eCommerce KPIs
- Set up and track KPIs across the customer journey: traffic, conversion, retention, AOV/basket size, repeat rate, etc.
- Build KPI frameworks aligned with business goals.
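For illustration, the core KPIs listed above reduce to a few simple ratios over event and order data; the counts below are hypothetical, and a real setup would compute these per channel and per cohort from the warehouse:

```python
# Hypothetical daily figures from a web-analytics export
sessions, add_to_carts, orders, revenue = 10_000, 1_200, 300, 450_000.0

conversion_rate = orders / sessions            # site-wide conversion
cart_abandonment = 1 - orders / add_to_carts   # checkout-funnel drop-off
aov = revenue / orders                         # average order value (basket size proxy)

print(round(conversion_rate, 3), round(cart_abandonment, 2), round(aov, 1))
```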
Data Tracking & Infrastructure
- Partner with marketing, merchandising, operations, and tech teams to define data tracking requirements.
- Collaborate with eCommerce and data engineering teams to ensure data quality, completeness, and availability.
Dashboards & Reporting
- Build dashboards and automated reports to track:
- Overall site performance
- Category & product performance
- Marketing ROI and acquisition effectiveness
Insights & Performance Diagnosis
Identify trends, opportunities, and root causes of underperformance in areas such as:
- Product availability & stock health
- Pricing & promotions
- Checkout funnel drop-offs
- Customer retention & cohort behavior
- Channel acquisition performance
Conduct:
- Cohort analysis
- Funnel analytics
- Customer segmentation
- Basket analysis
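A minimal sketch of the cohort analysis mentioned above, using hypothetical activity events; in practice this would run over warehouse data via SQL or pandas rather than in-memory dicts:

```python
from collections import defaultdict

# Hypothetical (user_id, signup_month, active_month) activity records
events = [
    ("u1", "2024-01", "2024-01"), ("u1", "2024-01", "2024-02"),
    ("u2", "2024-01", "2024-01"),
    ("u3", "2024-02", "2024-02"), ("u3", "2024-02", "2024-03"),
]

cohort_members = defaultdict(set)   # signup month -> users in that cohort
active = defaultdict(set)           # (cohort, month) -> users active that month
for user, signup, month in events:
    cohort_members[signup].add(user)
    active[(signup, month)].add(user)

# Month-1 retention of the Jan-2024 cohort:
jan = cohort_members["2024-01"]
feb_active = active[("2024-01", "2024-02")]
print(len(feb_active) / len(jan))
```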
Data-Driven Growth Initiatives
- Propose and evaluate experiments, optimization ideas, and quick wins.
- Help business teams interpret KPIs and take informed decisions.
Required Skills & Experience
- 2–5 years of experience in eCommerce analytics (grocery retail experience preferred).
- Strong understanding of eCommerce metrics and analytics frameworks (Traffic → Conversion → Repeat → LTV).
- Proficiency with tools such as:
- Google Analytics / GA4
- Excel
- SQL
- Power BI or Tableau
- Experience working with:
- Digital marketing data
- CRM and customer data
- Product/category performance data
- Ability to convert business questions into analytical tasks and produce clear, actionable insights.
- Familiarity with:
- Customer journey mapping
- Funnel analysis
- Basket and behavioral analysis
- Comfortable working in fast-paced, ambiguous, and build-from-scratch environments.
- Strong communication and stakeholder management skills.
- Strong technical capability in at least one programming language: SQL or PySpark.
Good to Have
- Experience with eCommerce platforms (Shopify, Magento, Salesforce Commerce, etc.).
- Exposure to A/B testing, recommendation engines, or personalization analytics.
- Knowledge of Python/R for deeper analytics (optional).
- Experience with tracking setup (GTM, event tagging, pixel/event instrumentation).

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Artificial Intelligence Research Intern
We are looking for a passionate and skilled AI Intern to join our dynamic team for a 6-month full-time internship. This is an excellent opportunity to work on cutting-edge technologies in Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing (NLP), contributing to real-world projects that create a tangible impact.
Key Responsibilities:
• Research, design, develop, and implement AI and Deep Learning algorithms.
• Work on NLP systems and models for tasks such as text classification, sentiment analysis, and data extraction.
• Evaluate and optimize machine learning and deep learning models.
• Collect, process, and analyze large-scale datasets.
• Use advanced techniques for text representation and classification.
• Write clean, efficient, and testable code for production-ready applications.
• Perform web scraping and data extraction using Python (requests, BeautifulSoup, Selenium, APIs, etc.).
• Collaborate with cross-functional teams and clearly communicate technical concepts to both technical and non-technical audiences.
Required Skills and Experience:
• Theoretical and practical knowledge of AI, ML, and DL concepts.
• Good understanding of Python and libraries such as TensorFlow, PyTorch, Keras, scikit-learn, NumPy, Pandas, SciPy, and Matplotlib, plus NLP tools like NLTK, spaCy, etc.
• Strong understanding of Neural Network Architectures (CNNs, RNNs, LSTMs).
• Familiarity with data structures, data modeling, and software architecture.
• Understanding of text representation techniques (n-grams, BoW, TF-IDF, etc.).
• Comfortable working in Linux/UNIX environments.
• Basic knowledge of HTML, JavaScript, HTTP, and Networking.
• Strong communication skills and a collaborative mindset.
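As a small illustration of the text-representation techniques listed above, here is a bare-bones TF-IDF computation (a simple, unsmoothed variant; the toy corpus is invented, and libraries like scikit-learn apply smoothing and normalisation on top of this idea):

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]

def tf_idf(term, doc, docs):
    """Term frequency weighted by (unsmoothed) inverse document frequency."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(1 for d in docs if term in d)      # documents containing the term
    idf = math.log(len(docs) / df)
    return tf * idf

print(round(tf_idf("cat", docs[0], docs), 3))  # → 0.183
```

Rare terms (high IDF) score higher than ubiquitous ones like "the", whose IDF here is log(3/2) at best.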
Job Type: Full-Time Internship
Location: In-Office (Bhayander)

Job Title: AI/ML Engineer – Voice (2–3 Years)
Location: Bengaluru (On-site)
Employment Type: Full-time
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI Engineer Voice with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents.
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization)
- Implement voice understanding modules, speech intent extraction, and context tracking.
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text + voice workflows
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation.
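The retrieval step of the RAG pipelines mentioned above amounts to nearest-neighbour search over embeddings; this sketch uses toy hand-written vectors as stand-ins for a real embedding model and vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy pre-computed chunk embeddings (a real system would embed documents
# with a model and store them in a vector database)
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "loan eligibility": [0.1, 0.8, 0.2],
    "branch timings": [0.0, 0.2, 0.9],
}
query_vec = [0.2, 0.9, 0.1]  # e.g. an embedded "am I eligible for a loan?"

best = max(corpus, key=lambda k: cosine(query_vec, corpus[k]))
print(best)  # the retrieved chunk is then stuffed into the LLM prompt
```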
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Awareness of financial compliance standards (plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Realtime STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
ML Intern
Hyperworks Imaging is a cutting-edge technology company based out of Bengaluru, India since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real world problems. We are rapidly growing, working with multiple companies around the world.
JOB OVERVIEW
We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future.
The ideal candidate will work on a complete ML pipeline starting from extraction, transformation and analysis of data to developing novel ML algorithms. The candidate will implement latest research papers and closely work with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.
RESPONSIBILITIES:
- Create AI agents using Model Context Protocols (MCPs), Claude Code, DSPy, etc.
- Develop custom evals for AI agents.
- Build and maintain ML pipelines
- Optimize and evaluate ML models to ensure accuracy and performance.
- Define system requirements and integrate ML algorithms into cloud based workflows.
- Write clean, well-documented, and maintainable code following best practices
REQUIREMENTS:
- 1–3+ years of experience in data science, machine learning, or a similar role.
- Demonstrated expertise with Python, PyTorch, and TensorFlow.
- Graduated/graduating with a B.Tech/M.Tech/PhD in Electrical Engineering, Electronics Engineering, Computer Science, Maths and Computing, or Physics.
- Coursework in Linear Algebra, Probability, Image Processing, Deep Learning, and Machine Learning.
- Demonstrated experience with Model Context Protocols (MCPs), DSPy, AI agents, MLOps, etc.
WHO CAN APPLY:
Only candidates who meet all of the following will be considered:
- have relevant skills and interests
- can commit full time
- can show prior work and deployed projects
- can start immediately
Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.
SALARY DETAILS: Commensurate with experience.
JOINING DATE: Immediate
JOB TYPE: Full-time
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
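One common statistic behind the drift detection mentioned above is the Population Stability Index (PSI); a minimal sketch with hypothetical bin proportions (the 0.2 threshold is an industry rule of thumb, not part of this posting):

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.
    `expected` and `actual` are lists of bin proportions summing to 1."""
    eps = 1e-6  # guard against log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

train_dist = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live_dist = [0.40, 0.30, 0.20, 0.10]   # distribution observed in production

score = psi(train_dist, live_dist)
print(score > 0.2)  # rule of thumb: PSI > 0.2 signals notable drift
```

A monitoring pipeline would compute this per feature on a schedule and push the scores to Grafana/Prometheus/CloudWatch for alerting.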
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Strong Senior Data Scientist (AI/ML/GenAI) Profile
Mandatory (Experience 1) – Must have 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
Mandatory (Experience 2) – Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
Mandatory (Experience 3) – Must have 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
Mandatory (Experience 4) – Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Mandatory (Note): Budget is up to ₹25 lakhs for up to 5 years of experience, up to ₹35 lakhs for up to 7 years, and up to ₹45 lakhs for up to 12 years; the client can also pay a maximum hike of 30–40% based on candidature
This opportunity through ClanX is for Parspec (direct payroll with Parspec)
Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.
Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.
Company Details:
Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.
Requirements:
- Bachelor’s or Master’s degree in Science or Engineering.
- 5-7 years of experience in ML and data science.
- Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
- Strong understanding of foundational models and transformers.
- Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
- Recent experience working with large datasets.
- Python experience with ML libraries such as NumPy, pandas, scikit-learn, Matplotlib, and NLTK.
- Experience with frameworks and tools such as Hugging Face, spaCy, BERT, TensorFlow, PyTorch, OpenRouter, or Modal.
Good to haves
- Experience building scalable AI pipelines for extracting structured data from unstructured sources.
- Experience with cloud platforms, containerization, and managed AI services.
- Knowledge of MLOps practices, CI/CD, monitoring, and governance.
- Experience with AWS or Django.
- Familiarity with databases and web application architecture.
- Experience with OCR or PDF tools.
Responsibilities:
- Design, develop, and deploy NLP, CV, and recommendation systems
- Train and implement deep learning models
- Research and explore novel ML architectures
- Build and maintain end-to-end ML pipelines
- Collaborate across product, design, and engineering teams
- Work closely with business stakeholders to shape product features
- Ensure high scalability and performance of AI solutions
- Uphold best practices in engineering and contribute to a culture of excellence
- Actively participate in R&D and innovation within the team
Interview Process
- Technical interview (coding, ML concepts, project walkthrough)
- System design and architecture round
- Culture fit and leadership interaction
- Final offer discussion
This opportunity through ClanX is for Parspec (direct payroll with Parspec)
Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.
Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.
Requirements:
- 3 to 4 years of relevant experience in ML and AI roles
- Strong grasp of ML, deep learning, and model deployment
- Proficient in Python and libraries like numpy, pandas, sklearn, etc.
- Experience with TensorFlow/Keras or PyTorch
- Familiar with AWS/GCP platforms
- Strong coding skills and ability to ship production-ready solutions
- Bachelor's/Master's in Engineering or related field
- Curious, self-driven, and a fast learner
- Passionate about NLP, LLMs, and state-of-the-art AI technologies
- Comfortable with collaboration across globally distributed teams
Preferred (Not Mandatory):
- Experience with Django, databases, and full-stack environments
- Familiarity with OCR and PDF processing
- Competitive programming or Kaggle participation
- Prior work with distributed teams across time zones
Responsibilities:
- Design, develop, and deploy NLP, CV, and recommendation systems
- Train and implement deep learning models
- Research and explore novel ML architectures
- Build and maintain end-to-end ML pipelines
- Collaborate across product, design, and engineering teams
- Work closely with business stakeholders to shape product features
- Ensure high scalability and performance of AI solutions
- Uphold best practices in engineering and contribute to a culture of excellence
- Actively participate in R&D and innovation within the team
Interview Process
- Technical interview (coding, ML concepts, project walkthrough)
- System design and architecture round
- Culture fit and leadership interaction
- Final offer discussion
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of hands-on AWS cloud experience, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay with a face-to-face (F2F) round?
- Has the candidate filled the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
About Newpage Solutions
Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organizations—including pharmaceutical, biotech, and healthcare leaders—to build transformative AI and data-driven technologies addressing real-world health challenges.
From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.
We are proud to be Great Place to Work® certified for three consecutive years, hold a top Glassdoor rating, and were named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.
We foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.
Newpage looks for candidates who are invested in long-term impact. Applications with a pattern of frequent job changes may not align with the values we prioritize.
Your Mission
We’re seeking a highly experienced, technically exceptional AI Development Lead to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation—from model selection and orchestration design to scalable backend implementation—while collaborating with cross-functional teams to transform AI research into production-ready solutions.
This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.
What You’ll Do
- Architect, build, and optimize production-grade Generative AI applications using modern frameworks such as LangChain, LlamaIndex, Semantic Kernel, or custom orchestration layers.
- Lead the design of Agentic AI frameworks (Agno, AutoGen, CrewAI, etc.), enabling intelligent, goal-driven workflows with memory, reasoning, and contextual awareness.
- Develop and deploy Retrieval-Augmented Generation (RAG) systems integrating LLMs, vector databases, and real-time data pipelines.
- Design robust prompt engineering and refinement frameworks to improve reasoning quality, adaptability, and user relevance.
- Deliver high-performance backend systems using Python (FastAPI, Flask, or similar) aligned with SOLID principles, OOP, and clean architecture.
- Own the complete SDLC, including design, implementation, code reviews, testing, CI/CD, observability, and post-deployment monitoring.
- Use AI-assisted environments (e.g., Cursor, GitHub Copilot, Claude Code) to accelerate development while maintaining code quality and maintainability.
- Collaborate closely with MLOps engineers to containerize, scale, and deploy models using Docker, Kubernetes, and modern CI/CD pipelines.
- Integrate APIs from OpenAI, Anthropic, Cohere, Mistral, or open-source LLMs (Llama 3, Mixtral, etc.).
- Leverage vector databases such as FAISS, Pinecone, Weaviate, or Chroma for semantic search, RAG, and context retrieval.
- Develop custom tools, libraries, and frameworks that improve development velocity and reliability across AI teams.
- Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
- Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
- Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.
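The retrieval half of the RAG systems described above reduces to nearest-neighbour search over embedding vectors. A minimal sketch in plain Python, where the hand-made three-dimensional "embeddings" stand in for a real embedding model and a real vector store such as FAISS or Pinecone (all document ids, vectors, and function names are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, index, k=2):
    """Return the ids of the k documents most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy vector index: doc id -> embedding (a real system would use a vector DB).
index = {
    "pricing_faq":   [0.9, 0.1, 0.0],
    "refund_policy": [0.8, 0.3, 0.1],
    "onboarding":    [0.0, 0.1, 0.9],
}
query = [1.0, 0.2, 0.0]  # pretend embedding of "how much does it cost?"

top = retrieve(query, index, k=2)
assert top == ["pricing_faq", "refund_policy"]
```

The retrieved passages would then be placed into the LLM prompt alongside the user's question, which is the "augmented generation" half of the pipeline.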
What You Bring
- 8+ years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
- Python experience with strong grasp of OOP, SOLID, and scalable microservice architecture.
- Proven track record developing and deploying GenAI/LLM-based systems in production.
- Hands-on work with LangChain, LlamaIndex, or custom orchestration frameworks.
- Deep familiarity with OpenAI, Anthropic, Hugging Face, or open-source LLM APIs.
- Advanced understanding of prompt construction, optimization, and evaluation techniques.
- End-to-end implementation experience using vector databases and retrieval pipelines.
- Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
- Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
- Obsession with clean code, system scalability, and performance optimization.
- Ability to balance rapid prototyping with long-term maintainability.
- Skill at working independently while collaborating effectively across teams.
- A habit of staying ahead of the curve on new AI models, frameworks, and best practices.
- A founder’s mindset and a love of solving ambiguous, high-impact technical challenges.
- Bachelor’s or Master’s in Computer Science, Machine Learning, or a related technical discipline.
What We Offer
At Newpage, we’re building a company that works smart and grows with agility—where driven individuals come together to do work that matters. We offer:
- A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
- Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
- Balance that lasts – We respect your time and support a healthy integration of work and life.
- Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
- Meaningful rewards – Competitive compensation that recognizes both contribution and potential.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Work with algorithms and architectures ranging from traditional statistical methods to deep learning, including LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touchpoints include ScyllaDB (BigTable-like), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the Sagemaker pipeline with ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period: 0 to 15 days only
Hybrid work mode: 3 days in office, 2 days at home
We are looking for a Senior AI / ML Engineer to join our fast-growing team and help build AI-driven data platforms and intelligent solutions. If you are passionate about AI, data engineering, and building real-world GenAI systems, this role is for you!
🔧 Key Responsibilities
• Develop and deploy AI/ML models for real-world applications
• Build scalable pipelines for data processing, training, and evaluation
• Work on LLMs, RAG, embeddings, and agent workflows
• Collaborate with data engineers, product teams, and software developers
• Write clean, efficient Python code and ensure high-quality engineering practices
• Handle model monitoring, performance tuning, and documentation
Required Skills
• 2–5 years of experience in AI/ML engineering
• Strong knowledge of Python, TensorFlow/PyTorch
• Experience with LLMs, GenAI, RAG, or NLP
• Knowledge of Databricks, MLOps or cloud platforms (AWS/Azure/GCP)
• Good understanding of APIs, distributed systems, and data pipelines
🎯 Good to Have
• Experience in healthcare, SaaS, or big data
• Exposure to Databricks Mosaic AI
• Experience building AI agents
Role Overview
Join our core tech team to build the intelligence layer of Clink's platform. You'll architect AI agents, design prompts, build ML models, and create systems powering personalized offers for thousands of restaurants. High-growth opportunity working directly with founders, owning critical features from day one.
Why Clink?
Clink revolutionizes restaurant loyalty using AI-powered offer generation and customer analytics:
- ML-driven customer behavior analysis (Pattern detection)
- Personalized offers via LLMs and custom AI agents
- ROI prediction and forecasting models
- Instagram marketing rewards integration
Tech Stack:
- Python
- FastAPI
- PostgreSQL
- Redis
- Docker
- LLMs
You Will Work On:
AI Agents: Design and optimize AI agents
ML Models: Build redemption prediction, customer segmentation, ROI forecasting
Data & Analytics: Analyze data, build behavior pattern pipelines, create product bundling matrices
System Design: Architect scalable async AI pipelines, design feedback loops, implement A/B testing
Experimentation: Test different LLM approaches, explore hybrid LLM+ML architectures, prototype new capabilities
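The A/B testing mentioned under System Design usually boils down to comparing redemption rates between a control offer and an AI-generated offer. A hedged sketch of the standard two-proportion z-test, with entirely made-up conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control offer: 200/2000 redemptions; new AI-generated offer: 260/2000.
z = two_proportion_z(200, 2000, 260, 2000)
# |z| > 1.96 -> significant at the 5% level (two-sided).
assert z > 1.96
```

With these numbers z is roughly 2.97, so the uplift clears the usual 1.96 cutoff; real guardrails would also account for sample-size planning and multiple-testing corrections.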
Must-Have Skills
Technical: 0-2 years AI/ML experience (projects/internships count), strong Python, LLM API knowledge, ML fundamentals (supervised learning, clustering), Pandas/NumPy proficiency
Mindset: Extreme curiosity, logical problem-solving, builder mentality (side projects/hackathons), ownership mindset
Nice to Have: Pydantic, FastAPI, statistical forecasting, PostgreSQL/SQL, scikit-learn, food-tech/loyalty domain interest
Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.
● Technical Skills:
○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good
○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.
○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.
○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).
○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.
● Soft Skills:
○ Experience in hiring team members
○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.
○ Excellent communication and leadership skills, with the ability to mentor junior engineers.
○ A strong desire to use technology for social good.
Preferred Qualifications
● Experience working in a startup or smaller team environment.
● Familiarity with the healthcare or public health sector.
● Experience in developing applications for low-resource environments.
● Experience with data management in privacy and security-sensitive applications.
We are looking for enthusiastic engineers passionate about building and maintaining solutioning platform components on cloud and Kubernetes infrastructure. The ideal candidate will go beyond traditional SRE responsibilities by collaborating with stakeholders, understanding the applications hosted on the platform, and designing automation solutions that enhance platform efficiency, reliability, and value.
[Technology and Sub-technology]
• ML Engineering / Modelling
• Python Programming
• GPU frameworks: TensorFlow, Keras, PyTorch, etc.
• Cloud-based ML development and deployment on AWS or Azure
[Qualifications]
• Bachelor’s Degree in Computer Science, Computer Engineering or equivalent technical degree
• Proficient programming knowledge in Python or Java and ability to read and explain open source codebase.
• Good foundation of Operating Systems, Networking and Security Principles
• Exposure to DevOps tools, with experience integrating platform components into Sagemaker/ECR and AWS Cloud environments.
• 4-6 years of relevant experience working on AI/ML projects
[Primary Skills]:
• Excellent analytical and problem-solving skills.
• Exposure to Machine Learning and GenAI technologies.
• Understanding and hands-on experience with AI/ML modeling, libraries, frameworks, and tools (TensorFlow, Keras, PyTorch, etc.)
• Strong knowledge of Python and SQL/NoSQL
• Cloud-based ML development and deployment on AWS or Azure
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
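For reference, the LoRA technique named in the criteria above is a low-rank additive update to frozen weights: the effective matrix is W + (alpha/r)·B·A, where only the small matrices A and B are trained. A toy numeric sketch in plain Python (shapes and values are made up; real fine-tuning would use a library such as PEFT on actual model weights):

```python
def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_merge(W, A, B, alpha):
    """Merge a trained LoRA adapter into the frozen weight matrix.

    W: (d_out x d_in) frozen weights; B: (d_out x r); A: (r x d_in).
    The effective weight is W + (alpha / r) * B @ A."""
    r = len(A)
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Tiny example: d_out = d_in = 2, rank r = 1, alpha = 2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]   # d_out x r
A = [[0.5, 0.5]]     # r x d_in
merged = lora_merge(W, A, B, alpha=2.0)
# delta = B @ A = [[0.5, 0.5], [0.0, 0.0]]; scale = 2/1 = 2
assert merged == [[2.0, 1.0], [0.0, 1.0]]
```

At serving time the merged matrix can replace W outright, so inference cost matches the base model; keeping B·A separate instead is what makes adapters cheap to train and swap.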
Job Title: Sensor Expert – MLFF (Multi-Lane Free Flow)
Engagement Type: Consultant / External Associate
Organization: Bosch - MPIN
Location: Bangalore, India
Purpose of the Role:
To provide technical expertise in sensing technologies for MLFF (Multi-Lane Free Flow) and ITMS (Intelligent Traffic Management System) solutions. The role focuses on camera systems, AI/ML-based computer vision, and multi-sensor integration (camera, RFID, radar) to drive solution performance, optimization, and business success.
Key Responsibilities:
• Lead end-to-end sensor integration for MLFF and ITMS platforms.
• Manage camera systems, ANPR, and data packet processing.
• Apply AI/ML techniques for performance optimization in computer vision.
• Collaborate with System Integrators and internal teams on architecture and implementation.
• Support B2G proposals (smart city, mining, infrastructure projects) with domain expertise.
• Drive continuous improvement in deployed MLFF solutions.
Key Competencies:
• Deep understanding of camera and sensor technologies, AI/ML for vision systems, and system integration.
• Experience in PoC development and solution optimization.
• Strong analytical, problem-solving, and collaboration skills.
• Familiarity with B2G environments and public infrastructure tenders preferred.
Qualification & Experience:
• Bachelor’s/Master’s in Electronics, Electrical, or Computer Science.
• 8–10 years of experience in camera technology, AI/ML, and sensor integration.
• Proven track record in system design, implementation, and field optimization.
🚀 Join GuppShupp: Build Bharat's First AI Lifelong Friend
GuppShupp's mission is nothing short of building Bharat's First AI Lifelong Friend. This is more than just a chatbot—it's about creating a truly personalized, consistently available companion that understands and grows with the user over a lifetime. We are pioneering this deeply personal experience using cutting-edge Generative AI.
We're hiring a Founding AI Engineer (1+ Year Experience) to join our small team of A+ builders and craft the foundational LLM and infrastructure behind this mission.
If you are passionate about the following, we want to hear from you:
- Deep personalization and managing complex user state/memory.
- Building high-quality, high-throughput AI tools.
- Next-level infrastructure at an incredible scale (millions of users).
What you'll do (responsibilities)
We're looking for an experienced individual contributor who enjoys working alongside other experienced engineers and iterating on AI systems.
Prompt Engineering & Testing
- Write, test, and iterate numerous prompt variations.
- Identify and fix failures, biases, or edge cases in AI responses.
Advanced LLM Development
- Engineer solutions for long-term conversational memory and statefulness in LLMs.
- Implement techniques (e.g., retrieval-augmented generation (RAG) or summarization) to effectively manage and extend the context window for complex tasks.
Collaboration & Optimization
- Work with product and growth teams to turn feature goals into effective technical prompts.
- Optimize prompts for diverse use cases (e.g., chat, content, personalization).
LLM Fine-Tuning & Management
- Prepare, clean, and format datasets for training.
- Run fine-tuning jobs on smaller, specialized language models.
- Assist in deploying, monitoring, and maintaining these models.
What we're looking for (qualifications)
You are an AI Engineer who has successfully shipped systems in this domain for over a year—you won't need ramp-up time. We prioritize continuous learning and hands-on skill development over formal qualifications. Crucially, we are looking for a teammate driven by a sense of duty to the user and a passion for taking full ownership of their contributions.
Role: Azure AI Tech Lead
Experience: 3.5–7 years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle.
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
We're seeking an AI/ML Engineer to join our team.
As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop. Your role will involve researching cutting-edge algorithms, data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
- Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today (e.g., engineering APIs around OpenAI)
- AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
- Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
- Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
- Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
- Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
- Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
- Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
- Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
Requirements
- Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
- Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
- Proficiency in programming languages commonly used for AI/ML. Preferably Python
- Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
- Strong understanding of machine learning algorithms, statistics, and data structures
- Experience with data preprocessing, data wrangling, and feature engineering
- Knowledge of deep learning architectures, neural networks, and transfer learning
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
- Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
- Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
- Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode- 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: - 2 Technical round + 1 Client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Work with algorithms and architectures spanning traditional statistical methods to deep learning, including employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation; Kafka and ChaosSearch logs for troubleshooting. Other tech touch points include ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
- AI-based systems design and development across the entire pipeline, from image/video ingest and metadata ingest through processing, encoding, and transmission.
- Implementation and testing of advanced computer vision algorithms.
- Dataset search, preparation, and annotation; training, testing, and fine-tuning of vision CNN models; multimodal AI, LLMs, hardware deployment, and explainability.
- Detailed analysis of results; documentation, version control, client support, and upgrades.
Experience: 6 to 8 years
Location: Bangalore
Job Description-
- Extensive experience with machine learning utilizing the latest analytical models in Python. (i.e., experience in generating data-driven insights that play a key role in rapid decision-making and driving business outcomes.)
- Extensive experience using Tableau, table design, PowerApps, Power BI, Power Automate, and cloud environments, or equivalent experience designing/implementing data analysis pipelines and visualization.
- Extensive experience using AI agent platforms. (AI-based data analysis is a required skill for data analysts.)
- A statistics degree or an equivalent ability to interpret the results of statistical analysis.
At iDreamCareer, we’re on a mission to democratize career guidance for millions of young learners across India and beyond. Technology is at the heart of this mission — and we’re looking for an Engineering Manager who thrives in high-ownership environments, thinks with an enterprising mindset, and gets excited about solving problems that genuinely change lives.
This is not just a management role. It’s a chance to shape the product, scale the platform, influence the engineering culture, and lead a team that builds with heart and hustle.
As a Director of Engineering here, you will:
- Lead a talented team of engineers while remaining hands-on with architecture and development.
- Champion the use of AI/ML, LLM-driven features, and intelligent systems to elevate learner experience.
- Inspire a culture of high performance, clear thinking, and thoughtful engineering.
- Partner closely with product, design, and content teams to deliver delightful, meaningful user experiences.
- Bring structure, clarity, and energy to complex problem-solving.
This role is ideal for someone who loves building, mentoring, scaling, and thinking several steps ahead.
Key Responsibilities
Technical Leadership & Ownership
- Lead end-to-end development across backend, frontend, architecture, and infrastructure in partnership with product and design teams.
- Stay hands-on with the MERN stack, Python, and AI/ML technologies, while guiding and coaching a high-performance engineering team.
- Architect, develop, and maintain distributed microservices, event-driven systems, and robust APIs on AWS.
AI/ML Engineering
- Build and deploy AI-powered features, leveraging LLMs, RAG pipelines, embeddings, vector databases, and model evaluation frameworks.
- Drive prompt engineering, retrieval optimization, and continuous refinement of AI system performance.
- Champion the adoption of modern AI coding tools and emerging AI platforms to boost team productivity.
Cloud, Data, DevOps & Scaling
- Own deployments and auto-scaling on AWS (ECS, Lambda, CloudFront, SQS, SES, ELB, S3).
- Build and optimize real-time and batch data pipelines using BigQuery and other analytics tools.
- Implement CI/CD pipelines for Dockerized applications, ensuring strong observability through Prometheus, Loki, Grafana, CloudWatch.
- Enforce best practices around security, code quality, testing, and system performance.
Collaboration & Delivery Excellence
- Partner closely with product managers, designers, and QA to deliver features with clarity, speed, and reliability.
- Drive agile rituals, ensure engineering predictability, and foster a culture of ownership, innovation, and continuous improvement.
Required Skills & Experience
- 8-15 years of experience in full-stack or backend engineering with at least 5+ years leading engineering teams.
- Strong hands-on expertise in the MERN stack and modern JavaScript/TypeScript ecosystems.
- 5+ years building and scaling production-grade applications and distributed systems.
- 2+ years building and deploying AI/ML products — including training, tuning, integrating, and monitoring AI models in production.
- Practical experience with SQL, NoSQL, vector databases, embeddings, and production-grade RAG systems.
- Strong understanding of LLM prompt optimization, evaluation frameworks, and AI-driven system design.
- Hands-on with AI developer tools, automation utilities, and emerging AI productivity platforms.
Preferred Skills
- Familiarity with LLM orchestration frameworks (LangChain, LlamaIndex, etc.) and advanced tool-calling workflows.
- Experience building async workflows, schedulers, background jobs, and offline processing systems.
- Exposure to modern frontend testing frameworks, QA automation, and performance testing.
Job description
Job Title: Python Trainer (Workshop Model Freelance / Part-time)
Location: Thrissur & Ernakulam
Program Duration: 30 or 60 Hours (Workshop Model)
Job Type: Freelance / Contract
About the Role:
We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.
Key Responsibilities:
Conduct offline workshop-style Python training sessions (30 or 60 hours total).
Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.
Customize the curriculum based on learners' skill levels and project needs.
Guide students through mini-projects, assignments, and coding challenges.
Ensure effective knowledge transfer through practical, real-world examples.
Requirements:
Experience: 15 years of training or industry experience in Python programming.
Technical Skills: Strong knowledge of Python, including OOPs concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.
Prior experience in academic or corporate training preferred.
Excellent communication and presentation skills.
Mode: Offline Workshop (Thrissur / Ernakulam)
Duration: Flexible – 30 Hours or 60 Hours Total
Organization: KGiSL Microcollege
Role: Other
Industry Type: Education / Training
Department: Other
Employment Type: Full Time, Permanent
Role Category: Other
Education
UG: Any Graduate
Key Skills
Data Science, Artificial Intelligence
About Us
Mobileum is a leading provider of Telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence.
More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs.
Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore, and the UK, with a global headcount of over 1,800.
Join Mobileum Team
At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!
Role: GenAI/LLM Engineer – Domain-Specific AI Solutions (Telecom)
About the Job
We are seeking a highly skilled GenAI/LLM Engineer to design, fine-tune, and operationalize Large Language Models (LLMs) for telecom business applications. This role will be instrumental in building domain-specific GenAI solutions, including the development of domain-specific LLMs, to transform telecom operational processes, customer interactions, and internal decision-making workflows.
Roles & Responsibility:
- Build domain-specific LLMs by curating domain-relevant datasets and training/fine-tuning LLMs tailored for telecom use cases.
- Fine-tune pre-trained LLMs (e.g., GPT, Llama, Mistral) using telecom-specific datasets to improve task accuracy and relevance.
- Design and implement prompt engineering frameworks, optimize prompt construction and context strategies for telco-specific queries and processes.
- Develop Retrieval-Augmented Generation (RAG) pipelines integrated with vector databases (e.g., FAISS, Pinecone) to enhance LLM performance on internal knowledge.
- Build multi-agent LLM pipelines using orchestration tools (LangChain, LlamaIndex) to support complex telecom workflows.
- Collaborate cross-functionally with data engineers, product teams, and domain experts to translate telecom business logic into GenAI workflows.
- Conduct systematic model evaluation focused on minimizing hallucinations, improving domain-specific accuracy, and tracking performance improvements on business KPIs.
- Contribute to the development of internal reusable GenAI modules, coding standards, and best practices documentation.
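The retrieval step of the RAG pipelines mentioned above can be sketched in a few lines. This is a minimal illustration over pre-computed embeddings, assuming cosine similarity as the distance metric; a production system would delegate this step to a vector database such as FAISS or Pinecone:

```python
import numpy as np

def top_k_retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query,
    ranked by cosine similarity over pre-computed embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q            # cosine similarity of each doc against the query
    return np.argsort(-scores)[:k]
```

The returned indices identify which document chunks to place into the LLM's context window before generation.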
Desired Profile
- Familiarity with multi-modal LLMs (text + tabular/time-series).
- Experience with OpenAI function calling, LangGraph, or agent-based orchestration.
- Exposure to telecom datasets (e.g., call records, customer tickets, network logs).
- Experience with low-latency inference optimization (e.g., quantization, distillation).
Technical skills
- Hands-on experience in fine-tuning transformer models, prompt engineering, and RAG architecture design.
- Experience delivering production-ready AI solutions in enterprise environments; telecom exposure is a plus.
- Advanced knowledge of transformer architectures, fine-tuning techniques (LoRA, PEFT, adapters), and transfer learning.
- Proficiency in Python, with significant experience using PyTorch, Hugging Face Transformers, and related NLP libraries.
- Practical expertise in prompt engineering, RAG pipelines, and LLM orchestration tools (LangChain, LlamaIndex).
- Ability to build domain-adapted LLMs, from data preparation to final model deployment.
Work Experience
7+ years of professional experience in AI/ML, with at least 2+ years of practical exposure to LLMs or GenAI deployments.
Educational Qualification
- Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, or a related field.
- Ph.D. preferred for foundational model work and advanced research focus.
We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
About the Company
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position Summary
We are seeking a highly experienced and visionary Senior Engineering Manager – Inference Services to lead and scale our team responsible for building high-performance inference systems that power cutting-edge AI/ML products. This role requires a blend of strong technical expertise, leadership skills, and product-oriented thinking to drive innovation, scalability, and reliability of our inference infrastructure.
Key Responsibilities
Leadership & Strategy
- Lead, mentor, and grow a team of engineers focused on inference platforms, services, and optimizations.
- Define the long-term vision and roadmap for inference services in alignment with product and business goals.
- Partner with cross-functional leaders in ML, Product, Data Science, and Infrastructure to deliver robust, low-latency, and scalable inference solutions.
Engineering Excellence
- Architect and oversee development of distributed, production-grade inference systems ensuring scalability, efficiency, and reliability.
- Drive adoption of best practices for model deployment, monitoring, and continuous improvement of inference pipelines.
- Ensure high availability, cost optimization, and performance tuning of inference workloads across cloud and on-prem environments.
Innovation & Delivery
- Evaluate emerging technologies, frameworks, and hardware accelerators (GPUs, TPUs, etc.) to continuously improve inference efficiency.
- Champion automation and standardization of model deployment and lifecycle management.
- Balance short-term delivery with long-term architectural evolution.
People & Culture
- Build a strong engineering culture focused on collaboration, innovation, and accountability.
- Provide coaching, feedback, and career development opportunities to team members.
- Foster a growth mindset and data-driven decision-making.
Basic Qualifications
Experience
- 12+ years of software engineering experience with at least 4–5 years in engineering leadership roles.
- Proven track record of managing high-performing teams delivering large-scale distributed systems or ML platforms.
- Experience in building and operating inference systems, ML serving platforms, or real-time data systems at scale.
Technical Expertise
- Strong understanding of machine learning model deployment, serving, and optimization (batch & real-time).
- Proficiency in cloud-native technologies (Kubernetes, Docker, microservices architecture).
- Hands-on knowledge of inference frameworks (TensorFlow Serving, Triton Inference Server, TorchServe, etc.) and hardware accelerators.
- Solid background in programming languages (Python, Java, C++ or Go) and performance optimization techniques.
Preferred Qualifications
- Experience with MLOps platforms and end-to-end ML lifecycle management.
- Prior work in high-throughput, low-latency systems (ad-tech, search, recommendations, etc.).
- Knowledge of cost optimization strategies for large-scale inference workloads.
Position: QA Engineer – Machine Learning Systems (5 - 7 years)
Location: Remote (Company in Mumbai)
Company: Big Rattle Technologies Private Limited
Immediate Joiners only.
Summary:
The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.
Key Responsibilities:
Test Strategy & Governance
- Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
- Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.
Data Quality & Transformation
- Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
- Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
- Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines.
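As one illustration of the automated data quality checks described above, here is a minimal pytest-friendly sketch using Pandas; the column names, null threshold, and natural key are hypothetical placeholders, not the actual data contract:

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame, max_null_frac: float = 0.05) -> list:
    """Return a list of failed-check messages; an empty list means the batch passes."""
    failures = []
    # Schema/contract check: required columns must be present
    required = {"site", "material", "price", "timestamp"}
    missing = required - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
        return failures
    # Null-threshold check per column
    null_frac = df[list(required)].isna().mean()
    for col, frac in null_frac.items():
        if frac > max_null_frac:
            failures.append(f"{col}: null fraction {frac:.2%} exceeds {max_null_frac:.0%}")
    # Duplicate detection on the (assumed) natural key
    dupes = df.duplicated(subset=["site", "material", "timestamp"]).sum()
    if dupes:
        failures.append(f"{dupes} duplicate rows on (site, material, timestamp)")
    return failures
```

A runner like this plugs directly into a pytest suite or a CI gate: the pipeline step fails if the returned list is non-empty.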
Model Training & Evaluation
- Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
- Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic.
- Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
- Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.
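A hedged sketch of the metric-verification gate described above: a MAPE computation plus a champion/challenger acceptance check. The threshold values are illustrative assumptions, not the real promotion criteria:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def passes_promotion_gate(challenger_mape, champion_mape,
                          abs_threshold=0.15, min_uplift=0.0):
    """Promote the challenger only if it meets the absolute MAPE gate
    and is at least as accurate as the reigning champion."""
    return (challenger_mape <= abs_threshold
            and champion_mape - challenger_mape >= min_uplift)
```

In practice the gate result would be recorded as a signed test artifact alongside the model in the registry, as the bullets above describe.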
Predictions, Optimization & Guardrails
- Validate batch predictions: result shapes, coverage, latency, and failure handling.
- Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
- Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).
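The idempotency-key and retry/backoff pattern referenced above can be sketched as follows; `send` stands in for the real third-party delivery call, which is not specified in the posting:

```python
import time
import uuid

def push_with_retry(send, payload, max_attempts=3, base_delay=0.01):
    """Deliver `payload` via `send`, retrying with exponential backoff.
    One idempotency key is generated up front and reused on every attempt,
    so the receiving system can deduplicate repeated deliveries."""
    key = str(uuid.uuid4())
    for attempt in range(max_attempts):
        try:
            return send(payload, idempotency_key=key)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** attempt)
```

A test can verify the key-reuse property by injecting a flaky `send` and asserting every attempt carried the same key.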
Pipelines & E2E
- Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization), including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
- Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, asserting freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).
Automation & Tooling
- Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
- Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
- Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).
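A minimal sketch of the kind of synthetic test data generator described above, covering two of the named edge cases (a missing slice and a competitor shock); the shape and magnitudes of the real fixtures are assumed, not taken from the actual system:

```python
import numpy as np

def synth_price_series(n=30, gap_at=None, shock_at=None, seed=0):
    """Synthetic daily price series for fault-injection tests: a random walk
    around 100 with an optional missing slice (NaN) and an optional
    competitor-shock step change."""
    rng = np.random.default_rng(seed)
    prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, n))
    if shock_at is not None:
        prices[shock_at:] -= 10.0  # sudden competitor undercut
    if gap_at is not None:
        prices[gap_at] = np.nan    # simulate a missing data slice
    return prices
```

Fixing the seed keeps the fixture deterministic, so the same edge case reproduces identically across CI runs.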
Reporting & Quality Ops
- Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
- Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.
Required Skills (hands-on experience in the following):
- Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
- Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
- Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
- API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) plus idempotency and retry patterns.
- Familiar with feature stores/feature engineering concepts and reproducibility.
- Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
- Certification in Azure Data or ML Engineer Associate is a plus.
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.
Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
- Opportunity to work on diverse projects for Fortune 500 clients.
- Competitive salary and performance-based growth.
- Dynamic, collaborative, and growth-oriented work environment.
- Direct impact on product quality and client satisfaction.
- 5-day hybrid work week.
- Certification reimbursement.
- Healthcare coverage.
How to Apply:
Interested candidates are invited to submit their resume detailing their experience. Please detail out your work experience and the kind of projects you have worked on. Ensure you highlight your contributions and accomplishments to the projects.
What We’re Looking For
- 3-5 years of Data Science & ML experience in consumer internet / B2C products.
- Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
- Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
- Statistical chops: finding meaningful insights in large data sets.
- Programming ninja: R, Python, SQL + hands-on with NumPy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
- Visualization skills: Redshift, Tableau, Looker, or similar.
- A strong problem-solver with curiosity hardwired into your DNA.
Brownie Points
- Experience with big data platforms: Hadoop, Spark, Hive, Pig.
- Extra love if you’ve played with BI tools like Tableau or Looker.
Domain - Credit risk / Fintech
Roles and Responsibilities:
1. Development, validation, and monitoring of Application and Behaviour scorecards for the retail loan portfolio
2. Improvement of collection efficiency through advanced analytics
3. Development and deployment of fraud scorecards
4. Upsell/cross-sell strategy implementation using analytics
5. Create modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)
6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.
7. Experience with API tools such as REST, Swagger, and Postman
8. Model deployment in AWS and management of the production environment
9. Team player who can work with cross-functional teams to gather data and derive insights
Mandatory Technical skill set :
1. Previous experience in scorecard development and credit risk strategy development
2. Python and Jenkins
3. Logistic regression, Scorecard, ML and neural networks
4. Statistical analysis and A/B testing
5. AWS SageMaker, S3, EC2, Docker
6. REST API, Swagger and Postman
7. Excel
8. SQL
9. Visualisation tools such as Redash / Grafana
10. Bitbucket, GitHub, and other versioning tools
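As background for the scorecard items above, the standard points-to-double-odds (PDO) scaling that maps a logistic model's default probability onto scorecard points can be sketched as follows; the base score, base odds, and PDO values are illustrative defaults, not this lender's actual calibration:

```python
import math

def to_score(prob_bad, base_score=600, base_odds=50, pdo=20):
    """Convert a model's default probability into scorecard points using
    PDO scaling: `base_score` points at `base_odds` good:bad odds,
    plus `pdo` additional points each time the odds double."""
    odds_good = (1 - prob_bad) / prob_bad
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * math.log(odds_good)
```

With these defaults, a borrower at 50:1 good:bad odds scores exactly 600, and each doubling of the odds adds 20 points.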
About Us: The Next Generation of WealthTech
We're Cambridge Wealth, an award-winning force in mutual fund distribution and Fintech. We're not just moving money; we're redefining wealth management for everyone from retail investors to ultra-HNIs (including the NRI segment). Our brand is synonymous with excellence, backed by accolades from the BSE and top Mutual Fund houses.
If you thrive on building high-performance, scalable systems that drive real-world financial impact, you'll feel right at home. Join us in Pune to build the future of finance.
[Learn more: www.cambridgewealth.in]
The Role: Engineering Meets Team Meets Customer
We're looking for an experienced, hands-on Tech Catalyst to accelerate our product innovation. This isn't just a coding job; it's a chance to blend deep backend expertise with product strategy. You will be the engine driving rapid, data-driven product experiments, leveraging AI and Machine Learning to create smart, personalized financial solutions. You'll lead by example, mentoring a small, dedicated team and ensuring technical excellence and rapid deployment in the high-stakes financial domain.
Key Impact Areas: Ship Fast, Break Ground
1. Backend & AI/ML Innovation
- Rapid Prototyping: Design and execute quick, iterative experiments to validate new features and market hypotheses, moving from concept to production in days, not months.
- AI-Powered Features: Build scalable Python-based backend services that integrate AI/ML models to enhance customer profiling, portfolio recommendation, and risk analysis.
- System Architecture: Own the performance, stability, and scalability of our core fintech platform, implementing best practices in modern backend development.
2. Product Leadership & Execution
- Agile Catalyst: Drive and optimize Agile sprints, ensuring clear technical milestones, efficient resource allocation, and backlog grooming, while maintaining a laser focus on preventing scope creep.
- Mentorship & Management: Provide technical guidance and mentorship to a team of developers, fostering a culture of high performance, code quality, and continuous learning.
- Domain Alignment: Translate complex financial requirements and market insights into precise, actionable technical specifications and seamless user stories.
- Problem Solver: Proactively identify and resolve technical and process bottlenecks, acting as the ultimate problem solver for the engineering and product teams.
3. Financial Domain Expertise
- High-Value Delivery: Apply deep knowledge of the mutual fund and broader fintech landscape to inform product decisions, ensuring our solutions are compliant, competitive, and truly valuable to our clients.
- Risk & Security: Proactively architect solutions with security and financial risk management baked in from the ground up, protecting client data and assets.
Your Tech Stack & Experience
The Must-Haves
- Mindset: A verifiable track record as a proactive First Principle Problem Solver with an intense Passion to Ship production-ready features frequently.
- Customer Empathy: Keeps the customer's experience in mind at all times.
- Team Leadership: Experience in leading, mentoring, or managing a small development team, driving technical excellence and project delivery.
- Systems Thinker: Diagnoses and solves problems by viewing the organization as an interconnected system to anticipate broad impacts and develop holistic, strategic solutions.
- Backend Powerhouse: 2+ years of professional experience with a strong focus on backend development.
- Python Guru: Expert proficiency in Python and related frameworks (e.g., Django, Flask) for building robust, scalable APIs and services.
- AI/ML Integration: Proven ability to leverage and integrate AI/ML models into production-level applications.
- Data Driven: Expert in SQL for complex data querying, analysis, and ETL processes.
- Financial Domain Acumen: Strong, demonstrable knowledge of financial products, especially mutual funds, wealth management, and key fintech metrics.
Nice-to-Haves
- Experience with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- Familiarity with Zoho Analytics, Zoho CRM and Zoho Deluge
- Familiarity with modern data analysis tools and visualization platforms (e.g., Mixpanel, Tableau, or custom dashboard tools).
- Understanding of Mutual Fund, AIF, PMS operations
Ready to Own the Backend and Shape Finance?
This is where your code meets the capital market. If you’re a Fintech-savvy Python expert ready to lead a team and build a scalable platform in Pune, we want to talk.
Apply now to join our award-winning, forward-thinking team.
Our High-Velocity Hiring Process:
- You Apply & Engage: Quick application and a few insightful questions. (5 min)
- Online Tech Challenge: Prove your tech mettle. (90 min)
- People Sync: A focused call to understand if there is cultural and value alignment. (30 min)
- Deep Dive Technical Interview: Discuss architecture and projects with our senior engineers. (1 hour)
- Founder's Vision Interview: Meet the leadership and discuss your impact. (1 hour)
- Offer & Onboarding: Reference and BGV check follow the successful offer.
What are you building right now that you're most proud of?
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments.
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
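As a rough illustration of the RAG retrieval step named in the responsibilities above, here is a toy sketch that stands in bag-of-words vectors and cosine similarity for the learned embeddings and vector database a production pipeline would use; the documents are invented examples:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real pipelines use a learned model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / ((na * nb) or 1.0)

docs = [
    "Applicant income statement for the last three months",
    "Policy on collateral valuation for retail loans",
    "OCR output of the scanned insurance claim form",
]

def retrieve(query, k=1):
    """Rank the corpus by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# The retrieved context is then stitched into the prompt sent to the LLM:
context = retrieve("collateral valuation policy")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(context)
```

The same shape scales up directly: swap `embed` for a transformer encoder and `retrieve` for a vector-database query, and the grounding-the-LLM step stays identical.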
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
IMP: Please read through before applying!
Nature of role: Full-time; On-site
Location: Thiruvanmiyur, Chennai
Responsibilities:
Build and manage automation workflows using n8n, Make (Integromat), Zapier, or custom APIs.
Integrate tools across JugaadX, WhatsApp, Shopify, Meta, Google Workspace, CRMs, and internal systems.
Develop and maintain scalable, modular automation systems with clear documentation.
Integrate and experiment with AI tools and APIs such as OpenAI, Gemini, Claude, HeyGen, Runway, etc.
Create intelligent workflows — from chatbots and lead scorers to content generators and auto-responders.
Manage cloud infrastructure (VPS, Docker, SSL, security) for automations and dashboards.
Identify repetitive tasks and convert them into reliable automated processes.
Build centralized dashboards and automated reports for teams and clients.
Stay up-to-date with the latest in AI, automation, and LLM technologies, and bring new ideas to life within Jugaad’s ecosystem.
Requirements:
Hands-on experience with n8n, Make, or Zapier (or similar tools).
Familiarity with OpenAI, Gemini, HuggingFace, ElevenLabs, HeyGen, and other AI platforms.
Working knowledge of JavaScript and basic Python for API scripting.
Strong understanding of REST APIs, webhooks, and authentication.
Experience with Docker, VPS (AWS/DigitalOcean), and server management.
Proficiency with Google Sheets, Airtable, JSON, and basic SQL.
Clear communication and documentation skills — able to explain technical systems simply.
Who You Are:
A self-starter who loves automation, optimization, and innovation.
Comfortable building end-to-end tech solutions independently.
Excited to collaborate across creative, marketing, and tech teams.
Always experimenting with new AI tools and smarter ways to work.
Obsessed with efficiency, scalability, and impact — you love saving time and getting more done with less.
What You Get:
A strategic and hands-on role at the intersection of AI, automation, and operations.
The chance to shape the tech backbone of Jugaad and influence how we work, scale, and innovate.
Freedom to experiment, build, and deploy your ideas fast.
A young, fast-moving team where your work directly drives impact and growth.
Role Overview
We are looking for a highly skilled and intellectually curious Senior Data Scientist with 7+ years of experience in applying advanced machine learning and AI techniques to solve complex business problems. The ideal candidate will have deep expertise in Classical Machine Learning, Deep Learning, Natural Language Processing (NLP), and Generative AI (GenAI), along with strong hands-on coding skills and a proven track record of delivering impactful data science solutions. This role requires a blend of technical excellence, business acumen, and collaborative mindset.
Key Responsibilities
- Design, develop, and deploy ML models using classical algorithms (e.g., regression, decision trees, ensemble methods) and deep learning architectures (CNNs, RNNs, Transformers).
- Build NLP solutions for tasks such as text classification, entity recognition, summarization, and conversational AI.
- Develop and fine-tune GenAI models for use cases like content generation, code synthesis, and personalization.
- Architect and implement Retrieval-Augmented Generation (RAG) systems for enhanced contextual AI applications.
- Collaborate with data engineers to build scalable data pipelines and feature stores.
- Perform advanced feature engineering and selection to improve model accuracy and robustness.
- Work with large-scale structured and unstructured datasets using distributed computing frameworks.
- Translate business problems into data science solutions and communicate findings to stakeholders.
- Present insights and recommendations through compelling storytelling and visualization.
- Mentor junior data scientists and contribute to internal knowledge sharing and innovation.
Required Qualifications
- 7+ years of experience in data science, machine learning, and AI.
- Strong academic background in Computer Science, Statistics, Mathematics, or related field (Master’s or PhD preferred).
- Proficiency in Python, SQL, and ML libraries (scikit-learn, TensorFlow, PyTorch, Hugging Face).
- Experience with NLP and GenAI tools (e.g., Azure AI Foundry, Azure AI Studio, GPT, LLaMA, LangChain).
- Hands-on experience with Retrieval-Augmented Generation (RAG) systems and vector databases.
- Familiarity with cloud platforms (Azure preferred, AWS/GCP acceptable) and MLOps tools (MLflow, Airflow, Kubeflow).
- Solid understanding of data structures, algorithms, and software engineering principles.
- Experience with Azure, Azure Copilot Studio, and Azure Cognitive Services
- Experience with Azure AI Foundry would be a strong added advantage
Preferred Skills
- Exposure to LLM fine-tuning, prompt engineering, and GenAI safety frameworks.
- Experience in domains such as finance, healthcare, retail, or enterprise SaaS.
- Contributions to open-source projects, publications, or patents in AI/ML.
Soft Skills
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement abilities.
- Ability to work independently and collaboratively in cross-functional teams.
- Passion for continuous learning and innovation.
We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.
You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.
Responsibilities
• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)
• Develop backend services and APIs using Python (FastAPI/Flask)
• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma)
• Implement embeddings, prompt flows, and conversation logic
• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs
• Ensure system reliability, performance, and scalability
• Work directly with the founder in shaping the product and roadmap
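The "embeddings, prompt flows, and conversation logic" responsibility above can be sketched framework-free; this minimal example assembles a message list in the common OpenAI-style chat schema, and the simple keep-the-last-N-turns trimming rule is an assumed simplification of real context-budget management:

```python
def build_messages(system, history, user_msg, max_turns=3):
    """Assemble an OpenAI-style chat message list, keeping only the most
    recent turns so the prompt stays inside the model's context budget."""
    recent = history[-2 * max_turns:]  # each turn = one user + one assistant message
    return [{"role": "system", "content": system}, *recent,
            {"role": "user", "content": user_msg}]

history = [
    {"role": "user", "content": "What plans do you offer?"},
    {"role": "assistant", "content": "Basic and Pro."},
]
msgs = build_messages("You are a support bot.", history, "How much is Pro?")
print(len(msgs))  # system + 2 kept history messages + new user message = 4
```

In the real engine, `msgs` would be handed to the LLM client, and the trimming rule would typically be token-based rather than turn-based.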
Requirements
• Strong experience with LLMs & Generative AI
• Excellent Python skills with FastAPI/Flask
• Hands-on experience with LangChain or RAG architectures
• Vector database experience (Pinecone/FAISS/Chroma)
• Strong understanding of REST APIs and backend development
• Ability to work independently, experiment fast, and deliver clean code
Nice to Have
• Experience with cloud (AWS/GCP)
• Node.js knowledge
• LangGraph, LlamaIndex
• MLOps or deployment experience
Full Stack Engineer
Position Description
Responsibilities
• Take design mockups provided by UX/UI designers and translate them into web pages or applications using HTML and CSS. Ensure that the design is faithfully replicated in the final product.
• Develop enabling frameworks and application E2E and enhance with data analytics and AI enablement
• Ensure effective Design, Development, Validation and Support activities in line with the Customer needs, architectural requirements, and ABB Standards.
• Support ABB business units through consulting engagements.
• Develop and implement machine learning models to solve specific business problems, such as predictive analytics, classification, and recommendation systems
• Perform exploratory data analysis, clean and preprocess data, and identify trends and patterns.
• Evaluate the performance of machine learning models and fine-tune them for optimal results.
• Create informative and visually appealing data visualizations to communicate findings and insights to non-technical stakeholders.
• Conduct statistical analysis, hypothesis testing, and A/B testing to support decision-making processes.
• Define the solution, Project plan, identifying and allocation of team members, project tracking; Work with data engineers to integrate, transform, and store data from various sources.
• Collaborate with cross-functional teams, including business analysts, data engineers, and domain experts, to understand business objectives and develop data science solutions.
• Prepare clear and concise reports and documentation to communicate results and methodologies.
• Stay updated with the latest data science and machine learning trends and techniques.
• Familiarity with ML Model Deployment as REST APIs.
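The hypothesis testing and A/B testing responsibility above often reduces to a two-proportion z-test on conversion counts between a control and a variant. A minimal sketch; the traffic and conversion numbers below are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion experiment.
    Returns the z statistic; |z| > 1.96 is significant at the 5% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 5.0% vs 6.5% conversion on 2,400 users per arm:
z = two_proportion_z(120, 2400, 156, 2400)
print(round(z, 2))  # 2.23 -> significant at the 5% level
```

In practice this sits behind an experimentation framework that also checks sample-size requirements and guardrail metrics before a variant ships.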
Background
• Engineering graduate / Master's degree with strong exposure to data science, from a reputed institution
• Create responsive web designs that adapt to different screen sizes and devices using media queries and responsive design techniques.
• Write and maintain JavaScript code to add interactivity and dynamic functionality to web pages. This may include user input handling, form validation, and basic animations.
• Familiarity with front-end JavaScript libraries and frameworks such as React, Angular, or Vue.js. Depending on the projects, you may be responsible for working within these frameworks
• At least 6 years' experience in AI/ML concepts; Python preferred, with knowledge of deep learning frameworks like PyTorch and TensorFlow
• Domain knowledge of manufacturing/process industries, physics, and first-principles-based analysis
• Analytical thinking for translating data into meaningful insights that can be consumed by ML models for training and prediction
• Should be able to deploy Model using Cloud services like Azure Databricks or Azure ML Studio. Familiarity with technologies like Docker, Kubernetes and MLflow is good to have.
• Agile development of customer centric prototypes or ‘Proof of Concepts’ for focused digital solutions
• Good communication skills; must be able to discuss requirements effectively with client teams and with internal teams.
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
Review Criteria
- Strong DevOps /Cloud Engineer Profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
Job Details
- Job Title: ML Engineer II - AWS, AWS Cloud
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description:
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
- Model Development: Work with algorithms and architectures spanning traditional statistical methods to deep learning, including LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and Neo4j graph.
- Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments; knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
AWS, AWS Cloud, Amazon Redshift, EKS
NP: Immediate – 30 Days
🎯 Ideal Candidate Profile:
This role requires a seasoned engineer/scientist with a strong academic background from a premier institution and significant hands-on experience in deep learning (specifically image processing) within a hardware or product manufacturing environment.
📋 Must-Have Requirements:
Experience & Education Combinations:
Candidates must meet one of the following criteria:
- Doctorate (PhD) + 2 years of related work experience
- Master's Degree + 5 years of related work experience
- Bachelor's Degree + 7 years of related work experience
Technical Skills:
- Minimum 5 years of hands-on experience in all of the following:
- Python
- Deep Learning (DL)
- Machine Learning (ML)
- Algorithm Development
- Image Processing
- 3.5 to 4 years of strong proficiency with PyTorch OR TensorFlow / Keras.
Industry & Institute:
- Education: Must be from a premier institute (IIT, IISC, IIIT, NIT, BITS) or a recognized regional tier 1 college.
- Industry: Current or past experience in a Product, Semiconductor, or Hardware Manufacturing company is mandatory.
- Preference: Candidates from engineering product companies are strongly preferred.
ℹ️ Additional Role Details:
- Interview Process: 3 technical rounds followed by 1 HR round.
- Work Model: Hybrid (requiring 3 days per week in the office).
📝 Required Skills and Competencies:
💻 Programming & ML Prototyping:
- Strong Proficiency: Python, Data Structures, and Algorithms.
- Hands-on Experience: NumPy, Pandas, Scikit-learn (for ML prototyping).
🤖 Machine Learning Frameworks:
- Core Concepts: Solid understanding of:
- Supervised/Unsupervised Learning
- Regularization
- Feature Engineering
- Model Selection
- Cross-Validation
- Ensemble Methods: Experience with models like XGBoost and LightGBM.
🧠 Deep Learning Techniques:
- Frameworks: Proficiency with PyTorch OR TensorFlow / Keras.
- Architectures: Knowledge of:
- Convolutional Neural Networks (CNNs)
- Recurrent Neural Networks (RNNs)
- Long Short-Term Memory networks (LSTMs)
- Transformers
- Attention Mechanisms
- Optimization: Familiarity with optimization techniques (e.g., Adam, SGD), Dropout, and Batch Normalization.
💬 LLMs & RAG (Retrieval-Augmented Generation):
- Hugging Face: Experience with the Transformers library (tokenizers, embeddings, model fine-tuning).
- Vector Databases: Familiarity with Milvus, FAISS, Pinecone, or ElasticSearch.
- Advanced Techniques: Proficiency in:
- Prompt Engineering
- Function/Tool Calling
- JSON Schema Outputs
🛠️ Data & Tools:
- Data Management: SQL fundamentals; exposure to data wrangling and pipelines.
- Tools: Experience with Git/GitHub, Jupyter, and basic Docker.
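The function/tool calling and JSON Schema Outputs items above fit together in one pattern: the model emits a JSON tool call, and the application validates it against a schema before executing anything. A minimal sketch; the tool name and fields here are hypothetical, and a production system would use a full JSON-Schema validator:

```python
import json

# Hypothetical tool definition in the JSON-Schema style used by most
# function-calling APIs; the name and fields are illustrative only.
TOOL_SCHEMA = {
    "name": "get_loan_status",
    "parameters": {
        "required": ["application_id"],
        "properties": {"application_id": {"type": "string"}},
    },
}

def validate_call(raw):
    """Minimal check of a model-emitted tool call against the schema:
    required keys must be present and string fields must be strings."""
    call = json.loads(raw)
    params = TOOL_SCHEMA["parameters"]
    missing = [k for k in params["required"] if k not in call]
    wrong = [k for k, v in params["properties"].items()
             if k in call and v["type"] == "string"
             and not isinstance(call[k], str)]
    return not missing and not wrong

print(validate_call('{"application_id": "APP-1042"}'))  # True
print(validate_call('{"application_id": 1042}'))        # False
```

Only calls that pass validation are dispatched to the real tool; rejected calls are fed back to the model for correction, which is the loop that makes structured outputs dependable.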
🎓 Minimum Qualifications (Experience & Education Combinations):
Candidates must have experience building AI systems/solutions with Machine Learning, Deep Learning, and LLMs, meeting one of the following criteria:
- Doctorate (Academic) Degree + 2 years of related work experience.
- Master's Level Degree + 5 years of related work experience.
- Bachelor's Level Degree + 7 years of related work experience.
⭐ Preferred Traits and Mindset:
- Academic Foundation: Solid academic background with strong applied ML/DL exposure.
- Curiosity: Eagerness to learn cutting-edge AI and willingness to experiment.
- Communication: Clear communicator who can explain ML/LLM trade-offs simply.
- Ownership: Strong problem-solving and ownership mindset.
📍Company: Versatile Commerce
📍 Position: Data Scientists
📍 Experience: 3-9 yrs
📍 Location: Hyderabad (WFO)
📅 Notice Period: 0- 15 Days
What You’ll Be Doing:
● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines and platforms.
● Lead and mentor a team of data engineers while establishing engineering best practices, coding standards, and governance models.
● Design and implement high-performance ETL/ELT pipelines using modern Big Data technologies for diverse internal and external data sources.
● Drive modernization initiatives including re-architecting legacy systems to support next-generation data products, ML workloads, and analytics use cases.
● Partner with Product, Engineering, and Business teams to translate requirements into robust technical solutions that align with organizational priorities.
● Champion data quality, monitoring, metadata management, and observability across the ecosystem.
● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and infrastructure scalability.
● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows, and cloud-based architecture improvements.
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 8+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Demonstrated ability to design, review, and optimize scalable data architectures across ingestion.
● Strong coding skills with Scala and Python, and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Strong understanding of Delta Lake architecture and working with Parquet, JSON, CSV, and similar formats.
● Experience establishing and enforcing data engineering best practices, including CI/CD for data, orchestration and automation, and metadata management.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Demonstrated ability to operate independently, take ownership of deliverables, and lead technical decisions.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
REPORTING: This position will report to Sr. Technical Manager or Director of Engineering as assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
SHIFT TIMINGS: 10:00 AM - 07:00 PM IST
Hi Shanon, as discussed, we have one open req for an AI role. The right skill set I am looking for is experience with RAG pipelines, plus performance monitoring and tuning of prompts. Multi-modal RAG and agentic AI will come soon, but not right away. Needless to say, Python and NLP library experience is a must. Database knowledge is also essential. The developer needs to be in Morgan Stanley offices 3 days per week. BLR location is preferred; if not, Mumbai is also fine. The duration is long term, since we are looking to expand the use cases. Please let me know if you have any questions.