50+ Machine Learning (ML) Jobs in Bangalore (Bengaluru)
Apply to 50+ Machine Learning (ML) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Machine Learning (ML) Job opportunities across top companies like Google, Amazon & Adobe.
Senior Machine Learning Engineer
About the Role
We are looking for a Senior Machine Learning Engineer who can take business problems, design appropriate machine learning solutions, and make them work reliably in production environments.
This role is ideal for someone who not only understands machine learning models, but also knows when and how ML should be applied, what trade-offs to make, and how to take ownership from problem understanding to production deployment.
Beyond technical skills, we need someone who can lead a team of ML Engineers, design end-to-end ML solutions, and clearly communicate decisions and outcomes to both engineering teams and business stakeholders. If you enjoy solving real problems, making pragmatic decisions, and owning outcomes from idea to deployment, this role is for you.
What You’ll Be Doing
Building and Deploying ML Models
- Design, build, evaluate, deploy, and monitor machine learning models for real production use cases.
- Take ownership of how a problem is approached, including deciding whether ML is the right solution and what type of ML approach fits the problem.
- Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
- Work with data engineers to design and validate data pipelines that feed ML systems.
- Optimize solutions for accuracy, performance, cost, and maintainability, not just model metrics.
Leading and Architecting ML Solutions
- Lead a team of ML Engineers, providing technical direction, mentorship, and review of ML approaches.
- Architect ML solutions that integrate seamlessly with business applications and existing systems.
- Ensure models and solutions are explainable, auditable, and aligned with business goals.
- Drive best practices in MLOps, including CI/CD, model monitoring, retraining strategies, and operational readiness.
- Set clear standards for how ML problems are framed, solved, and delivered within the team.
Collaborating and Communicating
- Work closely with business stakeholders to understand problem statements, constraints, and success criteria.
- Translate business problems into clear ML objectives, inputs, and expected outputs.
- Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML solutions into production systems.
- Present ML decisions, trade-offs, and outcomes to non-technical stakeholders in a simple and understandable way.
What We’re Looking For
Machine Learning Expertise
- Strong understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
- Experience choosing appropriate modeling approaches based on the problem, available data, and business constraints.
- Experience training, fine-tuning, and deploying ML and LLM models for real-world use cases.
- Proficiency in common ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.
Production and Cloud Deployment
- Hands-on experience deploying and running ML systems in production environments on AWS, GCP, or Azure.
- Good understanding of MLOps practices, including CI/CD for ML models, monitoring, and retraining workflows.
- Experience with Docker, Kubernetes, or serverless architectures is a plus.
- Ability to think beyond deployment and consider operational reliability and long-term maintenance.
Data Handling
- Strong programming skills in Python.
- Proficiency in SQL and working with large-scale datasets.
- Ability to reason about data quality, data limitations, and how they impact ML outcomes.
- Familiarity with distributed computing frameworks like Spark or Dask is a plus.
Leadership and Communication
- Ability to lead and mentor ML Engineers and work effectively across teams.
- Strong communication skills to explain ML concepts, decisions, and limitations to business teams.
- Comfortable taking ownership and making decisions in ambiguous problem spaces.
- Passion for staying updated with advancements in ML and AI, with a practical mindset toward adoption.
Experience Needed
- 6+ years of experience in machine learning engineering or related roles.
- Proven experience designing, selecting, and deploying ML solutions used in production.
- Experience managing ML systems after deployment, including monitoring and iteration.
- Proven track record of working in cross-functional teams and leading ML initiatives.
Job Description:
Experience Range: 6–10 years
Qualifications:
- Minimum Bachelor's degree in Engineering, Computer Applications, or AI/Data Science.
- Experience working in product companies/startups on developing, validating, and productionizing AI models in recent projects within the last 3 years.
- Prior experience with Python, NumPy, scikit-learn, pandas, ETL/SQL, and BI tools in previous roles preferred.
Require Skills:
- Must Have – Direct hands-on experience working in Python for scripting, automation, analysis, and orchestration
- Must Have – Experience working with ML libraries such as Scikit-learn, TensorFlow, PyTorch, Pandas, NumPy, etc.
- Must Have – Experience working with models such as Random Forest, K-means clustering, BERT, etc.
- Should Have – Exposure to querying warehouses and APIs
- Should Have – Experience with writing moderate to complex SQL queries
- Should Have – Experience analyzing and presenting data with BI tools or Excel
- Must Have – Very strong communication skills to work with technical and non-technical stakeholders in a global environment
Roles and Responsibilities:
- Work with Business stakeholders, Business Analysts, Data Analysts to understand various data flows and usage.
- Analyse and present insights about the data and processes to Business Stakeholders
- Validate and test appropriate AI/ML models based on the prioritization and insights developed while working with the Business Stakeholders
- Develop and deploy customized models on Production data sets to generate analytical insights and predictions
- Participate in cross-functional team meetings and provide estimates of work as well as progress on assigned tasks.
- Highlight risks and challenges to the relevant stakeholders so that work is delivered in a timely manner.
- Share knowledge and best practices with broader teams to make everyone aware and more productive.
About Us
Mobileum is a leading provider of Telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence.
More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs.
Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore and the UK, with a global headcount of over 1,800.
Join Mobileum Team
At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!
Role: GenAI/LLM Engineer – Domain-Specific AI Solutions (Telecom)
About the Job
We are seeking a highly skilled GenAI/LLM Engineer to design, fine-tune, and operationalize Large Language Models (LLMs) for telecom business applications. This role will be instrumental in building domain-specific GenAI solutions, including the development of domain-specific LLMs, to transform telecom operational processes, customer interactions, and internal decision-making workflows.
Roles & Responsibility:
- Build domain-specific LLMs by curating domain-relevant datasets and training/fine-tuning LLMs tailored for telecom use cases.
- Fine-tune pre-trained LLMs (e.g., GPT, Llama, Mistral) using telecom-specific datasets to improve task accuracy and relevance.
- Design and implement prompt engineering frameworks, optimize prompt construction and context strategies for telco-specific queries and processes.
- Develop Retrieval-Augmented Generation (RAG) pipelines integrated with vector databases (e.g., FAISS, Pinecone) to enhance LLM performance on internal knowledge (a minimal retrieval sketch follows this list).
- Build multi-agent LLM pipelines using orchestration tools (LangChain, LlamaIndex) to support complex telecom workflows.
- Collaborate cross-functionally with data engineers, product teams, and domain experts to translate telecom business logic into GenAI workflows.
- Conduct systematic model evaluation focused on minimizing hallucinations, improving domain-specific accuracy, and tracking performance improvements on business KPIs.
- Contribute to the development of internal reusable GenAI modules, coding standards, and best practices documentation.
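For illustration only, and not a description of Mobileum's actual stack: a minimal retrieval step of the kind used in RAG pipelines, assuming the sentence-transformers all-MiniLM-L6-v2 embedder, an in-memory FAISS index, and a toy list of telecom snippets; the composed prompt would then be sent to whichever LLM (GPT, Llama, Mistral, etc.) the team has chosen.

    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Toy "internal knowledge" corpus; in practice this would be curated telecom documents.
    docs = [
        "Roaming charges apply when a subscriber attaches to a visited network.",
        "A call detail record (CDR) captures caller, callee, duration and cell ID.",
        "SIM-box fraud routes international calls over local SIM cards.",
    ]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")           # small general-purpose embedder
    doc_vecs = encoder.encode(docs, normalize_embeddings=True)  # (n_docs, dim) float array

    index = faiss.IndexFlatIP(doc_vecs.shape[1])                # inner product = cosine on normalized vectors
    index.add(np.asarray(doc_vecs, dtype="float32"))

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = encoder.encode([query], normalize_embeddings=True)
        _, ids = index.search(np.asarray(q, dtype="float32"), k)
        return [docs[i] for i in ids[0]]

    def build_prompt(query: str) -> str:
        context = "\n".join(retrieve(query))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    print(build_prompt("What is a CDR?"))  # this prompt would be passed to the chosen LLM

A production pipeline would add document chunking, metadata filtering, hallucination-focused evaluation, and possibly a managed vector store such as Pinecone.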
Desired Profile
- Familiarity with multi-modal LLMs (text + tabular/time-series).
- Experience with OpenAI function calling, LangGraph, or agent-based orchestration.
- Exposure to telecom datasets (e.g., call records, customer tickets, network logs).
- Experience with low-latency inference optimization (e.g., quantization, distillation).
Technical skills
- Hands-on experience in fine-tuning transformer models, prompt engineering, and RAG architecture design.
- Experience delivering production-ready AI solutions in enterprise environments; telecom exposure is a plus.
- Advanced knowledge of transformer architectures, fine-tuning techniques (LoRA, PEFT, adapters), and transfer learning (see the LoRA sketch after this list).
- Proficiency in Python, with significant experience using PyTorch, Hugging Face Transformers, and related NLP libraries.
- Practical expertise in prompt engineering, RAG pipelines, and LLM orchestration tools (LangChain, LlamaIndex).
- Ability to build domain-adapted LLMs, from data preparation to final model deployment.
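As a rough sketch of the LoRA-style fine-tuning listed above, the Hugging Face peft library can wrap a causal language model so that only small low-rank adapter matrices are trained; the checkpoint name and hyperparameters below are placeholder assumptions, not values specified by this role.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM checkpoint the team can access
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

    lora_cfg = LoraConfig(
        r=8,                                   # rank of the low-rank update matrices
        lora_alpha=16,                         # scaling applied to the update
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()  # typically well under 1% of the total weights
    # The wrapped model can then be trained with transformers.Trainer or a custom loop
    # on the curated telecom dataset, and the adapter weights saved separately.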
Work Experience
7+ years of professional experience in AI/ML, with at least 2+ years of practical exposure to LLMs or GenAI deployments.
Educational Qualification
- Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, or a related field.
- Ph.D. preferred for foundational model work and advanced research focus.
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI Engineer (Voice) with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines (a simplified sketch follows this list).
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents.
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization)
- Implement voice understanding modules, speech intent extraction, and context tracking.
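A deliberately simplified, non-streaming sketch of one speech-to-LLM-to-speech turn, assuming openai-whisper for STT and pyttsx3 for TTS; generate_reply is a stub for whichever LLM API is actually used, and a production pipeline would stream audio frames and tokens rather than process whole files.

    import whisper   # openai-whisper
    import pyttsx3

    stt_model = whisper.load_model("base")   # small Whisper checkpoint for transcription

    def transcribe(path: str) -> str:
        return stt_model.transcribe(path)["text"]

    def generate_reply(user_text: str) -> str:
        # Placeholder: call the chosen LLM (hosted API or local model) here.
        return f"You said: {user_text}"

    def speak(text: str) -> None:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

    if __name__ == "__main__":
        user_text = transcribe("caller_turn.wav")   # hypothetical recorded caller audio
        speak(generate_reply(user_text))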
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text + voice workflows
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation.
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Awareness of financial compliance standards (plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Realtime STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
Machine Learning Engineer | 3+ Years | Mumbai (Onsite)
Location: Ghansoli, Mumbai
Work Mode: Onsite | 5 days working
Notice Period: Immediate to 30 Days preferred
About the Role
We are hiring a Machine Learning Engineer with 3+ years of experience to build and deploy prediction, classification, and recommendation models. You’ll work on end-to-end ML pipelines and production-grade AI systems.
Must-Have Skills
- 3+ years of hands-on ML experience
- Strong Python (Pandas, NumPy, Scikit-learn, TensorFlow / PyTorch)
- Experience with feature engineering, model training & evaluation
- Hands-on with Azure ML / Azure Storage / Azure Functions
- Knowledge of modern AI concepts (embeddings, transformers, LLMs)
Good to Have
- MLOps tools (MLflow, Docker, CI/CD)
- Time-series forecasting
- Model serving using FastAPI
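A minimal sketch of the FastAPI model-serving item above, assuming a pickled scikit-learn estimator saved as model.pkl and a flat numeric feature vector; the file and field names are illustrative only.

    import pickle
    from fastapi import FastAPI
    from pydantic import BaseModel

    with open("model.pkl", "rb") as f:   # hypothetical trained model artifact
        model = pickle.load(f)

    app = FastAPI()

    class Features(BaseModel):
        values: list[float]              # flat feature vector for one sample

    @app.post("/predict")
    def predict(payload: Features) -> dict:
        pred = model.predict([payload.values])[0]
        return {"prediction": float(pred)}

    # Assuming this file is saved as serve.py, run it with: uvicorn serve:app --port 8000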
Why Join Us?
- Work on real-world ML use cases
- Exposure to modern AI & LLM-based systems
- Collaborative engineering environment
- High ownership & learning opportunities
Job Description: Applied Scientist
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
About the Role
We are seeking a highly motivated Applied Scientist to join our Data Science team. This individual will play a key role in enhancing and scaling our existing forecasting and pricing systems and developing new capabilities that support our intelligent decision-making platform.
We are looking for team members who:
● Are deeply curious and passionate about applying machine learning to real-world problems.
● Demonstrate strong ownership and the ability to work independently.
● Excel in both technical execution and collaborative teamwork.
● Have a track record of shipping products in complex environments.
What You'll Do
● Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization.
● Work with large-scale, noisy, and temporally complex datasets.
● Collaborate cross-functionally with engineering and product teams to move models from research to production.
● Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
● Contribute to the development of an AI-first platform that redefines hospitality revenue management.
Required Qualifications
● Master's or PhD in Operations Research, Industrial/Systems Engineering, Computer Science, Applied Mathematics, or a related field.
● 3-5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.
● Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
● Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
● Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
● Understanding of machine learning and deep learning foundations.
● Ability to translate research into commercial solutions.
● Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
● Ability to work independently and manage projects end-to-end.
Preferred Experience
● Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
● Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
● Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
● Proven experience in collaborative product development environments, working closely with engineering and product teams.
Why LodgIQ?
● Join a fast-growing, mission-driven company transforming the future of hospitality.
● Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
● Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
● Competitive salary and performance bonuses.
● For more information, visit https://www.lodgiq.com
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development and product creation along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.
We are seeking a hands-on eCommerce Analytics & Insights Lead to help establish and scale our newly launched eCommerce business. The ideal candidate is highly data-savvy, understands eCommerce deeply, and can lead KPI definition, performance tracking, insights generation, and data-driven decision-making.
You will work closely with cross-functional teams—Buying, Marketing, Operations, and Technology—to build dashboards, uncover growth opportunities, and guide the evolution of our online channel.
Key Responsibilities
Define & Monitor eCommerce KPIs
- Set up and track KPIs across the customer journey: traffic, conversion, retention, AOV/basket size, repeat rate, etc.
- Build KPI frameworks aligned with business goals.
Data Tracking & Infrastructure
- Partner with marketing, merchandising, operations, and tech teams to define data tracking requirements.
- Collaborate with eCommerce and data engineering teams to ensure data quality, completeness, and availability.
Dashboards & Reporting
- Build dashboards and automated reports to track:
- Overall site performance
- Category & product performance
- Marketing ROI and acquisition effectiveness
Insights & Performance Diagnosis
Identify trends, opportunities, and root causes of underperformance in areas such as:
- Product availability & stock health
- Pricing & promotions
- Checkout funnel drop-offs
- Customer retention & cohort behavior
- Channel acquisition performance
Conduct:
- Cohort analysis (see the pandas sketch after this list)
- Funnel analytics
- Customer segmentation
- Basket analysis
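To make the cohort-analysis item concrete, here is one common pandas recipe for monthly retention; the file name and column names (customer_id, order_date) are assumptions about an order-level extract, not a prescribed schema.

    import pandas as pd

    orders = pd.read_csv("orders.csv", parse_dates=["order_date"])  # hypothetical order-level extract

    orders["order_month"] = orders["order_date"].dt.to_period("M")
    orders["cohort_month"] = orders.groupby("customer_id")["order_month"].transform("min")
    orders["months_since_first"] = (orders["order_month"] - orders["cohort_month"]).apply(lambda d: d.n)

    # Unique customers per cohort per month, then divide by each cohort's size in month 0.
    cohorts = (
        orders.groupby(["cohort_month", "months_since_first"])["customer_id"]
        .nunique()
        .unstack(fill_value=0)
    )
    retention = cohorts.div(cohorts[0], axis=0).round(2)
    print(retention)   # rows: acquisition cohorts, columns: months since first order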
Data-Driven Growth Initiatives
- Propose and evaluate experiments, optimization ideas, and quick wins.
- Help business teams interpret KPIs and take informed decisions.
Required Skills & Experience
- 2–5 years experience in eCommerce analytics (grocery retail experience preferred).
- Strong understanding of eCommerce metrics and analytics frameworks (Traffic → Conversion → Repeat → LTV).
- Proficiency with tools such as:
- Google Analytics / GA4
- Excel
- SQL
- Power BI or Tableau
- Experience working with:
- Digital marketing data
- CRM and customer data
- Product/category performance data
- Ability to convert business questions into analytical tasks and produce clear, actionable insights.
- Familiarity with:
- Customer journey mapping
- Funnel analysis
- Basket and behavioral analysis
- Comfortable working in fast-paced, ambiguous, and build-from-scratch environments.
- Strong communication and stakeholder management skills.
- Strong technical capability in at least one programming language: SQL or PySpark.
Good to Have
- Experience with eCommerce platforms (Shopify, Magento, Salesforce Commerce, etc.).
- Exposure to A/B testing, recommendation engines, or personalization analytics.
- Knowledge of Python/R for deeper analytics (optional).
- Experience with tracking setup (GTM, event tagging, pixel/event instrumentation).

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science
ML Intern
Hyperworks Imaging is a cutting-edge technology company based out of Bengaluru, India since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real world problems. We are rapidly growing, working with multiple companies around the world.
JOB OVERVIEW
We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future.
The ideal candidate will work on the complete ML pipeline, from extraction, transformation, and analysis of data to developing novel ML algorithms. The candidate will implement the latest research papers and work closely with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.
RESPONSIBILITIES:
- Create AI agents using the Model Context Protocol (MCP), Claude Code, DSPy, etc.
- Develop custom evals for AI agents.
- Build and maintain ML pipelines
- Optimize and evaluate ML models to ensure accuracy and performance.
- Define system requirements and integrate ML algorithms into cloud based workflows.
- Write clean, well-documented, and maintainable code following best practices
REQUIREMENTS:
- 1-3+ years of experience in data science, machine learning, or a similar role.
- Demonstrated expertise with Python, PyTorch, and TensorFlow.
- Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Physics
- Has done coursework in Linear Algebra, Probability, Image Processing, Deep Learning and Machine Learning.
- Has demonstrated experience with the Model Context Protocol (MCP), DSPy, AI agents, MLOps, etc.
WHO CAN APPLY:
Only those candidates will be considered who:
- have relevant skills and interests
- can commit full time
- Can show prior work and deployed projects
- can start immediately
Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.
SALARY DETAILS: Commensurate with experience.
JOINING DATE: Immediate
JOB TYPE: Full-time
Strong Senior Data Scientist (AI/ML/GenAI) Profile
Mandatory (Experience 1) – Must have a minimum of 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
Mandatory (Experience 2) – Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
Mandatory (Experience 3) – Must have 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
Mandatory (Experience 4) – Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Mandatory (Note): Budget for up to 5 years of experience is 25 lakhs, up to 7 years is 35 lakhs, and up to 12 years is 45 lakhs. Also, the client can pay a maximum of 30-40% more based on candidature.
This opportunity through ClanX is for Parspec (direct payroll with Parspec)
Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.
Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.
Company Details:
Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.
Requirements:
- Bachelor’s or Master’s degree in Science or Engineering.
- 5-7 years of experience in ML and data science.
- Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
- Strong understanding of foundational models and transformers.
- Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
- Recent experience working with large datasets.
- Python experience with ML libraries like numpy, pandas, sklearn, matplotlib, nltk and others.
- Experience with frameworks like Hugging Face, spaCy, BERT, TensorFlow, PyTorch, OpenRouter, or Modal.
Good to haves
- Experience building scalable AI pipelines for extracting structured data from unstructured sources.
- Experience with cloud platforms, containerization, and managed AI services.
- Knowledge of MLOps practices, CI/CD, monitoring, and governance.
- Experience with AWS or Django.
- Familiarity with databases and web application architecture.
- Experience with OCR or PDF tools.
Responsibilities:
- Design, develop, and deploy NLP, CV, and recommendation systems
- Train and implement deep learning models
- Research and explore novel ML architectures
- Build and maintain end-to-end ML pipelines
- Collaborate across product, design, and engineering teams
- Work closely with business stakeholders to shape product features
- Ensure high scalability and performance of AI solutions
- Uphold best practices in engineering and contribute to a culture of excellence
- Actively participate in R&D and innovation within the team
Interview Process
- Technical interview (coding, ML concepts, project walkthrough)
- System design and architecture round
- Culture fit and leadership interaction
- Final offer discussion
Role Overview
Join our core tech team to build the intelligence layer of Clink's platform. You'll architect AI agents, design prompts, build ML models, and create systems powering personalized offers for thousands of restaurants. High-growth opportunity working directly with founders, owning critical features from day one.
Why Clink?
Clink revolutionizes restaurant loyalty using AI-powered offer generation and customer analytics:
- ML-driven customer behavior analysis (Pattern detection)
- Personalized offers via LLMs and custom AI agents
- ROI prediction and forecasting models
- Instagram marketing rewards integration
Tech Stack:
- Python
- FastAPI
- PostgreSQL
- Redis
- Docker
- LLMs
You Will Work On:
AI Agents: Design and optimize AI agents
ML Models: Build redemption prediction, customer segmentation, ROI forecasting (a toy segmentation sketch follows this list)
Data & Analytics: Analyze data, build behavior pattern pipelines, create product bundling matrices
System Design: Architect scalable async AI pipelines, design feedback loops, implement A/B testing
Experimentation: Test different LLM approaches, explore hybrid LLM+ML architectures, prototype new capabilities
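As a toy illustration of the customer-segmentation work mentioned above (not Clink's actual model), a few hand-made per-customer features can be clustered with scikit-learn's KMeans; the features and cluster count are arbitrary examples.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical per-customer features: [visits_per_month, avg_spend, days_since_last_visit]
    X = np.array([
        [8, 450.0, 3],
        [1, 120.0, 40],
        [5, 300.0, 7],
        [0, 60.0, 90],
        [12, 700.0, 1],
    ])

    X_scaled = StandardScaler().fit_transform(X)                        # put features on a common scale
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X_scaled)
    print(kmeans.labels_)   # segment id per customer, used downstream for offer targeting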
Must-Have Skills
Technical: 0-2 years AI/ML experience (projects/internships count), strong Python, LLM API knowledge, ML fundamentals (supervised learning, clustering), Pandas/NumPy proficiency
Mindset: Extreme curiosity, logical problem-solving, builder mentality (side projects/hackathons), ownership mindset
Nice to Have: Pydantic, FastAPI, statistical forecasting, PostgreSQL/SQL, scikit-learn, food-tech/loyalty domain interest
We are looking for enthusiastic engineers passionate about building and maintaining solutioning platform components on cloud and Kubernetes infrastructure. The ideal candidate will go beyond traditional SRE responsibilities by collaborating with stakeholders, understanding the applications hosted on the platform, and designing automation solutions that enhance platform efficiency, reliability, and value.
[Technology and Sub-technology]
• ML Engineering / Modelling
• Python Programming
• GPU frameworks: TensorFlow, Keras, PyTorch, etc.
• Cloud-based ML development and deployment on AWS or Azure
[Qualifications]
• Bachelor’s Degree in Computer Science, Computer Engineering or equivalent technical degree
• Proficient programming knowledge in Python or Java and ability to read and explain open source codebase.
• Good foundation of Operating Systems, Networking and Security Principles
• Exposure to DevOps tools, with experience integrating platform components into Sagemaker/ECR and AWS Cloud environments.
• 4-6 years of relevant experience working on AI/ML projects
[Primary Skills]:
• Excellent analytical & problem solving skills.
• Exposure to Machine Learning and GenAI technologies.
• Understanding and hands-on experience with AI/ML modeling, libraries, frameworks, and tools (TensorFlow, Keras, PyTorch, etc.)
• Strong knowledge of Python, SQL/NoSQL
• Cloud-based ML development and deployment on AWS or Azure
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Job Title: Sensor Expert – MLFF (Multi-Lane Free Flow)
Engagement Type: Consultant / External Associate
Organization: Bosch - MPIN
Location: Bangalore, India
Purpose of the Role:
To provide technical expertise in sensing technologies for MLFF (Multi-Lane Free Flow) and ITMS (Intelligent Traffic Management System) solutions. The role focuses on camera systems, AI/ML based computer vision, and multi-sensor integration (camera, RFID, radar) to drive solution performance, optimization, and business success.
Key Responsibilities:
• Lead end-to-end sensor integration for MLFF and ITMS platforms.
• Manage camera systems, ANPR, and data packet processing.
• Apply AI/ML techniques for performance optimization in computer vision.
• Collaborate with System Integrators and internal teams on architecture and implementation.
• Support B2G proposals (smart city, mining, infrastructure projects) with domain expertise.
• Drive continuous improvement in deployed MLFF solutions.
Key Competencies:
• Deep understanding of camera and sensor technologies, AI/ML for vision systems, and system integration.
• Experience in PoC development and solution optimization.
• Strong analytical, problem-solving, and collaboration skills.
• Familiarity with B2G environments and public infrastructure tenders preferred.
Qualification & Experience:
• Bachelor’s/Master’s in Electronics, Electrical, or Computer Science.
• 8–10 years of experience in camera technology, AI/ML, and sensor integration.
• Proven track record in system design, implementation, and field optimization.
🚀 Join GuppShupp: Build Bharat's First AI Lifelong Friend
GuppShupp's mission is nothing short of building Bharat's First AI Lifelong Friend. This is more than just a chatbot—it's about creating a truly personalized, consistently available companion that understands and grows with the user over a lifetime. We are pioneering this deeply personal experience using cutting-edge Generative AI.
We're hiring a Founding AI Engineer (1+ Year Experience) to join our small team of A+ builders and craft the foundational LLM and infrastructure behind this mission.
If you are passionate about:
- Deep personalization and managing complex user state/memory.
- Building high-quality, high-throughput AI tools.
- Next-level infrastructure at an incredible scale (millions of users).
What you'll do (responsibilities)
We're looking for an experienced individual contributor who enjoys working alongside other experienced engineers and iterating on AI systems.
Prompt Engineering & Testing
- Write, test, and iterate numerous prompt variations.
- Identify and fix failures, biases, or edge cases in AI responses.
Advanced LLM Development
- Engineer solutions for long-term conversational memory and statefulness in LLMs.
- Implement techniques (e.g., retrieval-augmented generation (RAG) or summarization) to effectively manage and extend the context window for complex tasks.
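One hedged sketch of how long-term conversational memory can be kept inside a finite context window: retain the last few turns verbatim and fold older turns into a running summary. The summarize function below is a stub for an LLM call, and the turn threshold is an arbitrary example, not GuppShupp's actual design.

    MAX_RECENT_TURNS = 6   # arbitrary example threshold

    def summarize(old_summary: str, dropped_turns: list[str]) -> str:
        # Placeholder: in practice, prompt the LLM to merge dropped_turns into old_summary.
        return (old_summary + " " + " ".join(dropped_turns)).strip()

    class ConversationMemory:
        def __init__(self) -> None:
            self.summary = ""
            self.recent: list[str] = []

        def add_turn(self, turn: str) -> None:
            self.recent.append(turn)
            if len(self.recent) > MAX_RECENT_TURNS:
                dropped = self.recent[:-MAX_RECENT_TURNS]
                self.recent = self.recent[-MAX_RECENT_TURNS:]
                self.summary = summarize(self.summary, dropped)

        def as_context(self) -> str:
            # This string is prepended to the prompt so the model "remembers" older turns.
            return f"Summary so far: {self.summary}\nRecent turns:\n" + "\n".join(self.recent)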
Collaboration & Optimization
- Work with product and growth teams to turn feature goals into effective technical prompts.
- Optimize prompts for diverse use cases (e.g., chat, content, personalization).
LLM Fine-Tuning & Management
- Prepare, clean, and format datasets for training.
- Run fine-tuning jobs on smaller, specialized language models.
- Assist in deploying, monitoring, and maintaining these models
What we're looking for (qualifications)
You are an AI Engineer who has successfully shipped systems in this domain for over a year—you won't need ramp-up time. We prioritize continuous learning and hands-on skill development over formal qualifications. Crucially, we are looking for a teammate driven by a sense of duty to the user and a passion for taking full ownership of their contributions.
- AI-based systems design and development: the entire pipeline from image/video ingest and metadata ingest to processing, encoding, and transmitting.
- Implementation and testing of advanced computer vision algorithms.
- Dataset search, preparation, annotation, training, testing, and fine-tuning of vision CNN models; multimodal AI, LLMs, hardware deployment, and explainability.
- Detailed analysis of results; documentation, version control, client support, and upgrades.
Experience- 6 to 8 years
Location- Bangalore
Job Description-
- Extensive experience with machine learning utilizing the latest analytical models in Python. (i.e., experience in generating data-driven insights that play a key role in rapid decision-making and driving business outcomes.)
- Extensive experience using Tableau, table design, PowerApps, Power BI, Power Automate, and cloud environments, or equivalent experience designing/implementing data analysis pipelines and visualization.
- Extensive experience using AI agent platforms. (AI = data analysis: a required skill for data analysts.)
- A statistics major or equivalent understanding of statistical analysis results interpretation.
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments.
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics (a tiny OCR sketch follows this list).
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
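A tiny illustration of the OCR and document-parsing exposure listed above, assuming pytesseract over a scanned page followed by a regex post-step; the invoice-number pattern and file name are made-up examples rather than part of this platform.

    import re
    import pytesseract
    from PIL import Image

    def extract_invoice_number(image_path: str) -> str | None:
        text = pytesseract.image_to_string(Image.open(image_path))   # raw OCR text
        match = re.search(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)", text, re.IGNORECASE)
        return match.group(1) if match else None

    print(extract_invoice_number("scanned_invoice.png"))   # hypothetical scanned document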
Full Stack Engineer
Position Description
Responsibilities
• Take design mockups provided by UX/UI designers and translate them into web pages or applications using HTML and CSS. Ensure that the design is faithfully replicated in the final product.
• Develop enabling frameworks and applications end to end, and enhance them with data analytics and AI enablement.
• Ensure effective Design, Development, Validation and Support activities in line with the Customer needs, architectural requirements, and ABB Standards.
• Support ABB business units through consulting engagements.
• Develop and implement machine learning models to solve specific business problems, such as predictive analytics, classification, and recommendation systems
• Perform exploratory data analysis, clean and preprocess data, and identify trends and patterns.
• Evaluate the performance of machine learning models and fine-tune them for optimal results.
• Create informative and visually appealing data visualizations to communicate findings and insights to non-technical stakeholders.
• Conduct statistical analysis, hypothesis testing, and A/B testing to support decision-making processes.
• Define the solution and project plan, identify and allocate team members, and track the project; work with data engineers to integrate, transform, and store data from various sources.
• Collaborate with cross-functional teams, including business analysts, data engineers, and domain experts, to understand business objectives and develop data science solutions.
• Prepare clear and concise reports and documentation to communicate results and methodologies.
• Stay updated with the latest data science and machine learning trends and techniques.
• Familiarity with ML Model Deployment as REST APIs.
Background
• Engineering graduate / Master's degree with rich exposure to Data Science, from a reputed institution
• Create responsive web designs that adapt to different screen sizes and devices using media queries and responsive design techniques.
• Write and maintain JavaScript code to add interactivity and dynamic functionality to web pages. This may include user input handling, form validation, and basic animations.
• Familiarity with front-end JavaScript libraries and frameworks such as React, Angular, or Vue.js. Depending on the projects, you may be responsible for working within these frameworks
• At least 6+ years of experience with AI/ML concepts and Python (preferred); knowledge of deep learning frameworks like PyTorch and TensorFlow is preferred
• Domain knowledge of manufacturing/process industries, physics, and first-principles-based analysis
• Analytical thinking for translating data into meaningful insights that can be consumed by ML models for training and prediction.
• Should be able to deploy models using cloud services like Azure Databricks or Azure ML Studio. Familiarity with technologies like Docker, Kubernetes, and MLflow is good to have.
• Agile development of customer-centric prototypes or ‘Proof of Concepts’ for focused digital solutions
• Good communication skills; must be able to discuss the requirements effectively with the client teams and with internal teams.
Hi Shanon, as discussed we have one open req for an AI role. The right skill set I am looking for is experience with RAG pipelines and performance monitoring and tuning of prompts. Multi-modal RAG and agentic AI will come soon, but not right away. Needless to say, Python and NLP library experience is a must. Database knowledge is also essential. The developer needs to be in the Morgan Stanley offices 3 days per week. The BLR location is preferred; if not, Mumbai is also fine. The duration is long term since we are looking to expand the use cases. Please let me know if you have any questions.
Role: Data Scientist (Python + R Expertise)
Exp: 8 -12 Years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested kindly share your updated resume on 82008 31681
Role: Sr. Data Scientist
Exp: 4 -8 Years
CTC: up to 28 LPA
Technical Skills:
o Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
o Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
o Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
o Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
o Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
o Exposure to natural language processing (NLP) techniques is a plus.
Cloud & Infrastructure:
o Strong expertise in Azure cloud ecosystem,
o Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.
If interested kindly share your updated resume at 82008 31681
Role: Sr. Data Scientist
Exp: 4-8 Years
CTC: up to 25 LPA
Technical Skills:
● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
● Exposure to natural language processing (NLP) techniques is a plus.
• Educational Qualifications:
- B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
- A master’s degree in computer science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred
If interested share your resume on 82008 31681
About HealthAsyst
HealthAsyst is a leading technology company based out of Bangalore, India, focusing on the US healthcare market with a product and services portfolio.
HealthAsyst's IT services division offers a whole gamut of software services, helping clients effectively address their operational challenges. The services include product engineering, maintenance, quality assurance, custom development, implementation, and healthcare integration. The product division of HealthAsyst partners with leading EHR, PMS, and RIS vendors to provide cutting-edge patient engagement solutions to small and large provider groups in the US market.
Role and Responsibilities
- Act as a customer-facing AI expert, assisting in client consultations.
- Own the solutioning process and align AI projects with client requirements.
- Drive the development of AI and ML solutions that address business problems.
- Collaborate with development teams and solution architects to develop and integrate AI solutions.
- Monitor and evaluate the performance and impact of AI and ML solutions and ensure continuous improvement and optimization.
- Design and develop AI/ML models to solve complex business problems using supervised, unsupervised, and reinforcement learning techniques.
- Build, train, and evaluate machine learning pipelines, including data preprocessing, feature engineering, and model tuning.
- Establish and maintain best practices and standards for architecture, AI and ML models, innovation, and new technology evaluation.
- Collaborate with software developers to integrate AI capabilities into applications and workflows.
- Develop APIs and microservices to serve AI models for real-time or batch inference.
- Foster a culture of innovation and collaboration within the COE, across teams and provide mentorship/guidance to the team members.
- Implement responsible AI practices, including model explainability, fairness, bias detection, and compliance with ethical standards.
- Experience in deploying AI models into production environments using tools like TensorFlow Serving, TorchServe, or container-based deployment (Docker, Kubernetes).
Qualifications
- 3+ years of experience in AI and ML projects.
- Proven track record of delivering successful AI and ML solutions that address complex business problems.
- Expertise in design, development, deployment and monitoring of AI ML solutions in production.
- Proficiency in various AI and ML techniques and tools, such as deep learning, NLP, computer vision, ML frameworks, cloud platforms, etc.
- 1+ years of experience in building Generative AI applications leveraging Prompt Engineering and RAG.
- Preference will be given to candidates with experience in Agentic AI, MCP, and the A2A (Agent2Agent) protocol.
- Strong leadership, communication, presentation and stakeholder management skills.
- Ability to think strategically, creatively and analytically, and to translate business requirements into AI and ML solutions.
- Passion for learning and staying updated with the latest developments and trends in the field of AI and ML.
- Demonstrated commitment to ethical and socially responsible AI practices
Employee Benefits:
HealthAsyst provides the following health and wellness benefits, covering a range of physical and mental well-being needs for its employees.
- Bi-Annual Salary Reviews
- Flexible working hours
- 3 days Hybrid model
- GMC (Group Mediclaim): Provides Insurance coverage of Rs. 3 lakhs + a corporate buffer of 2 Lakhs per family. This is a family floater policy, and the company covers all the employees, spouse, and up to two children
- Employee Wellness Program: HealthAsyst offers unlimited online doctor consultations for self and family across 31 specialties at no cost to employees. In-person OPD consultations with GP doctors are also available at no cost to employees.
- GPA (Group Personal Accident): Provides insurance coverage of Rs. 20 lakhs to the employee against the risk of death/injury during the policy period sustained due to an accident
- GTL (Group Term Life): Provides life term insurance protection to employees in case of death. The coverage is one time of the employee’s CTC
- Employee Assistance Program: HealthAsyst offers complete confidential counselling services to employees & family members for mental wellbeing
- Sponsored upskilling program: The company will sponsor up to 1 Lakh for certifications/higher education/skill upskilling.
- Flexible Benefits Plan – covering a range of components like
a. National Pension System.
b. Internet/Mobile Reimbursements.
c. Fuel Reimbursements.
d. Professional Education Reimbursements.
We're Hiring: Machine Learning & Data Science Engineer
Location: Gurugram / Bengaluru (Full-time, In-Office)
Salary: Up to ₹2.5 Cr
Preferred Qualifications: PhDs, Tier-1 Grads (IITs, IISc, top global universities)
Join a stealth, VC-backed startup operating across the US, India, and EU, shaping the future of AI-driven Observability utilizing LLMs, Generative AI, and cutting-edge ML technologies. Collaborate with visionary founders experienced in scaling billion-dollar products.
🔍 What You’ll Do:
- Develop advanced time series models for anomaly detection & forecasting (a small illustrative sketch follows this list)
- Create LLM-powered Root Cause Analysis systems employing causal inference & ML techniques
- Innovate using LLMs for enhanced time series comprehension
- Build real-time ML pipelines & scalable MLOps workflows
- Utilize Bayesian methods, causality, counterfactuals, and agent evaluation frameworks
- Handle extensive datasets in Python (TensorFlow, PyTorch, Scikit-Learn, Statsmodels, etc.)
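As a toy illustration of the time-series anomaly detection mentioned above (not the team's actual method), the sketch below flags points whose rolling z-score exceeds a threshold; the window size and threshold are arbitrary assumptions.

import numpy as np
import pandas as pd

# Synthetic latency series with one injected spike.
rng = np.random.default_rng(42)
series = pd.Series(rng.normal(loc=100, scale=5, size=200))
series.iloc[150] = 160  # injected anomaly

window = 30        # assumed rolling window
threshold = 3.0    # assumed z-score cutoff

rolling_mean = series.rolling(window).mean()
rolling_std = series.rolling(window).std()
z_scores = (series - rolling_mean) / rolling_std

anomalies = series[z_scores.abs() > threshold]
print(anomalies)   # expected: the injected spike at index 150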
✅ What We’re Looking For:
- Minimum 5 years of experience in ML, time series, and causal analytics
- Proficiency in Python & the ML ecosystem
- In-depth understanding of causal inference, Bayesian statistics, LLMs
- Background in ML Ops, scalable systems, and production deployment
- Additional expertise in Observability, AI agents, or LLM Ops is a plus
💡 Why Join:
- Contribute to building a groundbreaking product from inception
- Tackle real-world impactful challenges alongside a top-tier team
- Engage in a culture that prioritizes ownership, agility, and creativity
Apply here: https://whitetable.ai/form/machine-learning-data-science-engineer-dc784b
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).
Company name: JPMorgan (JPMC)
Job Category: Predictive Science
Location: Parcel 9, Embassy Tech Village, Outer Ring Road, Deverabeesanhalli Village, Varthur Hobli, Bengaluru
Job Schedule: Full time
JOB DESCRIPTION
JPMC is hiring the best talent to join the growing Asset and Wealth Management AI team. We are executing like a startup and building next-generation technology that combines JPMC's unique data and full-service advantage to develop high-impact AI applications and platforms in the financial services industry. We are looking for a hands-on ML Engineering leader and expert who is excited about the opportunity.
As a senior ML and GenAI engineer, you will play a lead role as a senior member of our global team. Your responsibilities will entail hands-on development of high-impact business solutions through data analysis, developing cutting-edge ML and LLM models, and deploying these models to production environments on AWS or Azure.
You'll combine your years of proven development expertise with a never-ending quest to create innovative technology through solid engineering practices. Your passion and experience in one or more technology domains will help solve complex business problems to serve our Private Bank clients. As a constant learner and early adopter, you’re already embracing leading-edge technologies and methodologies; your example encourages others to follow suit.
Job responsibilities
• Hands-on architecture and implementation of lighthouse ML and LLM-powered solutions
• Close partnership with peers in a geographically dispersed team and colleagues across organizational lines
• Collaborate across JPMorgan AWM’s lines of business and functions to accelerate adoption of common AI capabilities
• Design and implement highly scalable and reliable data processing pipelines and deploy model inference services.
• Deploy solutions into public cloud infrastructure
• Experiment, develop and productionize high quality machine learning models, services, and platforms to make a huge technology and business impact
Required qualifications, capabilities, and skills
• Formal training or certification on software engineering concepts and 5+ years applied experience
• MS in Computer Science, Statistics, Mathematics or Machine Learning.
• Development experience, along with hands-on Machine Learning Engineering
• Proven leadership capacity, including new AI/ML idea generation and GenAI-based solutions
• Solid Python programming skills required; experience with another high-performance language such as Go is a big plus
• Expert knowledge of one of the cloud computing platforms preferred: Amazon Web Services (AWS), Azure, Kubernetes.
• Experience in using LLMs (OpenAI, Claude or other models) to solve business problems, including full workflow toolset, such as tracing, evaluations and guardrails. Understanding of LLM fine-tuning and inference a plus
• Knowledge of data pipelines, both batch and real-time data processing on both SQL (such as Postgres) and NoSQL stores (such as OpenSearch and Redis)
• Expertise in application, data, and infrastructure architecture disciplines
• Deep knowledge in Data structures, Algorithms, Machine Learning, Data Mining, Information Retrieval, Statistics.
• Excellent communication skills and ability to communicate with senior technical and business partners
Preferred qualifications, capabilities, and skills
• Expert in at least one of the following areas: Natural Language Processing, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis.
• Knowledge of machine learning frameworks: Pytorch, Keras, MXNet, Scikit-Learn
• Understanding of finance or wealth management businesses is an added advantage
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
ABOUT THE TEAM
J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.
Job Description – Machine Learning Expert
Role: Machine Learning Expert
Experience: 6+ Years
Location: Bangalore
Education: B.Tech Degree (Computer Science / Information Technology / Data Science / related fields)
Work Mode: Hybrid – 3 Days Office + 3 Days Work from Home
Interview Mode: Candidate must be willing to attend Face-to-Face (F2F) L2 round at Bangalore location
About the Role
We are seeking a highly skilled Machine Learning Expert with a strong background in building, training, and deploying AI/ML models. The ideal candidate will bring hands-on expertise in designing intelligent systems that leverage advanced algorithms, deep learning, and data-driven insights to solve complex business challenges.
Key Responsibilities
- Develop and implement machine learning and deep learning models for real-world business use cases.
- Perform data cleaning, preprocessing, feature engineering, and model optimization.
- Research, design, and apply state-of-the-art ML techniques across domains such as NLP, Computer Vision, or Predictive Analytics.
- Collaborate with data engineers and software developers to ensure seamless end-to-end ML solution deployment.
- Deploy ML models to production environments and monitor performance for scalability and accuracy.
- Stay updated with the latest advancements in Artificial Intelligence and Machine Learning frameworks.
Required Skills & Qualifications
- B.Tech degree in Computer Science, Information Technology, Data Science, or related discipline.
- 6+ years of hands-on experience in Machine Learning, Artificial Intelligence, and Deep Learning.
- Strong expertise in Python and ML frameworks such as TensorFlow, PyTorch, Scikit-learn, Keras.
- Solid foundation in mathematics, statistics, algorithms, and probability.
- Experience in working with NLP, Computer Vision, Recommendation Systems, or Predictive Modeling.
- Knowledge of cloud platforms (AWS / GCP / Azure) for model deployment.
- Familiarity with MLOps tools and practices for lifecycle management.
- Excellent problem-solving skills and the ability to work in a collaborative environment.
Preferred Skills (Good to Have)
- Experience with Big Data frameworks (Hadoop, Spark).
- Exposure to Generative AI, LLMs (Large Language Models), and advanced AI research.
- Contributions to open-source projects, publications, or patents in AI/ML.
Work Mode & Interview Process
- Hybrid Model: 3 days in office (Bangalore) + 3 days remote.
- Interview: Candidate must be available for Face-to-Face L2 interview at Bangalore location.
Senior Cloud & ML Infrastructure Engineer
Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi
Experience: 6–10+ Years
Night Shift - 9 pm to 6 am
About the Role:
We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
Key Responsibilities:
● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR
● Design and implement multi-region, highly available AWS solutions for real-time inference and batch processing
● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK (a minimal CDK sketch follows this list)
● Establish CI/CD practices for ML model packaging, validation, and drift monitoring
● Oversee infrastructure security, including IAM policies, encryption at rest/in transit, and compliance standards
● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale
● Collaborate on data lake and analytics integration
● Serve as a technical mentor and guide AWS adoption patterns across engineering teams
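To make the IaC item above concrete, here is a minimal, hypothetical AWS CDK (Python, v2) sketch that provisions a versioned S3 bucket for model artifacts; the stack and bucket names are illustrative, not from this posting.

from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class MlArtifactsStack(Stack):
    """Illustrative stack: one versioned bucket for model artifacts."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "ModelArtifacts",
            versioned=True,                        # keep prior model versions
            removal_policy=RemovalPolicy.RETAIN,   # keep artifacts on stack teardown
        )

app = App()
MlArtifactsStack(app, "MlArtifactsStack")
app.synth()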
Required Skills:
● 6+ years designing and deploying cloud infrastructure on AWS at scale
● Proven experience building and maintaining ML pipelines with services like SageMaker, ECS/EKS, or custom Docker pipelines
● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS
● Deep experience with automation frameworks, IaC tools, and CI/CD strategies
● Advanced scripting proficiency in Python, Go, or Bash
● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)
Nice to Have:
● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments
● Experience designing systems for physical robot fleet telemetry, diagnostics, and control
● Familiarity with multi-stage production environments and robotic software rollout processes
● Competence in frontend hosting for dashboard or API visualization
● Involvement with real-time streaming, MQTT, or edge inference workflows
● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices
🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5 days working, Work from Office
Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here:
Job Title: Data Scientist
Location: Bangalore (Hybrid/On-site depending on project needs)
About the Role
We are seeking a highly skilled Data Scientist to join our team in Bangalore. In this role, you will take ownership of data science components across client projects, build production-ready ML and GenAI-powered applications, and mentor junior team members. You will collaborate with engineering teams to design and deploy impactful solutions that leverage cutting-edge machine learning and large language model technologies.
Key Responsibilities
ML & Data Science
- Develop, fine-tune, and evaluate ML models (classification, regression, clustering, recommendation systems).
- Conduct exploratory data analysis, preprocessing, and feature engineering.
- Ensure model reproducibility, scalability, and alignment with business objectives.
GenAI & LLM Applications
- Prototype and design solutions leveraging LLMs (OpenAI, Claude, Mistral, Llama).
- Build RAG (Retrieval-Augmented Generation) pipelines, prompt templates, and evaluation frameworks.
- Integrate LLMs with APIs and vector databases (Pinecone, FAISS, Weaviate); a small FAISS similarity-search sketch follows this responsibilities section.
Product & Engineering Collaboration
- Partner with engineering teams to productionize ML/GenAI models.
- Contribute to API development, data pipelines, technical documentation, and client presentations.
Team & Growth
- Mentor junior data scientists and review technical contributions.
- Stay up to date with the latest ML & GenAI research and tools; share insights across the team.
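To illustrate the vector-database integration mentioned under "GenAI & LLM Applications", here is a minimal FAISS similarity-search sketch; the embedding dimension and random vectors are placeholders for real document embeddings.

import numpy as np
import faiss  # assumes faiss-cpu is installed

dim = 384  # placeholder embedding dimension
rng = np.random.default_rng(0)

# Stand-ins for document embeddings produced by an embedding model.
doc_embeddings = rng.random((1000, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)   # exact L2 search; fine for small corpora
index.add(doc_embeddings)

query = rng.random((1, dim)).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0])  # indices of the 5 nearest documents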
Required Skills & Qualifications
- 4.5–9 years of applied data science experience.
- Strong proficiency in Python and ML libraries (scikit-learn, XGBoost, LightGBM).
- Hands-on experience with LLM APIs (OpenAI, Cohere, Claude) and frameworks (LangChain, LlamaIndex).
- Strong SQL, data wrangling, and analysis skills (pandas, NumPy).
- Experience working with APIs, Git, and cloud platforms (AWS/GCP).
Good-to-Have
- Deployment experience with FastAPI, Docker, or serverless frameworks.
- Familiarity with MLOps tools (MLflow, DVC).
- Experience working with embeddings, vector databases, and similarity search.
Title: Quantitative Developer
Location : Mumbai
Candidates with a Master's degree are preferred.
Who We Are
At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.
We operate a robust, ultra-low latency infrastructure built for market-making and active trading across Equities, Futures, and Options—with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world class capabilities.
What You Will Do
This role offers a unique opportunity to work across both quantitative development and high frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and directly participate in live trading execution and strategy enhancement.
1. Quantitative Strategy & Trading Execution
- Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
- Trade across options, equities, and futures, using proprietary HFT platforms
- Monitor and manage PnL performance, targeting Sharpe ratios of 6+
- Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
- Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
2. Trading Systems Development
- Build and enhance low-latency, high-throughput trading systems
- Develop tools to simulate trading strategies and access historical market data
- Design performance-optimized data structures and algorithms for fast execution
- Implement real-time risk management and performance tracking systems
3. Algorithmic and Quantitative Analysis
- Collaborate with researchers and traders to integrate strategies into live environments
- Use statistical methods and data-driven analysis to validate and refine models
- Work with large-scale HFT tick data using Python / C++
4. AI/ML Integration
- Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
- Analyze large datasets to detect patterns and alpha signals
5. System & Network Optimization
- Optimize distributed and concurrent systems for high-transaction throughput
- Enhance platform performance through network and systems programming
- Utilize deep knowledge of TCP/UDP and network protocols
6. Collaboration & Mentorship
- Collaborate cross-functionally with traders, engineers, and data scientists
- Represent Dolat in campus recruitment and industry events as a technical mentor
What We Are Looking For:
- Strong foundation in data structures, algorithms, and object-oriented programming (C++).
- Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Hands-on experience in systems programming within a Linux environment.
- Proficient, hands-on programming in Python/C++.
- Familiarity with distributed computing and high-concurrency systems.
- Knowledge of network programming, including TCP/UDP protocols.
- Strong analytical and problem-solving skills.
- A passion for technology-driven solutions in the financial markets.
Proven experience as a Data Scientist or in a similar role, with at least 4 years of relevant experience and 6-8 years of total experience.
· Technical expertise regarding data models, database design and development, data mining, and segmentation techniques
· Strong knowledge of and experience with reporting packages (Business Objects and similar), databases, and programming in ETL frameworks
· Experience with data movement and management in the Cloud utilizing a combination of Azure or AWS features
· Hands on experience in data visualization tools – Power BI preferred
· Solid understanding of machine learning
· Knowledge of data management and visualization techniques
· A knack for statistical analysis and predictive modeling
· Good knowledge of Python and Matlab
· Experience with SQL and NoSQL databases including ability to write complex queries and procedures
About the Role
We are seeking an innovative Data Scientist specializing in Natural Language Processing (NLP) to join our technology team in Bangalore. The ideal candidate will harness the power of language models and document extraction techniques to transform legal information into accessible, actionable insights for our clients.
Responsibilities
- Develop and implement NLP solutions to automate legal document analysis and extraction
- Create and optimize prompt engineering strategies for large language models (an illustrative prompt sketch follows this list)
- Design search functionality leveraging semantic understanding of legal documents
- Build document extraction pipelines to process unstructured legal text data
- Develop data visualizations using PowerBI and Tableau to communicate insights
- Collaborate with product and legal teams to enhance our tech-enabled services
- Continuously improve model performance and user experience
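As a rough, hypothetical example of the prompt-engineering work described above, the sketch below builds a structured extraction prompt for a legal clause; the field names and the call_llm helper are placeholders for whichever LLM API is actually used.

import json

EXTRACTION_TEMPLATE = """You are a legal document analyst.
Extract the following fields from the clause and answer in JSON only:
- parties
- effective_date
- termination_notice_period

Clause:
\"\"\"{clause}\"\"\"
"""

def build_extraction_prompt(clause: str) -> str:
    """Fill the template; prompt versions like this can be A/B tested and monitored."""
    return EXTRACTION_TEMPLATE.format(clause=clause)

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM client call (e.g. an OpenAI or Anthropic SDK).
    return json.dumps({
        "parties": ["Acme Corp", "Beta LLC"],
        "effective_date": "2024-01-01",
        "termination_notice_period": "30 days",
    })

clause = "This Agreement between Acme Corp and Beta LLC is effective 1 Jan 2024..."
print(json.loads(call_llm(build_extraction_prompt(clause))))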
Requirements
- Bachelor's degree in relevant field
- 1-5 years of professional experience in data science, with focus on NLP applications
- Demonstrated experience working with LLM APIs (e.g., OpenAI, Anthropic)
- Proficiency in prompt engineering and optimization techniques
- Experience with document extraction and information retrieval systems
- Strong skills in data visualization tools, particularly PowerBI and Tableau
- Excellent programming skills in Python and familiarity with NLP libraries
- Strong understanding of legal terminology and document structures (preferred)
- Excellent communication skills in English
What We Offer
- Competitive salary and benefits package
- Opportunity to work at India's largest legal tech company
- Professional growth in the fast-evolving legal technology sector
- Collaborative work environment with industry experts
- Modern office located in Bangalore
- Flexible work arrangements
Qualified candidates are encouraged to apply with a resume highlighting relevant experience with NLP, prompt engineering, and data visualization tools.
Location: Bangalore, India
3+ years of experience in cybersecurity, with a focus on application and cloud security.
· Proficiency in security tools such as Burp Suite, Metasploit, Nessus, OWASP ZAP, and SonarQube.
· Familiarity with data privacy regulations (GDPR, CCPA) and best practices.
· Basic knowledge of AI/ML security frameworks and tools.
Role Overview:
Zolvit is looking for a highly skilled and self-driven Lead Machine Learning Engineer / Lead Data Scientist to lead the design and development of scalable, production-grade ML systems. This role is ideal for someone who thrives on solving complex problems using data, is deeply passionate about machine learning, and has a strong understanding of both classical techniques and modern AI systems like Large Language Models (LLMs).
You will work closely with engineering, product, and business teams to identify impactful ML use cases, build data pipelines, design training workflows, and ensure the deployment of robust, high-performance models at scale.
Key Responsibilities:
● Design and implement scalable ML systems, from experimentation to deployment.
● Build and maintain end-to-end data pipelines for data ingestion, preprocessing, feature engineering, and monitoring.
● Lead the development and deployment of ML models across a variety of use cases — including classical ML and LLM-based applications like summarization, classification, document understanding, and more.
● Define model training and evaluation pipelines, ensuring reproducibility and performance tracking.
● Apply statistical methods to interpret data, validate assumptions, and inform modeling decisions.
● Collaborate cross-functionally with engineers, data analysts, and product managers to solve high-impact business problems using ML.
● Ensure proper MLOps practices are in place for model versioning, monitoring, retraining, and performance management.
● Keep up-to-date with the latest advancements in AI/ML, and actively evaluate and incorporate LLM capabilities and frameworks into solutions.
● Mentor junior ML engineers and data scientists, and help scale the ML function across the organization.
Required Qualifications:
● 7+ years of hands-on experience in ML/AI, building real-world ML systems at scale.
● Proven experience with classical ML algorithms (e.g., regression, classification, clustering, ensemble models).
● Deep expertise in modern LLM frameworks (e.g., OpenAI, HuggingFace, LangChain) and their integration into production workflows.
● Strong experience with Python, and frameworks such as Scikit-learn, TensorFlow, PyTorch, or equivalent.
● Solid background in statistics and the ability to apply statistical thinking to real-world problems.
● Experience with data engineering tools and platforms (e.g., Spark, Airflow, SQL, Pandas, AWS Glue, etc.).
● Familiarity with cloud services (AWS preferred) and containerization tools (Docker, Kubernetes) is a plus.
● Strong communication and leadership skills, with experience mentoring and guiding junior team members.
● Self-starter attitude with a bias for action and ability to thrive in fast-paced environments.
● Master’s degree in Machine Learning, Artificial Intelligence, Statistics, or a related field is preferred.
Preferred Qualifications:
● Experience deploying ML systems in microservices or event-driven architectures.
● Hands-on experience with vector databases, embeddings, and retrieval-augmented generation (RAG) systems.
● Understanding of Responsible AI principles and practices.
Why Join Us?
● Lead the ML charter in a mission-driven company solving real-world challenges.
● Work on cutting-edge LLM use cases and platformize ML capabilities for scale.
● Collaborate with a passionate and technically strong team in a high-impact environment.
● Competitive compensation, flexible working model, and ample growth opportunities.
Technical Skills – Must have
Lead the design and development of AI-driven test automation frameworks and solutions.
Collaborate with stakeholders (e.g., product managers, developers, data scientists) to understand testing requirements and identify areas where AI automation can be effectively implemented.
Develop and implement test automation strategies for AI-based systems, encompassing various aspects like data generation, model testing, and performance evaluation.
Evaluate and select appropriate tools and technologies for AI test automation, including AI frameworks, testing tools, and automation platforms.
Define and implement best practices for AI test automation, covering areas like code standards, test case design, test data management, and ethical considerations.
Lead and mentor a team of test automation engineers in designing, developing, and executing AI test automation solutions.
Collaborate with development teams to ensure the testability of AI models and systems, providing guidance and feedback throughout the development lifecycle.
Analyze test results and identify areas for improvement in the AI automation process, continuously optimizing testing effectiveness and efficiency.
Stay up-to-date with the latest advancements and trends in AI and automation technologies, actively adapting and implementing new knowledge to enhance testing capabilities.
Knowledge in Generative AI and Conversational AI for implementation in test automation strategies is highly desirable.
Proficiency in programming languages commonly used in AI, such as Python, Java, or R.
Knowledge on AI frameworks and libraries, such as TensorFlow, PyTorch, or scikit-learn.
Familiarity with testing methodologies and practices, including Agile and DevOps.
Working experience with Python/Java and Selenium, along with knowledge of prompt engineering.
Senior Data Engineer Job Description
Overview
The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.
Key Responsibilities
- Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
- Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
- Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
- Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
- Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
- Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
- Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
- Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
- Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.
Required Qualifications
- Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.
- Technical Skills:
  - Proficiency in programming languages such as Python, Java, or Scala.
  - Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
  - Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
  - Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
  - Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
  - Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Soft Skills:
  - Excellent problem-solving and analytical skills.
  - Strong communication and collaboration abilities.
  - Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
- Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.
Preferred Qualifications
- Experience with real-time data processing and streaming architectures.
- Familiarity with machine learning pipelines and MLOps practices.
- Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
- Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.
Work Environment
- Location: Hybrid/Remote/On-site (depending on company policy).
- Team: Collaborative, cross-functional team environment with data scientists, analysts, and business stakeholders.
- Hours: Full-time, with occasional on-call responsibilities for critical data systems.
What You will do:
● Create beautiful software experiences for our clients using design thinking, lean, and agile methodology.
● Work on software products designed from scratch using the latest cutting-edge technologies, platforms, and languages such as NodeJS, JavaScript.
● Work in a dynamic, collaborative, transparent, non-hierarchical culture.
● Work in collaborative, fast-paced, and value-driven teams to build innovative customer experiences for our clients.
● Help to grow the next generation of developers and have a positive impact on the industry.
Basic Qualifications :
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages such as NodeJS.
● Server-side development experience, mainly in NodeJS, is a considerable plus.
● UI development experience in AngularJS.
● Passion for software engineering and following the best coding concepts.
● Good to great problem-solving and communication skills.
Nice to have Qualifications:
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with devops, continuous integration & deployment.
● Exposure to big data technologies, Machine Learning, and NLP will be a plus.
What You will do:
● Play the role of Data Analyst / ML Engineer
● Collection, cleanup, exploration and visualization of data
● Perform statistical analysis on data and build ML models
● Implement ML models using some of the popular ML algorithms
● Use Excel to perform analytics on large amounts of data
● Understand, model, and build solutions that bring actionable business intelligence out of data available in different formats
● Work with data engineers to design, build, test and monitor data pipelines for ongoing business operations
Basic Qualifications:
● Experience: 4+ years.
● Hands-on development experience playing the role of Data Analyst and/or ML Engineer.
● Experience in working with excel for data analytics
● Experience with statistical modelling of large data sets
● Experience with ML models and ML algorithms
● Coding experience in Python
Nice to have Qualifications:
● Experience with wide variety of tools used in ML
● Experience with Deep learning
Benefits:
● Competitive salary.
● Hybrid work model.
● Learning and gaining experience rapidly.
● Reimbursement for basic working set up at home.
● Insurance (including top-up insurance for COVID).
Responsibilities
• Model Development and Optimization: Design, build, and deploy NLP models, including transformer models (e.g., BERT, GPT, T5) and other SOTA architectures, as well as traditional machine learning algorithms (e.g., SVMs, Logistic Regression) for specific applications.
• Data Processing and Feature Engineering: Develop robust pipelines for text preprocessing, feature extraction, and data augmentation for structured and unstructured data.
• Model Fine-Tuning and Transfer Learning:
Fine-tune large language models for specific applications, leveraging transfer learning techniques, domain adaptation, and a mix of deep learning and traditional ML models.
• Performance Optimization: Optimize model performance for scalability and latency, applying techniques such as quantization and export to ONNX format (a minimal sketch follows this list).
• Research and Innovation: Stay updated with the latest research in NLP, Deep Learning, and Generative AI, applying innovative solutions and techniques (e.g., RAG applications, Prompt engineering, Self-supervised learning).
• Stakeholder Communication: Collaborate with stakeholders to gather requirements, conduct due diligence, and communicate project updates effectively, ensuring alignment between technical solutions and business goals.
• Evaluation and Testing: Establish metrics, benchmarks, and methodologies for model evaluation, including cross-validation and error analysis, ensuring models meet accuracy, fairness, and reliability standards.
• Deployment and Monitoring: Oversee the deployment of NLP models in production, ensuring seamless integration, model monitoring, and retraining processes.
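As a small, hypothetical illustration of the quantization and ONNX export mentioned above (the model and input shapes are placeholders, not from this posting):

import torch
import torch.nn as nn

# Placeholder classifier head standing in for a fine-tuned NLP model.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))
model.eval()

# Export to ONNX so the model can run on optimized inference runtimes.
dummy_input = torch.randn(1, 768)
torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",
    input_names=["features"],
    output_names=["logits"],
)

# Post-training dynamic quantization of the PyTorch model (Linear weights to int8).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)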
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri role, in office, with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
Link to apply- https://tally.so/r/w8RpLk
About Us
At GreenStitch, we are on a mission to revolutionise the fashion and textile industry through cutting-edge climate-tech solutions. We are looking for a highly skilled Data Scientist-I with expertise in NLP and Deep Learning to drive innovation in our data-driven applications. This role requires a strong foundation in AI/ML, deep learning, and software engineering to develop impactful solutions aligned with our sustainability and business objectives.
The ideal candidate demonstrates up-to-date expertise in Deep Learning and NLP and applies it to the development, execution, and improvement of applications. This role involves supporting and aligning efforts to meet both customer and business needs.
You will play a key role in building strong relationships with stakeholders, identifying business challenges, and implementing AI-driven solutions. You should be adaptable to competing demands, organisational changes, and new responsibilities while upholding GreenStitch’s mission, values, and ethical standards.
What You’ll Do:
- AI-Powered Applications: Build and deploy AI-driven applications leveraging Generative AI and NLP to enhance user experience and operational efficiency.
- Model Development: Design and implement deep learning models and machine learning pipelines to solve complex business challenges.
- Cross-Functional Collaboration: Work closely with product managers, engineers, and business teams to identify AI/ML opportunities.
- Innovation & Experimentation: Stay up-to-date with the latest AI/ML advancements, rapidly prototype solutions, and iterate on ideas for continuous improvement.
- Scalability & Optimisation: Optimise AI models for performance, scalability, and real-world impact, ensuring production readiness.
- Knowledge Sharing & Thought Leadership: Contribute to publications, patents, and technical forums; represent GreenStitch in industry and academic discussions.
- Compliance & Ethics: Model compliance with company policies and ensure AI solutions align with ethical standards and sustainability goals.
- Communication: Translate complex AI/ML concepts into clear, actionable insights for business stakeholders.
What You’ll Bring:
- Education: Bachelor’s/Master’s degree in Data Science, Computer Science, AI, or a related field.
- Experience: 1-3 years of hands-on experience in AI/ML, NLP, and Deep Learning.
- Technical Expertise: Strong knowledge of transformer-based models (GPT, BERT, etc.), deep learning frameworks (TensorFlow, PyTorch), and cloud platforms (Azure, AWS, GCP).
- Software Development: Experience in Python, Java, or similar languages, and familiarity with MLOps tools.
- Problem-Solving: Ability to apply AI/ML techniques to real-world challenges with a data-driven approach.
- Growth Mindset: Passion for continuous learning and innovation in AI and climate-tech applications.
- Collaboration & Communication: Strong ability to work with cross-functional teams, communicate complex ideas, and drive AI adoption.
Why GreenStitch?
GreenStitch is at the forefront of climate-tech innovation, helping businesses in the fashion and textile industry reduce their environmental footprint. By joining us, you will:
- Work on high-impact AI projects that contribute to sustainability and decarbonisation.
- Be part of a dynamic and collaborative team committed to making a difference.
- Enjoy a flexible, hybrid work model that supports professional growth and work-life balance.
- Receive competitive compensation and benefits, including healthcare, parental leave, and learning opportunities.
Location: Bangalore(India)
Employment Type: Full Time, Permanent
Industry Type: Climate-Tech / Fashion-Tech
Department: Data Science & Machine Learning
Join us in shaping the future of sustainable fashion with cutting-edge AI solutions! 🚀
Level of skills and experience:
5 years of hands-on experience in using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.
We are seeking a Senior Data Scientist with hands-on experience in Generative AI (GenAI) and Large Language Models (LLM). The ideal candidate will have expertise in building, fine-tuning, and deploying LLMs, as well as managing the lifecycle of AI models through LLMOps practices. You will play a key role in driving AI innovation, developing advanced algorithms, and optimizing model performance for various business applications.
Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLM) for various business use cases.
- Implement and manage the operationalization of LLMs using LLMOps best practices.
- Collaborate with cross-functional teams to integrate AI models into production environments.
- Optimize and troubleshoot model performance to ensure high accuracy and scalability.
- Stay updated with the latest advancements in Generative AI and LLM technologies.
Required Skills and Qualifications:
- Strong hands-on experience with Generative AI, LLMs, and NLP techniques.
- Proven expertise in LLMOps, including model deployment, monitoring, and maintenance.
- Proficiency in programming languages like Python and frameworks such as TensorFlow, PyTorch, or Hugging Face.
- Solid understanding of AI/ML algorithms and model optimization.
Overview
C5i is a pure-play AI & Analytics provider that combines the power of human perspective with AI technology to deliver trustworthy intelligence. The company drives value through a comprehensive solution set, integrating multifunctional teams that have technical and business domain expertise with a robust suite of products, solutions, and accelerators tailored for various horizontal and industry-specific use cases. At the core, C5i’s focus is to deliver business impact at speed and scale by driving adoption of AI-assisted decision-making.
C5i caters to some of the world’s largest enterprises, including many Fortune 500 companies. The company’s clients span Technology, Media, and Telecom (TMT), Pharma & Lifesciences, CPG, Retail, Banking, and other sectors. C5i has been recognized by leading industry analysts like Gartner and Forrester for its Analytics and AI capabilities and proprietary AI-based platforms.
Global offices
United States | Canada | United Kingdom | United Arab Emirates | India
Job Responsibilities
- Solve complex business problems by applying NLP, CV and Machine Learning techniques. Integrate these algorithms into multiple product and solution offerings.
- Conduct research to advance the field of generative AI, including staying up-to-date with the latest developments and techniques. Develop new generative models or improve existing ones.
- Build cognitive capabilities in multiple product and solution offerings.
- Systemic study of latest research and industry developments in Artificial Intelligence. Leverage these developments to solve real-world business problems.
- Envisage next-generation challenges and recommend solutions to solve them.
- Facilitate the IP generation process in the organization and work towards patenting of inventions.
Requirements & Qualifications:
- B.E/B.TECH/MS/M.TECH/MCA.
- Knowledge / Competency / Skills: - Artificial Intelligence, Machine Learning, Deep Learning, Computer Vision, Natural Language Processing (NLP).
- Knowledge of various Generative AI models and Prompt Engineering.
- Hands-on coding skills in Python, with at least 4 years of Python experience. Strong NLP skills preferred.
C5i is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, religion, sex, sexual orientation, age, marital status, disability, gender identity, etc. If you have a disability or special need that requires accommodation, please keep us informed about the same at the hiring stages for us to factor necessary accommodations.
Key Responsibilities:
- Develop and maintain scalable Python applications for AI/ML projects.
- Design, train, and evaluate machine learning models for classification, regression, NLP, computer vision, or recommendation systems.
- Collaborate with data scientists, ML engineers, and software developers to integrate models into production systems.
- Optimize model performance and ensure low-latency inference in real-time environments.
- Work with large datasets to perform data cleaning, feature engineering, and data transformation.
- Stay current with new developments in machine learning frameworks and Python libraries.
- Write clean, testable, and efficient code following best practices.
- Develop RESTful APIs and deploy ML models via cloud or container-based solutions (e.g., AWS, Docker, Kubernetes).
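As an illustrative sketch of the REST-based model deployment described in the last bullet (the file name and feature schema are assumptions, not part of this posting):

from fastapi import FastAPI
from pydantic import BaseModel
import joblib  # assumes a scikit-learn model was saved earlier as model.joblib

class Features(BaseModel):
    values: list[float]  # placeholder feature vector

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact path

@app.post("/predict")
def predict(features: Features) -> dict:
    """Run low-latency inference on a single feature vector."""
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload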
Share CV to
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three