50+ Machine Learning (ML) Jobs in India



About HealthAsyst
HealthAsyst is a leading technology company based in Bangalore, India, focused on the US healthcare market with both a product and a services portfolio.
HealthAsyst’s IT services division offers a whole gamut of software services, helping clients effectively address their operational challenges. These services include product engineering, maintenance, quality assurance, custom development, implementation, and healthcare integration. The product division of HealthAsyst partners with leading EHR, PMS, and RIS vendors to provide cutting-edge patient engagement solutions to small and large provider groups in the US market.
Role and Responsibilities
- Act as the customer-facing AI expert, assisting in client consultations.
- Own the solutioning process and align AI projects with client requirements.
- Drive the development of AI and ML solutions that address business problems.
- Collaborate with development teams and solution architects to develop and integrate AI solutions.
- Monitor and evaluate the performance and impact of AI and ML solutions and ensure continuous improvement and optimization.
- Design and develop AI/ML models to solve complex business problems using supervised, unsupervised, and reinforcement learning techniques.
- Build, train, and evaluate machine learning pipelines, including data preprocessing, feature engineering, and model tuning.
- Establish and maintain best practices and standards for architecture, AI and ML models, innovation, and new technology evaluation.
- Collaborate with software developers to integrate AI capabilities into applications and workflows.
- Develop APIs and microservices to serve AI models for real-time or batch inference.
- Foster a culture of innovation and collaboration within the COE, across teams and provide mentorship/guidance to the team members.
- Implement responsible AI practices, including model explainability, fairness, bias detection, and compliance with ethical standards.
- Deploy AI models into production environments using tools such as TensorFlow Serving, TorchServe, or container-based deployment (Docker, Kubernetes).
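To illustrate the model-serving responsibility above, here is a minimal sketch of the request/response contract an inference endpoint typically exposes. The `DummyModel` class and JSON schema are hypothetical stand-ins; in production this logic would sit behind TensorFlow Serving, TorchServe, or a containerized web framework.

```python
import json

class DummyModel:
    """Hypothetical stand-in for a trained model (a linear scorer).

    In a real deployment this would be a TensorFlow/PyTorch model
    loaded once at service startup.
    """
    def __init__(self, weights):
        self.weights = weights

    def predict(self, features):
        # Dot product of weights and input features.
        return sum(w * x for w, x in zip(self.weights, features))

MODEL = DummyModel(weights=[0.5, -0.2, 1.0])

def handle_request(body: str) -> str:
    """Handle one inference request: JSON in, JSON out.

    Mirrors the contract a real-time or batch serving endpoint
    would expose: a list of instances in, one prediction per row out.
    """
    payload = json.loads(body)
    scores = [MODEL.predict(row) for row in payload["instances"]]
    return json.dumps({"predictions": scores})
```

Calling `handle_request('{"instances": [[1, 2, 3]]}')` returns a JSON body with one prediction per input row; a container wrapping this handler is what Docker/Kubernetes would then scale.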
Qualifications
- 3+ years of experience in AI and ML projects.
- Proven track record of delivering successful AI and ML solutions that address complex business problems.
- Expertise in the design, development, deployment, and monitoring of AI/ML solutions in production.
- Proficiency in various AI and ML techniques and tools, such as deep learning, NLP, computer vision, ML frameworks, cloud platforms, etc.
- 1+ year of experience building Generative AI applications leveraging prompt engineering and RAG (Retrieval-Augmented Generation)
- Preference given to candidates with experience in agentic AI, MCP, and the A2A (Agent2Agent) protocol.
- Strong leadership, communication, presentation and stakeholder management skills.
- Ability to think strategically, creatively and analytically, and to translate business requirements into AI and ML solutions.
- Passion for learning and staying updated with the latest developments and trends in the field of AI and ML.
- Demonstrated commitment to ethical and socially responsible AI practices
Employee Benefits:
HealthAsyst provides the following health and wellness benefits, covering a range of physical and mental well-being needs for its employees:
- Bi-Annual Salary Reviews
- Flexible working hours
- 3-day hybrid model
- GMC (Group Mediclaim): Provides insurance coverage of Rs. 3 lakhs plus a corporate buffer of Rs. 2 lakhs per family. This is a family floater policy, and the company covers the employee, spouse, and up to two children.
- Employee Wellness Program: HealthAsyst offers unlimited online doctor consultations across 31 specialties for employees and their families at no cost. In-person OPD consultations with GP doctors are also available at no cost to employees.
- GPA (Group Personal Accident): Provides insurance coverage of Rs. 20 lakhs against the risk of death or injury sustained due to an accident during the policy period.
- GTL (Group Term Life): Provides term life insurance protection to employees in case of death. The coverage amount is 1x the employee’s CTC.
- Employee Assistance Program: HealthAsyst offers completely confidential counselling services to employees and family members for mental well-being.
- Sponsored upskilling program: The company will sponsor up to Rs. 1 lakh for certifications, higher education, or skill development.
- Flexible Benefits Plan, covering a range of components such as:
a. National Pension System.
b. Internet/Mobile Reimbursements.
c. Fuel Reimbursements.
d. Professional Education Reimbursements.

We're Hiring: Machine Learning & Data Science Engineer
Location: Gurugram / Bengaluru (Full-time, In-Office)
Salary: Up to ₹2.5 Cr
Preferred Qualifications: PhDs, Tier-1 Grads (IITs, IISc, top global universities)
Join a stealth, VC-backed startup operating across the US, India, and EU, shaping the future of AI-driven observability using LLMs, generative AI, and cutting-edge ML technologies. Collaborate with visionary founders experienced in scaling billion-dollar products.
🔍 What You’ll Do:
- Develop advanced time series models for anomaly detection & forecasting
- Create LLM-powered Root Cause Analysis systems employing causal inference & ML techniques
- Innovate using LLMs for enhanced time series comprehension
- Build real-time ML pipelines & scalable MLOps workflows
- Utilize Bayesian methods, causality, counterfactuals, and agent evaluation frameworks
- Handle extensive datasets in Python (TensorFlow, PyTorch, Scikit-Learn, Statsmodels, etc.)
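For flavor, the anomaly-detection work described above can be reduced to its simplest form: a rolling z-score detector over a univariate series. This is a toy sketch using only the standard library; the role's actual models (and any window/threshold values shown here) would be far more sophisticated.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the preceding `window` points.

    The `sigma > 0` guard skips constant histories, where a
    z-score is undefined.
    """
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies
```

For example, `zscore_anomalies([10, 11, 10, 11, 10, 50, 10], window=5)` flags only index 5, the spike to 50.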
✅ What We’re Looking For:
- Minimum 5 years of experience in ML, time series, and causal analytics
- Proficiency in Python & the ML ecosystem
- In-depth understanding of causal inference, Bayesian statistics, LLMs
- Background in ML Ops, scalable systems, and production deployment
- Additional expertise in Observability, AI agents, or LLM Ops is a plus
💡 Why Join:
- Contribute to building a groundbreaking product from inception
- Tackle real-world impactful challenges alongside a top-tier team
- Engage in a culture that prioritizes ownership, agility, and creativity
Apply here: https://whitetable.ai/form/machine-learning-data-science-engineer-dc784b


🌍 We’re Hiring: Senior Field AI Engineer | Remote | Full-time
Are you passionate about pioneering enterprise AI solutions and shaping the future of agentic AI?
Do you thrive in strategic technical leadership roles where you bridge advanced AI engineering with enterprise business impact?
We’re looking for a Senior Field AI Engineer to serve as the technical architect and trusted advisor for enterprise AI initiatives. You’ll translate ambitious business visions into production-ready applied AI systems, implementing agentic AI solutions for large enterprises.
What You’ll Do:
🔹 Design and deliver custom agentic AI solutions for mid-to-large enterprises
🔹 Build and integrate intelligent agent systems using frameworks like LangChain, LangGraph, CrewAI
🔹 Develop advanced RAG pipelines and production-grade LLM solutions
🔹 Serve as the primary technical expert for enterprise accounts and build long-term customer relationships
🔹 Collaborate with Solutions Architects, Engineering, and Product teams to drive innovation
🔹 Represent technical capabilities at industry conferences and client reviews
What We’re Looking For:
✔️ 7+ years of experience in AI/ML engineering with production deployment expertise
✔️ Deep expertise in agentic AI frameworks and multi-agent system design
✔️ Advanced Python programming and scalable backend service development
✔️ Hands-on experience with LLM platforms (GPT, Gemini, Claude) and prompt engineering
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS) and modern ML infrastructure
✔️ Cloud platform expertise (AWS, Azure, GCP) and MLOps/CI-CD knowledge
✔️ Strategic thinker able to balance technical vision with hands-on delivery in fast-paced environments
✨ Why Join Us:
- Drive enterprise AI transformation for global clients
- Work with a category-defining AI platform bridging agents and experts
- High-impact, customer-facing role with strategic influence
- Competitive benefits: medical, vision, dental insurance, 401(k)


🌍 We’re Hiring: Customer Facing Data Scientist (CFDS) | Remote | Full-time
Are you passionate about applied data science and enjoy partnering directly with enterprise customers to deliver measurable business impact?
Do you thrive in fast-paced, cross-functional environments and want to be the face of a cutting-edge AI platform?
We’re looking for a Customer Facing Data Scientist to design, develop, and deploy machine learning applications with our clients, helping them unlock the value of their data while building strong, trusted relationships.
What You’ll Do:
🔹 Collaborate directly with customers to understand their business challenges and design ML solutions
🔹 Manage end-to-end data science projects with a customer success mindset
🔹 Build long-term trusted relationships with enterprise stakeholders
🔹 Work across industries: Banking, Finance, Health, Retail, E-commerce, Oil & Gas, Marketing
🔹 Evangelize the platform, teach, enable, and support customers in building AI solutions
🔹 Collaborate internally with Data Science, Engineering, and Product teams to deliver robust solutions
What We’re Looking For:
✔️ 5–10 years of experience solving complex data problems using Machine Learning
✔️ Expert in ML modeling and Python coding
✔️ Excellent customer-facing communication and presentation skills
✔️ Experience in AI services or startup environments preferred
✔️ Domain expertise in Finance is a plus
✔️ Applied experience with Generative AI / LLM-based solutions is a plus
✨ Why Join Us:
- High-impact opportunity to shape a new business vertical
- Work with next-gen AI technology to solve real enterprise problems
- Backed by top-tier investors with experienced leadership
- Recognized as a Top 5 Data Science & ML platform by G2
- Comprehensive benefits: medical, vision, dental insurance, 401(k)

🌍 We’re Hiring: AI Success & Solutions (Lead / Manager / Director) | Remote | Full-time
Are you passionate about bridging AI technology with real business impact?
Do you enjoy working customer-facing, driving solution delivery, and ensuring measurable outcomes?
We’re looking for specialists in AI Success & Solutions to lead end-to-end delivery of enterprise AI projects. This role blends Solutions Architecture with Customer Success, partnering with clients to turn AI capabilities into tangible business value.
What You’ll Do:
🔹 Partner with Sales & Data Science to understand client challenges and define success criteria
🔹 Design solution architecture and data pipelines tailored to each use case
🔹 Own post-sale execution: manage projects, facilitate deployment, and track measurable outcomes
🔹 Ensure operational excellence with documentation, version control, and team coordination
🔹 Identify growth opportunities and act as a trusted strategic advisor to clients
What We’re Looking For:
✔️ 6+ years in hybrid technical + customer-facing roles (Solutions Architect, Customer Success, Engagement Manager)
✔️ Experience with applied Data Science, ML, GenAI (LLM prompting, RAG)
✔️ Proven ability to deliver AI or data-driven solutions in consulting or startup environments
✔️ Excellent storytelling and business value articulation for technical and executive audiences
✔️ Strong project management, ownership, and attention to detail
✔️ Global client experience preferred; professional fluency in English required
✨ Why Join Us:
- Drive measurable impact for Fortune 500 customers worldwide
- Be part of a category-defining AI company bridging agents and experts
- Own strategic accounts end-to-end and shape modern AI success
- Work with a high-performance, cross-functional team
- Globally competitive compensation & benefits


🚀 We’re Hiring: Senior AI Engineer (Customer Facing) | Remote
Are you passionate about building and deploying enterprise-grade AI solutions?
Do you enjoy combining deep technical expertise with customer-facing problem-solving?
We’re looking for a Senior AI Engineer to design, deliver, and integrate cutting-edge AI/LLM applications for global enterprise clients.
What You’ll Do:
🔹 Partner directly with enterprise customers to understand business requirements & deliver AI solutions
🔹 Architect and integrate intelligent agent systems (LangChain, LangGraph, CrewAI)
🔹 Build LLM pipelines with RAG and client-specific knowledge
🔹 Collaborate with internal teams to ensure seamless integration
🔹 Champion engineering best practices with production-grade Python code
What We’re Looking For:
✔️ 5+ years of hands-on experience in AI/ML engineering or backend systems
✔️ Proven track record with LLMs & intelligent agents
✔️ Strong Python and backend expertise
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS)
✔️ Excellent communication & customer-facing skills
Preferred: Cloud (AWS/Azure/GCP), MLOps knowledge, and startup/AI services experience.
🌍 Remote role | High-impact opportunity | Backed by strong leadership & growth
If this sounds like you (or someone in your network), let’s connect!
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).


Location: Bangalore / Mangalore
Experience required: 2-6 years.
Role Summary:
We are seeking a skilled and motivated AI/ML Developer to join our dynamic team. In this role, you will be instrumental in designing, developing, and deploying machine learning models that solve real-world challenges and drive our business forward.
The ideal candidate has a strong foundation in software engineering, a passion for machine learning, and hands-on experience building and scaling AI-powered solutions.
Key Responsibilities:
• Model Development & Training: Design, build, train, and validate machine learning models using state-of-the-art frameworks like TensorFlow, PyTorch, or Scikit-learn.
• Data Engineering: Develop robust data pipelines for collecting, cleaning, and preprocessing large datasets to prepare them for model training.
• Algorithm Research: Stay current with the latest advancements in AI/ML and apply cutting-edge algorithms and techniques to enhance our solutions.
• Collaboration: Work closely with cross-functional teams, including data scientists, software engineers, and product managers, to translate business requirements into technical solutions.
• Performance Optimization: Analyze and optimize the performance, scalability, and efficiency of ML systems.
Required Skills & Qualifications
• Experience: 2-6 years of professional experience in a software development or machine learning role.
• Programming: High proficiency in Python is essential. Experience with other languages like Java, C++, or Scala is a plus.
• ML Frameworks: Hands-on experience with core machine learning libraries and frameworks such as Scikit-learn, TensorFlow, PyTorch, or Keras.
• Cloud Computing: Proven experience with at least one major cloud platform (AWS, Google Cloud, or Azure) and their associated AI/ML services (e.g., SageMaker, Vertex AI, Azure ML).
• Data Tools: Experience with data processing technologies like SQL, Pandas, NumPy, and Spark.
• Software Engineering: Strong understanding of software development principles, including version control (Git), testing, and CI/CD pipelines.
About the Company:
Pace Wisdom Solutions is a deep-tech product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore.
We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Setup dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market implementations.
Pace Wisdom has been working with Fortune 500 enterprises and growth-stage startups/SMEs since 2012. We also work as an extended tech team, and at times we have even played the role of a virtual CTO. We believe in building lasting relationships, providing value-add every time, and going beyond business.


About the Job
AI/ML Engineer
Experience: 1–5 Years
Salary: Competitive
Preferred Notice Period: Immediate to 30 Days
Opportunity Type: Remote (Global)
Placement Type: Freelance/Contract
(Note: This is a requirement for one of TalentLo’s Clients)
About TalentLo
TalentLo is a revolutionary talent platform connecting exceptional tech professionals with high-quality clients worldwide. We’re building a carefully curated pool of skilled experts to match with companies actively seeking specialized talent for impactful projects.
Role Overview
We’re seeking experienced AI/ML Engineers with 1–5 years of professional experience to design, build, and deploy practical machine learning solutions. This is a freelance/contract opportunity where you’ll work remotely with global clients on innovative AI-driven projects that create real-world business impact.
Responsibilities
- Design and implement end-to-end machine learning solutions
- Select and apply appropriate algorithms/models for business problems
- Build and optimize data pipelines and feature engineering workflows
- Deploy, monitor, and scale ML models in production environments
- Ensure performance optimization and scalability of solutions
- Translate business requirements into ML/AI specifications
- Collaborate with cross-functional teams to integrate ML solutions
- Communicate technical concepts clearly to non-technical stakeholders
Requirements
- 1–5 years of professional AI/ML development experience
- Strong proficiency in Python and ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Hands-on experience deploying models into production environments
- Solid knowledge of feature engineering and data preprocessing techniques
- Experience with cloud ML services (AWS Sagemaker, GCP Vertex AI, Azure ML)
- Understanding of statistical concepts, validation methods, and ML evaluation metrics
- Familiarity with data engineering workflows and data pipelines
- Version control and collaboration experience (Git, GitHub, GitLab)
How to Apply
- Create your profile on TalentLo’s platform → https://www.talentlo.com/signup
- Submit your GitHub, portfolio, or sample AI/ML projects
- Get shortlisted & connect with the client
✨ If you’re ready to work on cutting-edge AI projects, collaborate with global teams, and take your career to the next level — apply today!

Designation: Python Developer
Experienced in AI/ML
Location: Turbhe, Navi Mumbai
CTC: 6-12 LPA
Years of Experience: 2-5 years
At Arcitech.ai, we’re redefining the future with AI-powered software solutions across education, recruitment, marketplaces, and beyond. We’re looking for a Python Developer passionate about AI/ML, who’s ready to work on scalable, cloud-native platforms and help build the next generation of intelligent, LLM-driven products.
💼 Your Responsibilities
AI/ML Engineering
- Develop, train, and optimize ML models using PyTorch/TensorFlow/Keras.
- Build end-to-end LLM and RAG (Retrieval-Augmented Generation) pipelines using LangChain.
- Collaborate with data scientists to convert prototypes into production-grade AI applications.
- Integrate NLP, Computer Vision, and Recommendation Systems into scalable products.
- Work with transformer-based architectures (BERT, GPT, LLaMA, etc.) for real-world AI use cases.
Backend & Systems Development
- Design, develop, and maintain robust Python microservices with REST/GraphQL APIs.
- Implement real-time communication with Django Channels/WebSockets.
- Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
- Configure and manage AWS (EC2, S3, RDS, SageMaker, CloudWatch) for AI/ML workloads.
Reliability & Automation
- Develop background task queues with Celery, ensuring smart retries and monitoring.
- Implement CI/CD pipelines for automated model training, testing, and deployment.
- Write automated unit & integration tests (pytest/unittest) with ≥80% coverage.
Collaboration
- Contribute to MLOps best practices and mentor peers in LangChain/AI integration.
- Participate in tech talks, code reviews, and AI learning sessions within the team.
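To ground the RAG responsibilities above, the sketch below shows just the retrieval step, with a toy bag-of-words embedding standing in for a real embedding model. In the actual stack this would be LangChain plus a vector store (FAISS, Pinecone, Weaviate, etc.); everything here is illustrative.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: bag-of-words term counts.

    A real RAG pipeline would call a learned embedding model here
    (e.g. via LangChain's embeddings interface).
    """
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter lookups default to 0 for missing terms, so iterating
    # over one side covers the full dot product.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k documents most similar to the query:
    the retrieval step whose results are stuffed into the LLM prompt."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

For instance, `retrieve("cat on mat", corpus)` surfaces the document sharing the most query terms; swapping `embed` for dense embeddings and `sorted` for an approximate-nearest-neighbor index is what turns this toy into a production pipeline.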
🎓 Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI/ML, or related field.
- 2–5 years of experience in Python development with strong AI/ML exposure.
- Hands-on experience with LangChain for building LLM-powered workflows and RAG systems.
- Deep learning experience with PyTorch or TensorFlow.
- Experience deploying ML models and LLM apps into production systems.
- Familiarity with REST/GraphQL APIs and cloud platforms (AWS/Azure/GCP).
- Skilled in Git workflows, automated testing, and CI/CD practices.
🌟 Nice to Have
- Experience with vector databases (Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
- Knowledge of LLM fine-tuning, prompt engineering, and evaluation frameworks.
- Familiarity with Airflow/Prefect/Dagster for data and model pipelines.
- Background in statistics, optimization, or applied mathematics.
- Contributions to AI/ML or LangChain open-source projects.
- Experience with model monitoring and drift detection in production.
🎁 Why Join Us
- Competitive compensation and benefits 💰
- Work on cutting-edge LLM and AI/ML applications 🤖
- A collaborative, innovation-driven work culture 📚
- Opportunities to grow into AI/ML leadership roles 🚀

Form: https://forms.gle/ncGqEJrJDvEDhXtL7
AI Researcher, Speech & Audio, Intern
Internship Opportunity at JoshTalks AI Lab (ai.joshtalks.com)
Location: Gurgaon, India
Type: Full-time Internship (6–12 months)
Who: Final-year engineering students or recent graduates passionate about AI/ML in speech
About Us
At JoshTalks AI Lab, we believe that voice will be the primary medium of interaction between man and machine. Our mission is simple yet ambitious:
● Help machines talk like humans.
● Build the benchmarks and datasets that become the backbone of global progress in speech AI.
● Drive improvements not just through compute or algorithms, but through high-quality, diverse, real-world data.
Our datasets today power some of the largest and most widely used speech models in the world (you’ve definitely used them, even if we can’t name them 😉).
What You’ll Work On
This is not “just another internship.” You’ll be contributing directly to the global race to perfect speech AI:
1. Benchmarking the world’s speech models
○ Design and run evaluations for ASR and speech-to-speech systems.
○ Create benchmarks that will guide top AI labs on where their models fail and where they shine.
2. Modeling & Fine-Tuning
○ Fine-tune speech recognition systems (like Whisper/wav2vec2) to push Word Error Rates toward ~5%.
○ Experiment with multilingual, code-switched, and noisy speech to mimic real-world conditions.
3. Impact at Scale
○ Your work won’t just sit in a paper. It will influence how the world’s largest AI models get built, tested, and improved.
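The ~5% Word Error Rate target mentioned above is scored with a standard word-level Levenshtein alignment. A minimal reference implementation looks like this; real evaluation harnesses add text normalization (casing, punctuation, number formats) that this sketch omits.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed via edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        return float(len(hyp) > 0)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Dropping one word from a six-word reference gives a WER of 1/6 ≈ 16.7%; pushing a model toward the 5% target means roughly one word error per twenty reference words.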
Who We’re Looking For
● Final-year undergraduates (B.Tech/B.E.) in CSE, EE, AI/ML, or related fields.
● Strong interest in speech, audio, NLP, or multimodal AI.
● Hands-on experience in one or more of:
○ Fine-tuning speech or language models (Whisper, wav2vec2, HuBERT, SER, etc.)
○ Building speech-driven projects (assistants, classifiers, chatbots, SER systems)
○ Working with PyTorch, TensorFlow, or Hugging Face transformers.
● Bonus: past projects on GitHub, Kaggle, or research papers.
Why Join Us
● Ownership: Even as a final-year student, you’ll get the chance to own problems of global importance, from reducing ASR word error rates toward 5% to building benchmarks that influence how the next generation of speech-to-speech models is developed. These are not side projects: the problems you’ll work on may define how billions of people interact with machines in the future.
● Front-row seat in speech AI: Your work will shape benchmarks and datasets used by the world’s top model labs.
● Learning: Work with experts solving speech challenges across 20+ Indian languages and noisy, real-world audio.
● Impactful projects: The benchmarks and models you help build will set direction for global AI progress.
● Startup energy, global scale: A small team with big impact, perfect for ambitious builders.
● Co-Authorship: If any of the work you contribute to is published as a paper, benchmark report, or dataset release, you will be credited as a co-author. Your contributions won’t just stay inside the lab; they’ll be visible to the wider research community and part of the academic and industry record.
Details
● Location: Gurgaon (on-site preferred for collaboration)
● Duration: 6–12 months
● Type: Paid Internship (full-time)
● Start Date: Flexible for final-year students (aligns with academic calendar)
If you’re someone who dreams of making speech AI as natural as human conversation, this is your chance to work on the real frontier. Super interested?


What We’re Looking For
As a Senior AI/ML Engineer at Meltwater, you’ll play a vital role in building cutting-edge social solutions for our global client base within the Explore mission. We’re seeking a proactive, quick-learning engineer who thrives in a collaborative environment.
Our culture values continuous learning, team autonomy, and a DevOps mindset. Meltwater development teams take full ownership of their subsystems and infrastructure, including running on-call rotations.
With a heavy reliance on Software Engineering in AI/ML and Data Science, we seek individuals with experience in:
- Cloud infrastructure and containerisation (Docker, Azure or AWS – Azure preferred)
- Data preparation
- Model lifecycle (training, serving, registries)
- Natural Language Processing (NLP) and Large Language Models (LLMs)
In this role, you’ll have the opportunity to:
- Push the boundaries of our technology stack
- Modify open-source libraries
- Innovate with existing technologies
- Work on distributed systems at scale
- Extract insights from vast amounts of data
What You’ll Do
- Lead and mentor a small team while doing hands-on coding.
- Demonstrate excellent communication and collaboration skills.
What You’ll Bring
- Bachelor’s or Master’s degree in Computer Science (or equivalent) OR demonstrable experience.
- Proven experience as a Lead Software Engineer in AI/ML and Data Science.
- 8+ years of working experience.
- 2+ years of leadership experience as Tech Lead or Team Lead.
- 5+ years strong knowledge of Python and software engineering principles.
- 5+ years strong knowledge of cloud infrastructure and containerization.
- Docker (required).
- Azure or AWS (required, Azure preferred).
- 5+ years strong working knowledge of TensorFlow / PyTorch.
- 3+ years good working knowledge of ML-Ops principles.
- Data preparation.
- Model lifecycle (training, serving, registries).
- Theoretical knowledge of AI / Data Science in one or more of:
- Natural Language Processing (NLP) and LLMs
- Neural Networks
- Topic modelling and clustering
- Time Series Analysis (TSA): anomaly detection, trend analysis, forecasting
- Retrieval Augmented Generation
- Speech to Text
- Excellent communication and collaboration skills.
What We Offer
- Flexible paid time off options for enhanced work-life balance.
- Comprehensive health insurance tailored for you.
- Employee assistance programs covering mental health, legal, financial, wellness, and behavioural support.
- Complimentary Calm App subscription for you and your loved ones.
- Energetic work environment with a hybrid work style.
- Family leave program that grows with your tenure.
- Inclusive community with professional development opportunities.
Our Story
At Meltwater, we believe that when you have the right people in the right environment, great things happen.
Our best-in-class technology empowers 27,000 customers worldwide to make better business decisions through data. But we can’t do that without our global team of developers, innovators, problem-solvers, and high-performers who embrace challenges and find new solutions.
Our award-winning global culture drives everything we do. Employees can make an impact, learn every day, feel a sense of belonging, and celebrate successes together.
We are innovators at the core who see potential in people, ideas, and technologies. Together, we challenge ourselves to go big, be bold, and build best-in-class solutions.
- 2,200+ employees
- 50 locations across 25 countries
We are Meltwater. We love working here, and we think you will too.
"Inspired by innovation, powered by people."


What You’ll Do:
As an AI/ML Engineer at Meltwater, you’ll play a vital role in building cutting-edge social solutions for our global client base within the Explore mission. We’re seeking a proactive, quick-learning engineer who thrives in a collaborative environment. Our culture values continuous learning, team autonomy, and a DevOps mindset.
Meltwater development teams take full ownership of their subsystems and infrastructure, including running on-call rotations. With a heavy reliance on Software Engineering in AI/ML and Data Science, we seek individuals with experience in:
- Cloud infrastructure and containerization (Docker, Azure or AWS is required; Azure is preferred)
- Data Preparation
- Model Lifecycle (training, serving, and registries)
- Natural Language Processing (NLP) and LLMs
In this role, you’ll have the opportunity to push the boundaries of our technology stack, from modifying open-source libraries to innovating with existing technologies. If you’re passionate about distributed systems at scale and finding new ways to extract insights from vast amounts of data, we invite you to join us in this exciting journey.
What You’ll Bring:
- Bachelor’s or master’s degree in computer science (or equivalent), or demonstrable experience.
- Proven experience as a Software Engineer in AI/ML and Data Science.
- Minimum of 2-4 years of working experience.
- Strong working experience in Python and software engineering principles (2+ Years).
- Experience with cloud infrastructure and containerization (1+ Years).
- Docker is required.
- Experience with TensorFlow / PyTorch (2+ Years).
- Experience with ML-Ops Principles (1+ Years).
- Data Preparation
- Model Lifecycle (training, serving, and registries)
- Sound knowledge of at least one cloud (AWS/Azure).
- Good theoretical knowledge of AI / Data Science in one or more of the following areas:
- Natural Language Processing (NLP) and LLMs
- Neural Networks
- Topic Modelling and Clustering
- Time Series Analysis (TSA), including anomaly detection, trend analysis, and forecasting
- Retrieval Augmented Generation
- Speech to Text
- Excellent communication and collaboration skills
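The Retrieval Augmented Generation and embeddings items above boil down to nearest-neighbour retrieval over vectors. A minimal sketch, assuming toy hand-written vectors in place of real embedding-model outputs and no external libraries:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 3-dimensional "embeddings" standing in for a real model's output.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(retrieve(query, docs, k=2))  # → [0, 1]
```

A production RAG system would swap the plain lists for a vector database and real embedding vectors, then feed the retrieved documents to an LLM; the ranking logic itself is the same.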
What We Offer:
- Enjoy comprehensive paid time off options for enhanced work-life balance.
- Comprehensive health insurance tailored for you.
- Employee assistance programs covering mental health, legal, financial, wellness, and behaviour areas to ensure your overall well-being.
- Energetic work environment with a hybrid work style, providing the balance you need.
- Benefit from our family leave program, which grows with your tenure at Meltwater.
- Thrive within our inclusive community and seize ongoing professional development opportunities to elevate your career.
Where You’ll Work:
HITEC City, Hyderabad.
Our Story:
The sky is the limit at Meltwater.
At Meltwater, we believe that when you have the right people in the right working environment, great things happen. Our best-in-class technology empowers our 27,000 customers around the world to analyse over a billion pieces of data each day and make better business decisions.
Our award-winning culture is our north star and drives everything we do – from striving to create an environment where all employees do their best work, to delivering customer value by continuously innovating our products — and making sure to celebrate our successes and have fun along the way.
We’re proud of our diverse team of 2,300+ employees in 50 locations across 25 countries around the world. No matter where you are, you’ll work with people who care about your success and get the support you need to reach your goals.
So, in a nutshell, that’s Meltwater. We love working here, and we think you will too.

About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
About the Role
We are looking for a highly motivated Innovation Engineer to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vertex AI, MCP, Vector Databases, AI Search, Agentic AI, and Automation.
As an Innovation Engineer, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
- Research & Implementation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, Vertex AI, MCP, and Automation.
- Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
- AI/ML Engineering: Design and develop AI/ML models, AI agents, LLMs, and intelligent search capabilities leveraging vector embeddings.
- Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.
- Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
- Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
- Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
Required Qualifications
- 4–10 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini, VertexAI, MCP.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), MCP and agentic AI (Vertex, Autogen, ADK)
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About Our Benefits
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, career development, advancement opportunities, annual merit, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.


Role Overview
We're looking for a highly motivated AI Intern to join our dynamic team. This isn't your average internship; you'll be diving headfirst into the exciting world of Natural Language Processing (NLP) and Large Language Models (LLMs). You will work on real-world projects, contributing directly to our core products by researching, developing, and fine-tuning state-of-the-art language models. If you're passionate about making machines understand and generate human language, this is the perfect role for you!
Skills Needed:
- Python
- ML
- NLP
- LLM Fine-Tuning
- FastAPI
What You'll Do (Key Responsibilities)
Develop & Implement LLM Models: Assist in building and deploying LLM solutions for tasks like sentiment analysis, text summarization, named entity recognition (NER), and question-answering.
Fine-Tune LLMs: Work hands-on with pre-trained Large Language Models (like Llama, GPT, BERT) and fine-tune them on our custom datasets to enhance performance for specific tasks.
Data Pipeline Management: Be responsible for data preprocessing, cleaning, and augmentation to create high-quality datasets for training and evaluation.
Experiment & Evaluate: Research and experiment with different model architectures and fine-tuning strategies (e.g., LoRA, QLoRA) to optimize for accuracy, speed, and cost.
Collaborate & Document: Work closely with our senior ML engineers and data scientists, actively participating in code reviews and clearly documenting your findings and methodologies.
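The fine-tuning strategies mentioned above (LoRA/QLoRA) rest on one idea: freeze the pretrained weight matrix W and learn a low-rank update BA instead. A minimal numerical sketch of that idea, assuming only NumPy (not the actual `peft` library API):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2             # layer dims and low rank r << min(d, k)
W = rng.normal(size=(d, k))   # frozen pretrained weight

# Trainable low-rank factors; B starts at zero so training begins exactly at W.
A = rng.normal(size=(r, k))
B = np.zeros((d, r))
alpha = 16                    # scaling hyperparameter, as in the LoRA paper

def adapted_forward(x):
    """y = x @ (W + (alpha / r) * B @ A).T -- only A and B receive gradients."""
    delta = (alpha / r) * B @ A
    return x @ (W + delta).T

x = rng.normal(size=(1, k))
# With B = 0 the adapted layer is identical to the frozen one.
assert np.allclose(adapted_forward(x), x @ W.T)

# Parameter savings: r * (d + k) trainable values instead of d * k.
print(r * (d + k), "trainable vs", d * k, "frozen")  # 32 trainable vs 64 frozen
```

In practice a library such as Hugging Face `peft` wires these low-rank factors into chosen transformer layers; QLoRA adds 4-bit quantization of the frozen weights on top.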
Must-Have Skills (Qualifications)
Strong Python Proficiency: You live and breathe Python and are comfortable with its data science ecosystem (Pandas, NumPy, Scikit-learn).
Solid ML & NLP Fundamentals: A strong theoretical understanding of machine learning algorithms, deep learning concepts, and core NLP techniques (e.g., tokenization, embeddings, attention mechanisms).
Deep Learning Frameworks: Hands-on experience with either PyTorch or TensorFlow.
Familiarity with LLMs: You understand the basics of transformer architecture and have some exposure to working with or fine-tuning Large Language Models.
Problem-Solving Mindset: An analytical and curious approach to tackling complex challenges.
Educational Background: Currently pursuing or recently graduated with a degree in Computer Science, AI, Data Science, or a related technical field.
Brownie Points For (Preferred Skills)
Experience with the Hugging Face ecosystem (Transformers, Datasets, Tokenizers).
A portfolio of personal or academic projects on GitHub showcasing your AI/ML skills.
Familiarity with vector databases (e.g., Pinecone, ChromaDB).

🚀 **We're Hiring: Senior Developer (AI & Machine Learning)** 🚀
🔧 **Tech Stack**: Python, Neo4j, FAISS, LangChain, React.js, AWS/GCP/Azure
🧠 **Role**: AI/ML development, backend architecture, cloud deployment
🌍 **Location**: Remote (India)
💼 **Experience**: 5-10 years
If you're passionate about making an impact in EdTech and want to help shape the future of learning with AI, we want to hear from you!


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer-review guidelines for high-quality coding to help develop junior team members’ skills, support cross-training, and improve team efficiency
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 3.5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
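The A/B testing expected of candidates above typically comes down to a two-proportion z-test on conversion rates. A minimal stdlib sketch with toy numbers (not client data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing the conversion rates of two marketing variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 120/1000 converted; variant B: 100/1000 converted.
z = two_proportion_z(120, 1000, 100, 1000)
print(round(z, 2))  # positive z favors A; |z| > 1.96 is significant at the 5% level
```

The same statistic underlies most campaign holdout comparisons; larger samples shrink the standard error and make smaller lifts detectable.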


Company name: JPMorgan (JPMC)
Job Category: Predictive Science
Location: Parcel 9, Embassy Tech Village, Outer Ring Road, Deverabeesanhalli Village, Varthur Hobli, Bengaluru
Job Schedule: Full time
JOB DESCRIPTION
JPMC is hiring the best talent to join the growing Asset and Wealth Management AI team. We are executing like a startup, building next-generation technology that combines JPMC’s unique data and full-service advantage to develop high-impact AI applications and platforms in the financial services industry. We are looking for a hands-on ML engineering leader and expert who is excited about this opportunity.
As a senior ML and GenAI engineer, you will play a lead role as a senior member of our global team. Your responsibilities will entail hands-on development of high-impact business solutions through data analysis, developing cutting-edge ML and LLM models, and deploying these models to production environments on AWS or Azure.
You'll combine your years of proven development expertise with a never-ending quest to create innovative technology through solid engineering practices. Your passion and experience in one or more technology domains will help solve complex business problems to serve our Private Bank clients. As a constant learner and early adopter, you’re already embracing leading-edge technologies and methodologies; your example encourages others to follow suit.
Job responsibilities
• Hands-on architecture and implementation of lighthouse ML and LLM-powered solutions
• Close partnership with peers in a geographically dispersed team and colleagues across organizational lines
• Collaborate across JPMorgan AWM’s lines of business and functions to accelerate adoption of common AI capabilities
• Design and implement highly scalable and reliable data processing pipelines and deploy model inference services.
• Deploy solutions into public cloud infrastructure
• Experiment, develop and productionize high quality machine learning models, services, and platforms to make a huge technology and business impact
Required qualifications, capabilities, and skills
• Formal training or certification on software engineering concepts and 5+ years applied experience
• MS in Computer Science, Statistics, Mathematics or Machine Learning.
• Development experience, along with hands-on Machine Learning Engineering
• Proven leadership capacity, including new AI/ML idea generation and GenAI-based solutions
• Solid Python programming skills required; experience with another high-performance language such as Go is a big plus
• Expert knowledge of one of the cloud computing platforms preferred: Amazon Web Services (AWS), Azure, Kubernetes.
• Experience in using LLMs (OpenAI, Claude or other models) to solve business problems, including full workflow toolset, such as tracing, evaluations and guardrails. Understanding of LLM fine-tuning and inference a plus
• Knowledge of data pipelines, both batch and real-time data processing on both SQL (such as Postgres) and NoSQL stores (such as OpenSearch and Redis)
• Expertise in application, data, and infrastructure architecture disciplines
• Deep knowledge in Data structures, Algorithms, Machine Learning, Data Mining, Information Retrieval, Statistics.
• Excellent communication skills and ability to communicate with senior technical and business partners
Preferred qualifications, capabilities, and skills
• Expert in at least one of the following areas: Natural Language Processing, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis.
• Knowledge of machine learning frameworks: Pytorch, Keras, MXNet, Scikit-Learn
• Understanding of finance or wealth management businesses is an added advantage
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
ABOUT THE TEAM
J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.

Job Description – Machine Learning Expert
Role: Machine Learning Expert
Experience: 6+ Years
Location: Bangalore
Education: B.Tech Degree (Computer Science / Information Technology / Data Science / related fields)
Work Mode: Hybrid – 3 Days Office + 3 Days Work from Home
Interview Mode: Candidate must be willing to attend Face-to-Face (F2F) L2 round at Bangalore location
About the Role
We are seeking a highly skilled Machine Learning Expert with a strong background in building, training, and deploying AI/ML models. The ideal candidate will bring hands-on expertise in designing intelligent systems that leverage advanced algorithms, deep learning, and data-driven insights to solve complex business challenges.
Key Responsibilities
- Develop and implement machine learning and deep learning models for real-world business use cases.
- Perform data cleaning, preprocessing, feature engineering, and model optimization.
- Research, design, and apply state-of-the-art ML techniques across domains such as NLP, Computer Vision, or Predictive Analytics.
- Collaborate with data engineers and software developers to ensure seamless end-to-end ML solution deployment.
- Deploy ML models to production environments and monitor performance for scalability and accuracy.
- Stay updated with the latest advancements in Artificial Intelligence and Machine Learning frameworks.
Required Skills & Qualifications
- B.Tech degree in Computer Science, Information Technology, Data Science, or related discipline.
- 6+ years of hands-on experience in Machine Learning, Artificial Intelligence, and Deep Learning.
- Strong expertise in Python and ML frameworks such as TensorFlow, PyTorch, Scikit-learn, Keras.
- Solid foundation in mathematics, statistics, algorithms, and probability.
- Experience in working with NLP, Computer Vision, Recommendation Systems, or Predictive Modeling.
- Knowledge of cloud platforms (AWS / GCP / Azure) for model deployment.
- Familiarity with MLOps tools and practices for lifecycle management.
- Excellent problem-solving skills and the ability to work in a collaborative environment.
Preferred Skills (Good to Have)
- Experience with Big Data frameworks (Hadoop, Spark).
- Exposure to Generative AI, LLMs (Large Language Models), and advanced AI research.
- Contributions to open-source projects, publications, or patents in AI/ML.
Work Mode & Interview Process
- Hybrid Model: 3 days in office (Bangalore) + 3 days remote.
- Interview: Candidate must be available for Face-to-Face L2 interview at Bangalore location.

Role: GenAI Full Stack Engineer
Fulltime
Work Location: Remote
Job Description:
• Strong proficiency in Python and familiarity with AI/Gen AI frameworks. Experience with data manipulation libraries like Pandas and NumPy is crucial.
• Specific expertise in implementing and managing large language models (LLMs) is a must.
• FastAPI experience for API development
• A solid grasp of software engineering principles, including version control (Git), continuous integration and continuous deployment (CI/CD) practices, and automated testing, is required. Experience in MLOps, ML engineering, and Data Science, with a proven track record of developing and maintaining AI solutions, is essential.
• We also need proficiency in DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform, along with advanced CI/CD practices.
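One recurring concern when serving LLMs behind an API, implied by the MLOps requirements above, is tolerating flaky model endpoints. A minimal sketch of retry with exponential backoff, where `flaky_llm_call` is a hypothetical stub standing in for any real SDK call:

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    """Invoke a flaky callable with exponential backoff; re-raise on final failure."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 1x, 2x, 4x... the base delay

# Hypothetical stub: fails twice, then succeeds (no real LLM SDK is used here).
state = {"calls": 0}
def flaky_llm_call():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("model endpoint busy")
    return "completion text"

print(with_retries(flaky_llm_call))  # → completion text
```

In a FastAPI service the same wrapper would sit between the route handler and the model client, usually with jitter added to the delays to avoid synchronized retries.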
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.
Title: Data Platform / Database Architect (Postgres + Kafka) — AI‑Ready Data Infrastructure
Location: Noida (Hybrid). Remote within IST±3 considered for exceptional candidates.
Employment: Full‑time
About Us
We are building a high‑throughput, audit‑friendly data platform that powers a SaaS for financial data automation and reconciliation. The stack blends OLTP (Postgres), streaming (Kafka/Debezium), and OLAP (ClickHouse/Snowflake/BigQuery), with hooks for AI use‑cases (vector search, feature store, RAG).
Role Summary
Own the end‑to‑end design and performance of our data platform—from multi‑tenant Postgres schemas to CDC pipelines and analytics stores—while laying the groundwork for AI‑powered product features.
What You’ll Do
• Design multi‑tenant Postgres schemas (partitioning, indexing, normalization, RLS), and define retention/archival strategies.
• Make Postgres fast and reliable: EXPLAIN/ANALYZE, connection pooling, vacuum/bloat control, query/index tuning, replication.
• Build event‑streaming/CDC with Kafka/Debezium (topics, partitions, schema registry), and deliver data to ClickHouse/Snowflake/BigQuery.
• Model analytics layers (star/snowflake), orchestrate jobs (Airflow/Dagster), and implement dbt‑based transformations.
• Establish observability and SLOs for data: query/queue metrics, tracing, alerting, capacity planning.
• Implement data security: encryption, masking, tokenization of PII, IAM boundaries; contribute to PCI‑like audit posture.
• Integrate AI plumbing: vector embeddings (pgvector/Milvus), basic feature‑store patterns (Feast), retrieval pipelines and metadata lineage.
• Collaborate with backend/ML/product to review designs, coach engineers, write docs/runbooks, and lead migrations.
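The CDC pipelines above usually run with at-least-once delivery, so consumers must tolerate redelivered messages. A minimal sketch of an idempotent consumer keyed on message IDs (toy in-memory state; a real system would persist the seen-ID set and the balances transactionally):

```python
# At-least-once delivery means consumers may see duplicates; making the
# handler idempotent (keyed on a message ID) restores effectively-once results.

processed = set()
balance = {"acct-1": 0}

def handle(msg):
    """Apply a credit exactly once per message ID, even if redelivered."""
    if msg["id"] in processed:
        return  # duplicate delivery: skip
    processed.add(msg["id"])
    balance[msg["account"]] += msg["amount"]

stream = [
    {"id": "m1", "account": "acct-1", "amount": 100},
    {"id": "m2", "account": "acct-1", "amount": 50},
    {"id": "m1", "account": "acct-1", "amount": 100},  # redelivered duplicate
]
for msg in stream:
    handle(msg)

print(balance["acct-1"])  # → 150
```

This is the consumer-side half of the exactly-once/at-least-once tradeoff: Kafka transactions give exactly-once within Kafka, but sinks like a reconciliation ledger still need idempotent writes.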
Must‑Have Qualifications
• 6+ years building high‑scale data platforms with deep PostgreSQL experience (partitioning, advanced indexing, query planning, replication/HA).
• Hands‑on with Kafka (or equivalent) and Debezium/CDC patterns; schema registry (Avro/Protobuf) and exactly‑once/at‑least‑once tradeoffs.
• One or more analytics engines at scale: ClickHouse, Snowflake, or BigQuery, plus strong SQL.
• Python for data tooling (pydantic, SQLAlchemy, or similar); orchestration with Airflow or Dagster; transformations with dbt.
• Solid cloud experience (AWS/GCP/Azure)—networking, security groups/IAM, secrets management, cost controls.
• Pragmatic performance engineering mindset; excellent communication and documentation.
Nice‑to‑Have
• Vector/semantic search (pgvector/Milvus/Pinecone), feature store (Feast), or RAG data pipelines.
• Experience in fintech‑style domains (reconciliation, ledgers, payments) and SOX/PCI‑like controls.
• Infra‑as‑Code (Terraform), containerized services (Docker/K8s), and observability stacks (Prometheus/Grafana/OpenTelemetry).
• Exposure to Go/Java for stream processors/consumers.
• Lakehouse formats (Delta/Iceberg/Hudi).

Responsibilities:
● Design, develop, and maintain scalable backend services and APIs using Java and Spring Boot.
● Create and optimize SQL database schemas and queries in PostgreSQL to ensure efficient data storage and retrieval.
● Implement RESTful APIs to facilitate seamless communication between frontend and backend components.
● Configure and manage Nginx web servers to efficiently handle incoming requests and improve application performance.
● Deploy and manage applications on AWS or GCP, ensuring scalability, reliability, and security.
● Configure and optimize message broker systems using Kafka for real-time data processing and communication.
● Containerize applications using Docker for easy deployment, scaling, and management.
● Create detailed Low-Level Designs (LLDs) and High-Level Designs (HLDs) to guide the development and architecture of backend systems.
● Automate CI/CD pipelines and streamline the software development lifecycle.
● Integrate AI/ML models into backend workflows using Python, PyTorch/TensorFlow, or third-party AI APIs.
● Leverage AI tools (e.g., OpenAI APIs, Hugging Face, AWS AI services) to build intelligent features.
● Collaborate closely with frontend developers, product managers, data scientists, and other stakeholders to deliver high-quality AI-powered solutions.
● Monitor and troubleshoot production systems to ensure optimal performance, reliability, and uptime.
What We’re Looking For:
● Bachelor’s degree in Computer Science, Engineering, or related field.
● 3-5 years of experience in backend development.
● Proficiency in Java, Spring Boot, PostgreSQL, SQL, and GitHub Actions.
● Strong understanding of RESTful API design principles and best practices.
● Experience with configuring and optimizing Nginx web servers.
● Experience with configuring and optimizing Kafka services.
● Hands-on experience with AWS or GCP.
● Familiarity with Docker containers and container orchestration.
● Ability to create comprehensive Low-Level Designs (LLDs) and High-Level Designs (HLDs) for backend systems.
● Experience with Python for AI/ML model integration in backend services.
● Familiarity with AI platforms and APIs such as OpenAI, Hugging Face, AWS AI/ML, or GCP Vertex AI.
● Excellent problem-solving skills and attention to detail.
● Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Preferred Qualifications:
● Knowledge of microservices architecture and related technologies.
● Experience with cloud-native development and serverless computing.
● Understanding of software development best practices, including Agile methodologies.

We are seeking a talented Full Stack Developer to design, build, and maintain scalable web and mobile applications. The ideal candidate should have hands-on experience in frontend (React.js, Flutter), backend (Node.js, Express), databases (PostgreSQL, MongoDB), and Python for AI/ML integration. You will work closely with the engineering team to deliver secure, high-performance, and user-friendly products.
Key Responsibilities
- Develop responsive and dynamic web applications using React.js and modern UI frameworks.
- Build and optimize REST APIs and backend services with Node.js and Express.js.
- Design and manage PostgreSQL and MongoDB databases, ensuring optimized queries and data modeling.
- Implement state management using Redux/Context API.
- Ensure API security with JWT, OAuth2, Helmet.js, and rate-limiting.
- Integrate Google Cloud services (GCP) for hosting, storage, and serverless functions.
- Deploy and maintain applications using CI/CD pipelines, Docker, and Kubernetes.
- Use Redis for caching, sessions, and job queues.
- Optimize frontend performance (lazy loading, code splitting, caching strategies).
- Collaborate with design, QA, and product teams to deliver high-quality features.
- Maintain clear documentation and follow coding standards.
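The rate-limiting responsibility above is commonly implemented as a token bucket. A minimal in-memory sketch (a real deployment would back this with Redis and per-client keys rather than a single local bucket):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter (the idea behind API rate limiting)."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1)
# Two requests at t=0 pass, the third is throttled; by t=1 a token has refilled.
print([bucket.allow(0), bucket.allow(0), bucket.allow(0), bucket.allow(1)])
# → [True, True, False, True]
```

Middleware such as Express's rate-limit packages applies the same bucket (or a sliding window) per API key or IP before the request reaches the handler.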

We are looking for a highly skilled Senior Full Stack Developer / Tech Lead to drive end-to-end development of scalable, secure, and high-performance applications. The ideal candidate will have strong expertise in React.js, Node.js, PostgreSQL, MongoDB, Python, AI/ML, and Google Cloud platforms (GCP). You will play a key role in architecture design, mentoring developers, ensuring best coding practices, and integrating AI/ML solutions into our products.
This role requires a balance of hands-on coding, system design, cloud deployment, and leadership.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using React.js, Node.js, PostgreSQL, and MongoDB.
- Build, consume, and optimize REST APIs and GraphQL services.
- Develop AI/ML models with Python and integrate them into production systems.
- Implement CI/CD pipelines, containerization (Docker, Kubernetes), and cloud deployments (GCP/AWS).
- Manage security, authentication (JWT, OAuth2), and performance optimization.
- Use Redis for caching, session management, and queue handling.
- Lead and mentor junior developers, conduct code reviews, and enforce coding standards.
- Collaborate with cross-functional teams (product, design, QA) for feature delivery.
- Monitor and optimize system performance, scalability, and cost-efficiency.
- Own technical decisions and contribute to long-term architecture strategy.

Supply Wisdom is a global leader in transformative risk intelligence, offering real-time insights to drive business growth, reduce costs, enhance security and compliance, and identify revenue opportunities. Our AI-based SaaS products cover various risk domains, including financial, cyber, operational, ESG, and compliance. With a diverse workforce that is 57% female, our clients include Fortune 100 and Global 2000 firms in sectors like financial services, insurance, healthcare, and technology.
Objectives: The Technical Writer will collaborate closely with subject matter experts to create clear and comprehensive technical documentation for end users, covering product and technology policies, plans, procedures, user guides, manuals, release notes, and knowledge base articles. This role demands adept communication skills to convey complex technical information effectively to various stakeholders, ensuring documentation is comprehensive, accurate, and easily understood by developers, engineers, and non-technical team members.
Responsibilities
- Develop, write, and maintain comprehensive documentation, including product specifications, policies, procedures, and user guides tailored to Supply Wisdom’s risk intelligence solutions.
- Collaborate with stakeholders and subject matter experts to gather information, ensuring accuracy and completeness of documentation.
- Create and update templates for documentation, ensuring consistency and standardization.
- Review, edit, and manage documentation projects from start to finish, meeting deadlines and quality standards.
- Stay updated on industry trends and best practices, particularly in AI, Machine Learning, and data science, and conduct regular audits of existing documentation.
- Use appropriate tools and software for documentation and provide training and support to team members.
- Understand and document complex technical concepts related to AI, Machine Learning, and data science, creating user-friendly content.
- Produce release notes and technical documentation for software updates and new features, maintaining version control.
- Develop and maintain knowledge base articles and FAQs to support end users, while evaluating and improving current content.
Requirements
- Bachelor’s degree in Technical Writing, English, Communications, Computer Science, or related field.
- 3-5 years of experience as a Technical Writer, preferably in technology or software development.
- Excellent written and verbal communication skills, with meticulous attention to detail and ability to comprehend complex technical information.
- Strong understanding of technology and software development processes, including AI, Machine Learning, and data science, and proficiency in documentation tools such as Microsoft Word, Adobe Acrobat, or similar software.
- Strong organizational skills to manage multiple projects effectively, with experience in Agile development environments and familiarity with version control systems like Git.
- Proven research skills and ability to grasp new technologies and concepts rapidly, along with proficient problem-solving skills and critical thinking abilities.
- Two or more years of effective technical writing experience, including writing for diverse audiences and collaborating with engineers to enhance user experience and create technical visuals.
- Basic knowledge of Software Development Processes, SaaS, and Cloud, with capacity to manage multiple projects concurrently.
We offer a flexible and vibrant work environment, a global team filled with passionate and fun-loving people coming from diverse cultures and backgrounds.
If you are looking to make an impact in delivering market-leading risk management solutions, empowering our clients, and making the world a better place, then Supply Wisdom is the place for you. You can learn more at supplywisdom.com and on LinkedIn.




Lead Data Scientist
Location: Mumbai
Application Link: https://flpl.keka.com/careers/jobdetails/40052
What you’ll do
- Manage end-to-end data science projects from scoping to deployment, ensuring accuracy, reliability and measurable business impact
- Translate business needs into actionable DS tasks, lead data wrangling, feature engineering, and model optimization
- Communicate insights to non-technical stakeholders to guide decisions while mentoring a 14-member DS team.
- Implement scalable MLOps, automated pipelines, and reusable frameworks to accelerate delivery and experimentation
What we’re looking for
- 4-5 years of hands-on experience in Data Science/ML with strong foundations in statistics, Linear Algebra, and optimization
- Proficient in Python (NumPy, pandas, scikit-learn, XGBoost) and experienced with at least one cloud platform (AWS, GCP, or Azure)
- Skilled in building data pipelines (Airflow, Spark) and deploying models using Docker, FastAPI, etc.
- Adept at communicating insights effectively to both technical and non-technical audiences
- Bachelor’s degree in any field
You might have an edge over others if
- Experience with LLMs or GenAI apps
- Contributions to open-source or published research
- Exposure to real-time analytics and industrial datasets
You should not apply with us if
- You don’t want to work in agile environments
- The unpredictability and super iterative nature of startups scare you
- You hate working with people who are smarter than you
- You don’t thrive in self-driven, “owner mindset” environments (nothing wrong, just not our type!)
About us
We’re Faclon Labs – a high-growth, deep-tech startup on a mission to make infrastructure and utilities smarter using IoT and SaaS. Sounds heavy? That’s because we do heavy lifting — in tech, in thinking, and in creating real-world impact.
We’re not your average startup. We don’t do corporate fluff. We do ownership, fast iterations, and big ideas. If you're looking for ping-pong tables, we're still saving up. But if you want to shape the soul of the company while it's being built- this is the place!

Senior Cloud & ML Infrastructure Engineer
Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi
Experience: 6–10+ Years
Night Shift - 9 pm to 6 am
About the Role:
We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
Key Responsibilities:
● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR
● Design and implement multi-region, highly available AWS solutions for real-time inference and batch processing
● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK
● Establish CI/CD practices for ML model packaging, validation, and drift monitoring
● Oversee infrastructure security, including IAM policies, encryption at rest/in transit, and compliance standards
● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale
● Collaborate on data lake and analytics integration
● Serve as a technical mentor and guide AWS adoption patterns across engineering teams
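One conventional way to implement the drift monitoring mentioned in the CI/CD bullet above is the Population Stability Index (PSI) between a training-time reference sample and live inference traffic. This NumPy-only sketch is an assumed approach, not this team's actual tooling; the 0.1/0.25 thresholds are industry rules of thumb.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g. training data) and a live sample.
    Rule of thumb: PSI < 0.1 = stable, 0.1-0.25 = moderate shift, > 0.25 = drift."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Equal-width bins spanning both samples, so every value lands in a bin
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip to avoid log(0) when a bin is empty in one sample
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In a pipeline, this check would run per feature on a schedule and raise an alert (or trigger retraining) when the index crosses the drift threshold.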
Required Skills:
● 6+ years designing and deploying cloud infrastructure on AWS at scale
● Proven experience building and maintaining ML pipelines with services like SageMaker, ECS/EKS, or custom Docker pipelines
● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS
● Deep experience with automation frameworks, IaC tools, and CI/CD strategies
● Advanced scripting proficiency in Python, Go, or Bash
● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)
Nice to Have:
● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments
● Experience designing systems for physical robot fleet telemetry, diagnostics, and control
● Familiarity with multi-stage production environments and robotic software rollout processes
● Competence in frontend hosting for dashboard or API visualization
● Involvement with real-time streaming, MQTT, or edge inference workflows
● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5 days working, Work from Office
Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here:


Job Title: Data Scientist
Location: Bangalore (Hybrid/On-site depending on project needs)
About the Role
We are seeking a highly skilled Data Scientist to join our team in Bangalore. In this role, you will take ownership of data science components across client projects, build production-ready ML and GenAI-powered applications, and mentor junior team members. You will collaborate with engineering teams to design and deploy impactful solutions that leverage cutting-edge machine learning and large language model technologies.
Key Responsibilities
ML & Data Science
- Develop, fine-tune, and evaluate ML models (classification, regression, clustering, recommendation systems).
- Conduct exploratory data analysis, preprocessing, and feature engineering.
- Ensure model reproducibility, scalability, and alignment with business objectives.
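The preprocessing, tuning, and reproducibility points above are what scikit-learn's `Pipeline` plus `GridSearchCV` encapsulate. A compact sketch under illustrative choices (synthetic dataset, logistic regression, a small `C` grid), not a prescription for any particular project:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real dataset; random_state keeps runs reproducible
X, y = make_classification(n_samples=600, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),               # preprocessing step
    ("clf", LogisticRegression(max_iter=1000)),  # model step
])

# Tuning over the whole pipeline: the scaler is re-fit inside each CV fold,
# so no statistics leak from validation data into preprocessing.
search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
test_accuracy = search.score(X_test, y_test)
```

Swapping in feature-engineering transformers or a gradient-boosted model changes only the pipeline steps, not the evaluation scaffolding.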
GenAI & LLM Applications
- Prototype and design solutions leveraging LLMs (OpenAI, Claude, Mistral, Llama).
- Build RAG (Retrieval-Augmented Generation) pipelines, prompt templates, and evaluation frameworks.
- Integrate LLMs with APIs and vector databases (Pinecone, FAISS, Weaviate).
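The RAG pipeline described above reduces to: embed documents, retrieve the chunks most similar to the query, and splice them into a prompt template. This toy sketch uses raw term counts as "embeddings" so it stays self-contained; a real system would use a learned embedding model and a vector database (Pinecone, FAISS, Weaviate), and the prompt template here is illustrative.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: lowercase term counts (stand-in for a learned model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

PROMPT = "Answer using only this context:\n{context}\n\nQuestion: {question}"

def build_prompt(question, docs):
    """Retrieval-augmented prompt: retrieved chunks become the context block."""
    context = "\n".join(retrieve(question, docs))
    return PROMPT.format(context=context, question=question)
```

The evaluation frameworks the bullet mentions would then score the LLM's answer against the retrieved context (faithfulness) and the question (relevance).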
Product & Engineering Collaboration
- Partner with engineering teams to productionize ML/GenAI models.
- Contribute to API development, data pipelines, technical documentation, and client presentations.
Team & Growth
- Mentor junior data scientists and review technical contributions.
- Stay up to date with the latest ML & GenAI research and tools; share insights across the team.
Required Skills & Qualifications
- 4.5–9 years of applied data science experience.
- Strong proficiency in Python and ML libraries (scikit-learn, XGBoost, LightGBM).
- Hands-on experience with LLM APIs (OpenAI, Cohere, Claude) and frameworks (LangChain, LlamaIndex).
- Strong SQL, data wrangling, and analysis skills (pandas, NumPy).
- Experience working with APIs, Git, and cloud platforms (AWS/GCP).
Good-to-Have
- Deployment experience with FastAPI, Docker, or serverless frameworks.
- Familiarity with MLOps tools (MLflow, DVC).
- Experience working with embeddings, vector databases, and similarity search.
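The similarity-search item above is, at its core, a normalized matrix-vector product: with L2-normalized rows, dot product equals cosine similarity, which is exactly the exact-search baseline that vector databases (FAISS's flat indexes, for instance) accelerate at scale. A NumPy sketch with random vectors standing in for model embeddings:

```python
import numpy as np

def build_index(vectors):
    """L2-normalize rows so that a dot product equals cosine similarity."""
    v = np.asarray(vectors, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def search(index, query, k=3):
    """Exact top-k cosine similarity search over the index rows."""
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    scores = index @ q                # cosine similarity to every row at once
    top = np.argsort(-scores)[:k]    # indices of the k best matches
    return top, scores[top]

rng = np.random.default_rng(7)
index = build_index(rng.normal(size=(100, 16)))
# Query: a lightly perturbed copy of row 42, which should come back first
ids, scores = search(index, index[42] + 0.01 * rng.normal(size=16))
```

Approximate-nearest-neighbor structures trade a little recall for sublinear query time once the corpus outgrows what a full matrix product can handle.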

An American bank holding company: a community-focused financial institution that provides accessible banking services to its members, operating on a not-for-profit basis.



Position: AI/ML Python Engineer
Location: Kothapet, Hyderabad (Hybrid; 4 days a week onsite)
Contract-to-hire (full-time conversion with the client).
5+ years of Python experience scripting ML workflows to deploy ML pipelines as real-time, batch, event-triggered, and edge deployments
4+ years of experience using AWS SageMaker to deploy ML pipelines and ML models with SageMaker Pipelines, SageMaker MLflow, SageMaker Feature Store, etc.
3+ years of experience developing APIs using FastAPI, Flask, or Django
3+ years of experience with ML frameworks & tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
Solid understanding of the ML lifecycle: model development, training, validation, deployment, and monitoring
Solid understanding of CI/CD pipelines for ML workflows using Bitbucket, Jenkins, Nexus, and AUTOSYS for scheduling
Experience with ETL processes for ML pipelines using PySpark, Kafka, and AWS EMR Serverless
Good to have: experience with H2O.ai
Good to have: experience with containerization using Docker and orchestration using Kubernetes.



About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots, and develops agentic AI solutions for companies across industries including energy and utilities.
Through Moative Labs, we aspire to build AI-led products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League alumni, ex-Googlers, and successful entrepreneurs.
Business Context
Moative is looking for a Data Science Project Manager to lead a long-term engagement for a Houston-based utilities company. As part of this engagement, we will develop advanced AI/ML models for load forecasting, energy pricing, trading strategies, and related areas.
We have a high-performing team of data scientists and ML engineers based in Chennai, India, along with an on-site project manager in Houston, TX.
Work You’ll Do
As a Data Science Project Manager, you’ll wear two hats. On one hand, you’ll act as a project manager — engaging with clients, understanding business priorities, and discussing solution or algorithm approaches. You will coordinate with the on-site project manager to manage timelines and ensure quality delivery. You’ll also handle client communication, setting expectations, gathering feedback, and keeping the engagement in good health.
On the other hand, you’ll act as a senior data scientist — overseeing junior data scientists and analysts, guiding the offshore team on data science, engineering, and business challenges, and providing expertise in statistical and mathematical concepts, algorithms, and model development.
The ideal candidate has a strong background in statistics, machine learning, and programming, along with business acumen and project management skills.
Responsibilities
- Client Engagement: Act as the primary point of contact for clients on data science requirements. Support the on-site PM in managing client needs and build strong stakeholder relationships.
- Project Coordination: Lead the offshore team, ensure alignment on project goals, and work with clients to define scope and deliverables.
- Team Management: Supervise the offshore AI/ML team, ensure milestones are met, and provide domain/technical guidance.
- Data Science Leadership: Mentor teams, create frameworks for scalable solutions, and drive adoption of best practices in AI/ML lifecycle.
- Quality Assurance: Work with the offshore PM to implement QA processes that ensure accuracy and reliability.
- Risk Management: Identify risks and develop mitigation strategies.
- Stakeholder Communication: Provide regular updates on progress, challenges, and achievements.
Who You Are
You are a Project Manager passionate about delivering high-quality, data-driven solutions through robust project management practices. You have experience managing data-heavy projects in an onsite-offshore model with significant client engagement. You also bring some hands-on experience in data science and analytics, preferably in energy/utilities or financial risk/trading. You thrive in ambiguity, take initiative, and can confidently defend your decisions.
Requirements & Skills
- 8+ years of experience applying data science methods to real-world data, ideally in Energy & Utilities or financial risk/commodities trading.
- 3+ years of experience leading data science teams delivering AI/ML solutions.
- Deep familiarity with a range of methods and algorithms: time-series analysis, regression, experimental design, optimization, etc.
- Strong understanding of ML algorithms, including deep learning, neural networks, NLP, and more.
- Proficient in cloud platforms (AWS, Azure, GCP), ML frameworks (TensorFlow, PyTorch), and MLOps platforms (MLflow, etc.).
- Broad understanding of data structures, data engineering, and architectures.
- Strong interpersonal skills: result-oriented, proactive, and capable of handling multiple projects.
- Ability to collaborate effectively, take accountability, and stay composed under stress.
- Excellent verbal and written communication skills for both technical and non-technical stakeholders.
- Proven ability to identify and resolve issues quickly and efficiently.
Working at Moative
Moative is a young company, but we believe in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Guiding Principles
- Think in decades. Act in hours. Decisions for the long-term, execution in hours/days.
- Own the canvas. Fix or improve anything not done right, regardless of who did it.
- Use data or don’t. Avoid political “cover-my-back” use of data; balance intuition with data-driven approaches.
- Avoid work about work. Keep processes lean; meetings should be rare and purposeful.
- High revenue per person. Default to automation, multi-skilling, and high-quality output instead of unnecessary hiring.
Additional Details
The position is based out of Chennai and involves significant in-person collaboration. Applicants should demonstrate being in the 90th percentile or above, whether through top institutions, awards/accolades, or consistent outstanding performance.
If this role excites you, we encourage you to apply — even if you don’t check every box.


Role Overview
As a Data Scientist, you will play a key role in improving and optimising the models that drive Arctan’s real-time speech AI. You’ll work on analysing, processing, and modelling data to make our systems smarter.
Specifically, you’ll:
- Develop and fine-tune speech-to-speech machine learning models
- Experiment with data-driven solutions to enhance AI performance.
- Work on feature engineering and dataset preparation to train robust models.
- Collaborate with the team to evaluate and deploy models into production.
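The dataset-preparation bullet above typically starts with framing: slicing a waveform into overlapping windows and computing per-frame features. This sketch shows the mechanics with short-time log-energy on a synthetic tone; the frame and hop sizes are conventional 25 ms / 10 ms values at 16 kHz, and real pipelines would compute spectrograms or MFCCs (e.g. via librosa or torchaudio) instead.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Slice a 1-D waveform into overlapping frames (rows of the result)."""
    signal = np.asarray(signal, dtype=float)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len]
                     for i in range(n_frames)])

def log_energy(frames):
    """Per-frame log-energy; the epsilon guards against log(0) on silence."""
    return np.log(np.sum(frames ** 2, axis=1) + 1e-10)

sr = 16000
t = np.arange(sr) / sr                 # one second of audio at 16 kHz
wave = np.sin(2 * np.pi * 440 * t)     # synthetic 440 Hz tone
frames = frame_signal(wave)            # 25 ms frames, 10 ms hop
energies = log_energy(frames)
```

The same framing scaffold underlies voice-activity detection and feeds windowed FFTs for the spectral features speech models actually consume.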
What we’re looking for:
- Strong problem-solving skills, paired with a passion for learning.
- 1-3 years of experience in Data Science and related fields.
- Experience with speech processing or NLP.
- Proficiency in Python and familiarity with data science libraries.
- Understanding of machine learning frameworks (e.g., TensorFlow, PyTorch).
- Familiarity with working on datasets and data preprocessing techniques.
Benefits And Perks:
- You’ll be joining an early-stage startup building foundational speech models
- Being a part of a small, high talent density team would accelerate your career growth
- You'll wear multiple hats, own meaningful projects, and build things that ship.


About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Job Summary:
We are seeking a highly innovative and skilled AI Engineer to join our AI CoE for the Data Integration Project. The ideal candidate will be responsible for designing, developing, and deploying intelligent assets and AI agents that automate and optimize various stages of the data ingestion and integration pipeline. This role requires expertise in machine learning, natural language processing (NLP), knowledge representation, and cloud platform services, with a strong focus on building scalable and accurate AI solutions.
Key Responsibilities:
- LLM-based Auto-schematization: Develop and refine LLM-based models and techniques for automatically inferring schemas from diverse unstructured and semi-structured public datasets and mapping them to a standardized vocabulary.
- Entity Resolution & ID Generation AI: Design and implement AI models for highly accurate entity resolution, matching new entities with existing IDs and generating unique, standardized IDs for newly identified entities.
- Automated Data Profiling & Schema Detection: Develop AI/ML accelerators for automated data profiling, pattern detection, and schema detection to understand data structure and quality at scale.
- Anomaly Detection & Smart Imputation: Create AI-powered solutions for identifying outliers, inconsistencies, and corrupt records, and for intelligently filling missing values using machine learning algorithms.
- Multilingual Data Integration AI: Develop AI assets for accurately interpreting, translating (leveraging automated tools with human-in-the-loop validation), and semantically mapping data from diverse linguistic sources, preserving meaning and context.
- Validation Automation & Error Pattern Recognition: Build AI agents to run comprehensive data validation tool checks, identify common error types, suggest fixes, and automate common error corrections.
- Knowledge Graph RAG/RIG Integration: Integrate Retrieval-Augmented Generation (RAG) and Retrieval-Interleaved Generation (RIG) techniques to enhance querying capabilities and facilitate consistency checks within the Knowledge Graph.
- MLOps Implementation: Implement and maintain MLOps practices for the lifecycle management of AI models, including versioning, deployment, monitoring, and retraining on a relevant AI platform.
- Code Generation & Documentation Automation: Develop AI tools for generating reusable scripts, templates, and comprehensive import documentation to streamline development.
- Continuous Improvement Systems: Design and build learning systems, feedback loops, and error analytics mechanisms to continuously improve the accuracy and efficiency of AI-powered automation over time.
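The anomaly-detection and imputation responsibilities above can be illustrated with the classical baseline those AI models would be benchmarked against: robust z-scores (median/MAD) for outlier flagging and median imputation for missing values. A NumPy-only sketch with a conventional 3.5 threshold, not the project's actual models:

```python
import numpy as np

def flag_outliers(x, threshold=3.5):
    """Flag values whose robust z-score exceeds the threshold.
    Median/MAD resist the very outliers being hunted; 0.6745 rescales
    MAD to be comparable to a standard deviation under normality."""
    x = np.asarray(x, dtype=float)
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med)) or 1e-9  # avoid divide-by-zero
    robust_z = 0.6745 * (x - med) / mad
    return np.abs(robust_z) > threshold           # NaNs compare False (unflagged)

def impute_median(x):
    """Fill missing values with the column median (a 'smart imputation' baseline)."""
    x = np.asarray(x, dtype=float)
    filled = x.copy()
    filled[np.isnan(filled)] = np.nanmedian(x)
    return filled
```

In the pipeline described above, ML-based detectors and imputers (e.g. isolation forests, model-based imputation) would be validated against exactly this kind of simple, explainable baseline.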
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
- Proven experience (e.g., 3+ years) as an AI/ML Engineer, with a strong portfolio of deployed AI solutions.
- Strong expertise in Natural Language Processing (NLP), including experience with Large Language Models (LLMs) and their applications in data processing.
- Proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with cloud AI/ML services.
- Understanding of knowledge representation, ontologies (e.g., Schema.org, RDF), and knowledge graphs.
- Experience with data quality, validation, and anomaly detection techniques.
- Familiarity with MLOps principles and practices for model deployment and lifecycle management.
- Strong problem-solving skills and an ability to translate complex data challenges into AI solutions.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Experience with data integration projects, particularly with large-scale public datasets.
- Familiarity with knowledge graph initiatives.
- Experience with multilingual data processing and AI.
- Contributions to open-source AI/ML projects.
- Experience in an Agile development environment.
Benefits:
- Opportunity to work on a high-impact project at the forefront of AI and data integration.
- Contribute to solidifying a leading data initiative's role as a foundational source for grounding Large Models.
- Access to cutting-edge cloud AI technologies.
- Collaborative, innovative, and fast-paced work environment.
- Significant impact on data quality and operational efficiency.

💯What will you do?
- Create and conduct engaging and informative Data Science classes that incorporate real-world examples and hands-on activities to ensure student engagement and retention.
- Evaluate student projects to ensure they meet industry standards and provide personalised, constructive feedback to students to help them improve their skills and understanding.
- Conduct viva sessions to assess student understanding and comprehension of the course materials. You will evaluate each student's ability to apply the concepts they have learned in real-world scenarios and provide feedback on their performance.
- Conduct regular assessments to evaluate student progress, provide feedback to students, and identify areas for improvement in the curriculum.
- Stay up-to-date with industry developments, best practices, and trends in Data Science, and incorporate this knowledge into course materials and instruction.
- Work with the placements team to provide guidance and support to students as they navigate their job search, including resume and cover letter reviews, mock interviews, and career coaching.
- Train the TAs to conduct doubt-clearing sessions and project evaluations
💯Who are we looking for?
We are looking for someone who has:
- A minimum of 1-2 years of industry work experience in data science or a related field. Teaching experience is a plus.
- In-depth knowledge of various aspects of data science such as Python, MySQL, Power BI, Excel, Machine Learning with statistics, NLP, and Deep Learning.
- Knowledge of AI tools like ChatGPT (latest versions as well), debugcode.ai, etc.
- Passion for teaching and a desire to impart practical knowledge to students.
- Excellent communication and interpersonal skills, with the ability to engage and motivate students of all levels.
- Experience with curriculum development, lesson planning, and instructional design is a plus.
- Familiarity with learning management systems (LMS) and digital teaching tools will be an added advantage.
- Ability to work independently and as part of a team in a fast-paced, dynamic environment.
💯What do we offer in return?
- Awesome colleagues & a great work environment - Internshala is known for its culture (see for yourself) and has twice been recognized as a Great Place To Work in the last 3 years
- A massive learning opportunity to be an early member of a new initiative and experience building it from scratch
- Competitive remuneration
💰 Compensation - Competitive remuneration based on your experience and skills
📅 Start date - Immediately

Job description
We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure.
This role involves building, training, deploying, and maintaining ML models at scale, integrating them with business applications. Basic model prototyping won't qualify — we’re seeking hands-on expertise in building scalable machine learning pipelines.
Key Responsibilities
Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.
Implement GenAI use cases using LLMs
Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.
Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.
Work on impactful AI/ML use cases in Search & Personalization, SEO Optimization, Marketing Analytics, Supply Chain Forecasting, and Customer Experience.
Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, PySpark, etc.
Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production-readiness.
Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.
Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.
Technical Skills Required
Core ML & AI Skills (Must-Have):
Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization.
Experience with real-world model deployment and scaling, not just notebooks or prototypes.
Good understanding of ML Ops, model lifecycle, and pipeline orchestration.
Strong command of Python 3, Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.
SQL proficiency and experience querying large datasets.
Deep understanding of linear algebra, probability/statistics, Big-O, and scientific experimentation.
Cloud experience in GCP (preferred), AWS, or Azure.
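The pipeline skills listed above (preprocessing, feature engineering, model tuning) can be sketched with scikit-learn; the synthetic dataset and hyperparameter grid here are purely illustrative, not part of any actual role:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a real dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Preprocessing and model live in one pipeline, so tuning covers both stages
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X_tr, y_tr)
print(round(search.score(X_te, y_te), 2))
```

Keeping the scaler inside the pipeline ensures cross-validation folds are scaled independently, avoiding leakage during tuning.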
Cloud & Big Data Stack
Hands-on experience with:
GCP tools – Vertex AI, Kubeflow, BigQuery, GCS
Or equivalent AWS/Azure ML stacks
Familiar with Airflow, PySpark, or other pipeline orchestration tools.
Experience reading/writing data from/to cloud services.
Qualifications
Bachelor's/Master’s/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or related quantitative field.
4+ years of experience in data analytics and machine learning roles.
2+ years of experience in Python or similar programming languages (Java, Scala, Rust).
Must have experience deploying and scaling ML models in production.
Nice to Have
Experience with LLM fine-tuning, Graph Algorithms, or custom deep learning architectures.
Background in taking academic research through to production applications.
Building APIs and monitoring production ML models.
Familiarity with advanced math – Graph Theory, PDEs, Optimization Theory.
Communication & Collaboration
Strong ability to explain complex models and insights to both technical and non-technical stakeholders.
Ask the right questions, clarify objectives, and align analytics with business goals.
Comfortable working cross-functionally in agile and collaborative teams.
Important Note:
This is a Data Science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models.
Cloud experience is mandatory (GCP preferred, AWS/Azure acceptable).
Only candidates with hands-on experience in deploying ML models into production (not just notebooks) will be considered.

About the Role
We are seeking an AI Product Manager to lead the full product lifecycle of our SaMD-compliant AI platform. You will act as the bridge between data scientists, developers, and clinicians, ensuring timely delivery of regulatory-grade features that align with hospital pilots and scale for long-term adoption.
This role requires hands-on experience with Agile delivery, CI/CD pipelines, and healthcare SaaS products, along with strong communication and presentation skills for customer-facing interactions.
Key Responsibilities
- Own the end-to-end product lifecycle, from roadmap definition to delivery and scaling.
- Translate clinical and technical requirements into actionable product features.
- Drive Agile ceremonies (sprint planning, backlog grooming, stand-ups, retrospectives).
- Prioritize product backlog to balance hospital pilot needs with long-term scalability.
- Ensure regulatory-grade quality standards (SaMD compliance) are met.
- Collaborate with engineering, data science, and clinical teams for delivery.
- Manage CI/CD release pipelines in coordination with DevOps teams.
- Prepare reports, dashboards, and customer-facing presentations (PowerPoint) to communicate progress, outcomes, and next steps.
- Engage with stakeholders, hospital partners, and customers to gather feedback and refine product vision.
Requirements
- 6–10 years of experience in Product or Delivery Management.
- Strong exposure to AI/ML-driven platforms or clinical healthcare SaaS.
- Proven expertise in Agile methodologies and CI/CD delivery environments.
- Experience working with cross-functional teams (data scientists, engineers, clinicians).
- Strong communication and stakeholder management skills.
- Excellent ability to create presentations, reports, and customer-facing documents.
- Prior experience with SaMD-compliant or regulated products (preferred).
- Bachelor’s or Master’s in Engineering, Computer Science, Healthcare IT, or related field.
Why Join Us?
- Work on cutting-edge AI products shaping the future of healthcare.
- Collaborate with clinicians, hospitals, and AI experts to create real-world impact.
- Be part of a fast-scaling, innovation-driven environment.

We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!
✅ Key Details
- Work Type: Freelance / Contract
- Location: Remote
- Time Zones: IST / EST only
- Domain: Data & AI, Cloud, Big Data, Machine Learning
- Collaboration: Work with industry leaders on innovative projects
🔹 Open Roles
1. Databricks – Senior Consultant
- Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
- Experience: 6+ years
2. Databricks – ML Engineer
- Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
- Experience: 4+ years
3. Databricks – Solution Architect
- Skills: Azure, GCP, AWS, CI/CD, MLOps
- Experience: 7+ years
4. Databricks – Solution Consultant
- Skills: SQL, Spark, BigQuery, Python, Scala
- Experience: 2+ years
✅ What We Offer
- Opportunity to work with top-tier professionals and clients
- Exposure to cutting-edge technologies and real-world data challenges
- Flexible remote work environment aligned with IST / EST time zones
- Competitive compensation and growth opportunities
📌 Skills We Value
Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark |

Job Title: AI Developer/Engineer
Location: Remote
Employment Type: Full-time
About the Organization
We are a cutting-edge AI-powered startup that is revolutionizing data management and content generation. Our platform harnesses the power of generative AI and natural language processing to turn unstructured data into actionable insights, providing businesses with real-time, intelligent content and driving operational efficiency. As we scale, we are looking for an experienced lead architect to help design and build our next-generation AI-driven solutions.
About the Role
We are seeking an AI Developer to design, fine-tune, and deploy advanced Large Language Models (LLMs) and AI agents across healthcare and SMB workflows. You will work with cutting-edge technologies—OpenAI, Claude, LLaMA, Gemini, Grok—building robust pipelines and scalable solutions that directly impact real-world hospital use cases such as risk calculators, clinical protocol optimization, and intelligent decision support.
Key Responsibilities
- Build, fine-tune, and customize LLMs and AI agents for production-grade workflows
- Leverage Node.js for backend development and integration with various cloud services.
- Use AI tools and AI prompts to develop automated processes that enhance data management and client offerings
- Drive the evolution of deployment methodologies, ensuring that AI systems are continuously optimized, tested, and delivered in production-ready environments.
- Stay up-to-date with emerging AI technologies, cloud platforms, and development methodologies to continually evolve the platform’s capabilities.
- Integrate and manage vector databases such as FAISS and Pinecone.
- Ensure scalability, performance, and compliance in all deployed AI systems.
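The core operation a vector database such as FAISS or Pinecone provides is nearest-neighbor search over embeddings. A NumPy-only sketch of that operation (toy random vectors stand in for real embedding-model output):

```python
import numpy as np

# Toy document embeddings; in practice these come from an embedding model
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 64)).astype("float32")
query = docs[42] + 0.01 * rng.normal(size=64).astype("float32")

# Cosine similarity: normalize once, then a single matrix-vector product
docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
q_n = query / np.linalg.norm(query)
scores = docs_n @ q_n

top_k = np.argsort(-scores)[:5]  # indices of the 5 most similar documents
print(top_k[0])  # → 42, the perturbed source document
```

A real vector database replaces the brute-force matrix product with approximate indexes (e.g. IVF or HNSW) so search stays fast at millions of vectors.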
Required Qualifications
- 2–3 years of hands-on experience in AI/ML development or full-stack AI integration.
- Proven expertise in building Generative AI models and AI-powered applications, especially in a cloud environment.
- Strong experience with multi-cloud infrastructure and platforms.
- Proficiency with Node.js and modern backend frameworks for developing scalable solutions.
- In-depth understanding of AI prompts, natural language processing, and agent-based systems for enhancing decision-making processes.
- Familiarity with AI tools for model training, data processing, and real-time inference tasks.
- Experience working with hybrid cloud solutions, including private and public cloud integration for AI workloads.
- Strong problem-solving skills and a passion for innovation in AI and cloud technologies.
- Knowledge of Agile delivery methodology.
- Experience deploying CI/CD pipelines, with working knowledge of JIRA and GitHub for code deployment.
- Strong experience with LLMs, prompt engineering, and fine-tuning.
- Knowledge of vector databases (FAISS, Pinecone, Milvus, or similar).
Nice to Have
- Experience in healthcare AI, digital health, or clinical applications.
- Exposure to multi-agent AI frameworks.
What We Offer
- Flexible working hours.
- Collaborative, innovation-driven work culture.
- Growth opportunities in a rapidly evolving AI-first environment.

- 3+ years owning ML / LLM services in production on Azure (AKS, Azure OpenAI/Azure ML) or another major cloud.
- Strong Python plus hands-on work with a modern deep-learning stack (PyTorch / TensorFlow / HF Transformers).
- Built features with LLM toolchains: prompt engineering, function calling / tools, vector stores (FAISS, Pinecone, etc.).
- Familiar with agentic AI patterns (LangChain / LangGraph, eval harnesses, guardrails) and strategies to tame LLM non-determinism.
- Comfortable with containerization & CI/CD (Docker, Kubernetes, Git-based workflows); can monitor, scale and troubleshoot live services.
Nice-to-Haves
- Experience in billing, collections, fintech, or professional-services SaaS.
- Knowledge of email deliverability, templating engines, or CRM systems.
- Exposure to compliance frameworks (SOC 2, ISO 27001) or secure handling of financial data.



Title: Quantitative Developer
Location : Mumbai
Candidates with a Master's degree preferred
Who We Are
At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.
We operate a robust, ultra-low latency infrastructure built for market-making and active trading across Equities, Futures, and Options—with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world-class capabilities.
What You Will Do
This role offers a unique opportunity to work across both quantitative development and high frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and directly participate in live trading execution and strategy enhancement.
1. Quantitative Strategy & Trading Execution
- Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
- Trade across options, equities, and futures, using proprietary HFT platforms
- Monitor and manage PnL performance, targeting Sharpe ratios of 6+
- Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
- Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
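The Sharpe ratio targeted above is conventionally computed as annualized mean excess return divided by return volatility. A minimal NumPy sketch, assuming a zero risk-free rate and synthetic daily returns:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily strategy returns (mean 0.1%, stdev 0.5%) — illustrative only
daily_returns = rng.normal(loc=0.001, scale=0.005, size=252)

# Annualized Sharpe ratio, assuming a zero risk-free rate
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)
print(round(sharpe, 2))
```

The sqrt(252) factor annualizes a daily-frequency ratio; a Sharpe of 6+ implies a strategy whose daily edge dwarfs its daily volatility, which is characteristic of high-frequency market-making.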
2. Trading Systems Development
- Build and enhance low-latency, high-throughput trading systems
- Develop tools to simulate trading strategies and access historical market data
- Design performance-optimized data structures and algorithms for fast execution
- Implement real-time risk management and performance tracking systems
3. Algorithmic and Quantitative Analysis
- Collaborate with researchers and traders to integrate strategies into live environments
- Use statistical methods and data-driven analysis to validate and refine models
- Work with large-scale HFT tick data using Python / C++
4. AI/ML Integration
- Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
- Analyze large datasets to detect patterns and alpha signals
5. System & Network Optimization
- Optimize distributed and concurrent systems for high-transaction throughput
- Enhance platform performance through network and systems programming
- Utilize deep knowledge of TCP/UDP and network protocols
6. Collaboration & Mentorship
- Collaborate cross-functionally with traders, engineers, and data scientists
- Represent Dolat in campus recruitment and industry events as a technical mentor
What We Are Looking For:
- Strong foundation in data structures, algorithms, and object-oriented programming (C++).
- Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Hands-on experience in systems programming within a Linux environment.
- Proficient, hands-on programming in Python/C++.
- Familiarity with distributed computing and high-concurrency systems.
- Knowledge of network programming, including TCP/UDP protocols.
- Strong analytical and problem-solving skills.
- A passion for technology-driven solutions in the financial markets.
Job Title: AI Developer
Job Description
We are seeking a forward-thinking AI Developer with expertise in Generative AI, Machine Learning, Data Science, and Advanced Analytics. In this role, you will design, optimize, and deploy scalable AI solutions that power automation, personalization, and decision intelligence. You will closely collaborate with data engineers, product leaders, and business stakeholders to develop production-grade AI systems aligned with the latest enterprise AI trends such as LLM, prompt engineering, multimodal AI, and trustworthy/ethical AI practices.
This is a unique opportunity to work at the forefront of AI innovation, shaping high-impact solutions that address real-world challenges across industries.
Key Responsibilities
- Design and deploy advanced Generative AI models including LLMs, diffusion models, and multimodal architectures.
- Develop and fine-tune ML/DL solutions for NLP, computer vision, time-series forecasting, and predictive analytics.
- Implement RAG pipelines, embeddings-based search, and knowledge graph integrations to enhance context-aware AI applications.
- Stay ahead of AI research, industry trends, and tools (multi-agent systems, federated learning, privacy-preserving ML).
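The retrieval step of the RAG pipelines mentioned above can be sketched without external services; here a toy word-overlap score stands in for real embedding similarity, and the retrieved passage is spliced into the prompt:

```python
# Toy corpus; in a real pipeline these would be embedded document chunks
corpus = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping to international destinations takes 7-14 business days.",
    "Premium support is available around the clock for enterprise plans.",
]

def tokens(text: str) -> set:
    # Lowercase and strip simple punctuation before splitting into words
    return set(text.lower().replace(".", "").replace("?", "").split())

def score(query: str, passage: str) -> int:
    # Word-overlap stand-in for embedding cosine similarity
    return len(tokens(query) & tokens(passage))

def build_prompt(query: str) -> str:
    # Retrieve the best-matching passage and splice it into the prompt
    best = max(corpus, key=lambda p: score(query, p))
    return f"Context: {best}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("How many days do I have for a refund?"))
```

Swapping the overlap score for embedding cosine similarity against a vector store yields the standard RAG retrieval stage; the prompt-assembly step is unchanged.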
What we are looking for
- Strong programming background in Python (primary) and R; familiarity with object-oriented programming languages.
- Proficiency in at least one of PyTorch, TensorFlow, SAS, or ChemBERTa.
- Expertise in Machine Learning & Deep Learning.
- Strong grounding in mathematics & applied statistics (linear algebra, probability, optimization).
Qualifications
- Bachelor’s, Master’s, in Computer Science, Artificial Intelligence, Data Science, or related fields. We also welcome PhD scholars.
- 0–6 years of proven experience in AI development, optimization, and production deployment.
Preferred / Nice to Have
- Passion for Generative AI, ideally with research contributions or open-source AI project contributions.
- Exposure to multimodal AI, agent-based systems, or reinforcement learning.
- Knowledge of AI safety, interpretability frameworks, and responsible AI practices.

Responsibilities
● Design, build, and deploy applied ML models (NLP, LLM-based) to solve real-world sales tech problems.
● Build and orchestrate true multi-agent systems to collaboratively solve complex tasks and workflows that reason, remember, plan, and take actions over structured and unstructured data.
● Use frameworks like LangGraph, CrewAI, or custom planners to orchestrate LLM-powered agents.
● Prototype and productionize capabilities using foundation models (OpenAI, Claude, Gemini, etc.), embeddings, and vector databases (e.g., Pinecone, Weaviate).
● Work with the product team to integrate intelligent agents into customer CRM workflows and GTM-facing tools.
● Design observability, evaluation and control mechanisms to ensure agent reliability, traceability, safety and frugality.
● Continuously improve agents with reinforcement learning and active learning.
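At its core, the agent orchestration described above is a plan-act-observe loop. A framework-free sketch with a stubbed model and one hypothetical tool (a production system would use LangGraph/CrewAI and a real LLM; all names here are illustrative):

```python
# Hypothetical tool registry; a real agent would expose CRM lookups, search, etc.
TOOLS = {
    "lookup_account": lambda name: {"name": name, "arr": 120000, "stage": "renewal"},
}

def fake_llm(prompt: str) -> str:
    # Stub standing in for an LLM call: first decide on a tool, then answer
    if "Observation:" not in prompt:
        return "Action: lookup_account[Acme Corp]"
    return "Final Answer: Acme Corp is in the renewal stage with $120k ARR."

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[arg]" and execute the named tool
        tool, arg = reply.removeprefix("Action: ").rstrip("]").split("[")
        result = TOOLS[tool](arg)
        prompt += f"\nObservation: {result}"
    return "gave up"

print(run_agent("Summarize the Acme Corp account"))
```

The `max_steps` cap is one of the frugality/safety controls the responsibilities list calls for: it bounds cost when the model never converges on a final answer.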
Requirements
● Bachelor’s or Master’s in Computer Science, AI/ML, or a related field.
● 2+ years of experience deploying ML models or AI systems in production environments.
● Experience with LLMs, NLP pipelines, prompt engineering, and fine-tuning.
● Familiarity with agentic frameworks (LangChain, LangGraph, AutoGPT, CrewAI, or custom orchestration).
● Strong Python programming skills and deep knowledge of the ML/LLM ecosystem.
● Hands-on experience with MLOps, API integrations, and cloud infrastructure (AWS, GCP, or Azure).
● Ability to balance rapid experimentation with robust engineering.
Nice to Have
● Experience working with sales/revenue data (CRM, Gong, Salesforce, LinkedIn).
● Familiarity with reasoning techniques (ReAct, CoT, Tree-of-Thoughts), graph orchestration, and tool chaining.
● Exposure to multi-agent systems, RAG pipelines, or feedback loops in production.
● Contributions to open-source AI/agentic tools or frameworks.

We are looking for a proactive and skilled MEAN Stack Developer with 2–4 years of experience to join our growing team at Rudra Innovative Software Pvt. Ltd. The ideal candidate must have strong proficiency in Angular and TypeScript, along with hands-on experience in the full MEAN Stack (MongoDB, Express.js, Angular, Node.js). Exposure to microservices architecture is highly desirable.
Key Responsibilities:
- Develop, enhance, and maintain robust web applications using the MEAN stack
- Write clean, maintainable, and efficient code with a strong focus on Angular and TypeScript
- Integrate and manage RESTful APIs and backend services using Node.js and Express.js
- Collaborate closely with designers, testers, and other developers for end-to-end delivery
- Work on designing and developing microservices-based components where applicable
- Participate in daily standups, code reviews, and technical discussions
- Troubleshoot application issues, perform root cause analysis, and implement solutions
Required Skills:
- 2–3 years of solid experience with Angular (v18+) and TypeScript
- Awareness of current Angular features like standalone components and signals
- Strong foundation in JavaScript, HTML5, and CSS3
- Proficient in Node.js and Express.js development
- Familiar with MongoDB and writing optimized database queries
- Good understanding of RESTful APIs, JSON, and API integration
- Hands-on experience with Git and version control practices
- Exposure to microservices architecture and understanding of its common patterns (e.g., API gateway, database per service, event-driven communication)
- Excellent debugging, problem-solving, and communication skills
Preferred Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field
- Prior experience working in Agile/Scrum environments
- Familiarity with Docker, Kubernetes, or any cloud services is a plus
What We Offer:
- Opportunity to work on exciting and challenging global projects
- Supportive, collaborative, and innovation-driven work environment
- Competitive compensation with performance-based incentives
- Ongoing training, learning resources, and growth opportunities

Job Title: AI Architecture Intern
Company: PGAGI Consultancy Pvt. Ltd.
Location: Remote
Employment Type: Internship
Position Overview
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Key Responsibilities:
- AI System Architecture Design: Collaborate with the technical team to design robust, scalable, and high-performance AI system architectures aligned with client requirements.
- Client-Focused Solutions: Analyze and interpret client needs to ensure architectural solutions meet expectations while introducing innovation and efficiency.
- Methodology Development: Assist in the formulation and implementation of best practices, methodologies, and frameworks for sustainable AI system development.
- Technology Stack Selection: Support the evaluation and selection of appropriate tools, technologies, and frameworks tailored to project objectives and future scalability.
- Team Collaboration & Learning: Work alongside experienced AI professionals, contributing to projects while enhancing your knowledge through hands-on involvement.
Requirements:
- Strong understanding of AI concepts, machine learning algorithms, and data structures.
- Familiarity with AI development frameworks (e.g., TensorFlow, PyTorch, Keras).
- Proficiency in programming languages such as Python, Java, or C++.
- Demonstrated interest in system architecture, design thinking, and scalable solutions.
- Up-to-date knowledge of AI trends, tools, and technologies.
- Ability to work independently and collaboratively in a remote team environment
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base of INR 8,000/-, which can increase up to INR 20,000/- depending on performance metrics.
After completion of the internship period, there is a chance to get a full-time opportunity as an AI/ML engineer (Up to 12 LPA).
Preferred Experience:
- Prior experience in roles such as AI Solution Architect, ML Architect, Data Science Architect, or AI/ML intern.
- Exposure to AI-driven startups or fast-paced technology environments.
- Proven ability to operate in dynamic roles requiring agility, adaptability, and initiative.



About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization, DevOps (Docker, Kubernetes)
- Tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Job Title : Applied AI Engineer (AI/ML & LLM Deployment)
Experience : 4+ Years
Location : Remote
Job Summary :
We are looking for an Applied AI Engineer with strong expertise in Large Language Models (LLMs) to design, optimize, and deploy advanced AI solutions. The role focuses on LLM integration, fine-tuning, and performance optimization for scalable production use.
Mandatory Skills :
LLM Integration, Prompt Engineering, LangChain/LlamaIndex, Fine-tuning & LoRA, Model Quantization, Local Deployment, Tokenizers, Performance Optimization.
Key Responsibilities :
- Integrate GPT-4, Claude, and open-source LLMs into applications.
- Design and implement prompt engineering for context-specific, tone-appropriate outputs.
- Build AI pipelines using LangChain, LlamaIndex, or similar frameworks.
- Perform fine-tuning and LoRA (Low-Rank Adaptation) to customize models.
- Apply model quantization (GGML, GPTQ, bitsandbytes) for efficient deployment.
- Deploy models with Hugging Face Transformers, vLLM, llama.cpp.
- Optimize tokenization and inference performance for low latency.
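Model quantization, named in the responsibilities above, trades a small amount of precision for memory and speed. A NumPy sketch of symmetric int8 weight quantization — the idea underlying GGML/GPTQ/bitsandbytes, not their actual implementations:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.02, size=4096).astype(np.float32)  # toy weight tensor

# Symmetric int8 quantization: one scale factor per tensor
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)   # 4x smaller than float32
dequant = q.astype(np.float32) * scale          # approximate reconstruction

max_err = np.abs(weights - dequant).max()
print(q.nbytes, weights.nbytes)  # 4096 vs 16384 bytes
```

Real schemes quantize per channel or per block rather than per tensor, which tightens the error bound where outlier weights would otherwise blow up the scale.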
Required Skills :
- Strong hands-on experience with LLMs (OpenAI GPT-4, Claude, open-source models).
- Proficiency in Prompt Engineering and tone-specific output design.
- Experience with LangChain, LlamaIndex, or other orchestration frameworks.
- Knowledge of fine-tuning, LoRA techniques, and quantization methods.
- Familiarity with deployment frameworks (Transformers, vLLM, llama.cpp).
- Strong understanding of tokenizers and efficient text preprocessing.
- Ability to optimize models for real-time, low-latency inference.
- Solid programming skills in Python and experience with PyTorch/TensorFlow.
Nice to Have :
- Exposure to MLOps tools (Docker, Kubernetes, CI/CD for AI).
- Experience with evaluation frameworks for LLMs.
- Knowledge of multi-modal models (text, vision, audio).
Location: Flexible - Remote or Based in India
Type: Equity-Based Partnership with Investment
Vesting Schedule: 4 years with a 1-year cliff
IMPORTANT: This is an equity-only position with investment until the business starts generating revenue.
About Dwichakra:
Dwichakra is the ultimate lifestyle app for motorcycle enthusiasts, providing everything riders need to fuel their passion for the open road. Our platform offers new routes, epic ride planning, and a vibrant community connection, along with an e-commerce shop, achievement tracking, expense management, and roadside assistance. Committed to AI-powered personalized route catalogs and instant support, Dwichakra ensures a thrilling journey for every motorcyclist.
Founded in 2016, Dwichakra is transforming the Indian two-wheeler industry by delivering an integrated ecosystem that caters to every aspect of a rider's journey. We are rapidly positioning ourselves as a pioneer in comprehensive services for motorcycle enthusiasts across India.
Time Commitment:
- Hours: 15-25 hours per week
- Flexibility: Can be managed alongside a full-time job
- Working Hours: Flexible
Compensation:
- Equity Partnership: Join a growing startup as a key partner
- Leadership Opportunity: Shape technology as a Director
- Future Revenue Sharing: As the business scales
- Potential Full-Time Role: As the company grows
Role Overview:
We are seeking a driven and visionary Director Of Technology to join Dwichakra as an integral contributor and investor. This role is perfect for anyone with entrepreneurial aspirations and a robust technical background who is ready to invest in our shared mission.
As a technologist and partner, you will provide technical leadership and expertise in developing and launching innovative technology solutions. Your contributions will be crucial in enhancing our product offerings and scaling the company.
Key Responsibilities:
- Technology Strategy: Collaborate with founders to define and execute a technology roadmap that aligns with the company’s vision and growth strategy.
- Financial Investment: Commit to a financial contribution to support technology development, product launch, and scaling efforts.
- Technical Leadership: Lead the design and implementation of the platform, ensuring scalability, security, and performance.
- Product Development: Direct the development of new features and enhancements, leveraging AI and other technologies for a superior user experience.
- Team Building: Recruit, mentor, and manage a talented technology team to foster innovation and excellence.
- Decision-Making: Participate in strategic and operational decisions to drive the company forward.
Shared Responsibilities:
- Operations Oversight: Collaborate with the founders to ensure smooth day-to-day technology operations.
- Business Development: Partner with technology vendors, service providers, and stakeholders.
- Industry Representation: Represent Dwichakra at key industry events and technology forums.
- Visionary Leadership: Inspire the technology team to drive innovation and stay ahead of market trends.
Qualifications and Skills:
Essential:
- Proven experience in a senior technology leadership role (CTO or equivalent).
- Strong understanding of software development, AI applications, and mobile app technologies.
- An entrepreneurial mindset with a strategic approach to technology and business goals.
- Willingness and ability to make a financial investment in the company.
- Excellent leadership, collaboration, and problem-solving skills.
- Strong communication and interpersonal abilities.
Preferred:
- Experience in startups, product development, or technology commercialization.
- Familiarity with the motorcycle or automobile industry and market trends.
- A network of potential clients, investors, or technology partners.
Equity and Investment:
- Equity Offering: Based on contributions (financial, expertise, and operational involvement), equity shares will be negotiated with a vesting schedule.
- Investment Requirement: A minimum financial contribution of INR 15-25 lakhs to support technology development and product launch.
If you are an innovative, entrepreneurial-minded technology leader passionate about the startup community, we invite you to join us in shaping the future of Dwichakra. Apply today and help us create the ultimate experience for motorcycle riders and the Indian two-wheeler ecosystem!




Job description
Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist
Location: Remote (Global)
Compensation: $50 USD per month
Company: Meta2 Labs
www.meta2labs.com
About Meta2 Labs:
Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.
As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.
The Opportunity:
We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.
Responsibilities:
- Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
- Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
- Contribute to go-to-market strategies, business development, and partnership opportunities.
- Help shape company culture, structure, and team expansion.
- Be a thought partner and problem-solver in all key strategic discussions.
- Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
- Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.
Ideal Profile:
- Passion for emerging technologies (AI, Web3, XR, etc.).
- Comfortable operating in ambiguity and working lean.
- Strong strategic thinking, communication, and collaboration skills.
- Open to wearing multiple hats and learning as you build.
- Driven by purpose and eager to gain experience in a cutting-edge tech environment.
Commitment:
- Flexible, part-time involvement.
- Remote-first and async-friendly culture.
Why Join Meta2 Labs:
- Join a purpose-led studio at the frontier of tech innovation.
- Help build impactful ventures with real-world value and long-term potential.
- Shape your own role, focus, and future within a decentralized, founder-friendly structure.
- Be part of a collaborative, intellectually curious, and builder-centric culture.
Job Types: Part-time, Internship
Contract length: 3 months
Pay: $50 USD per month (up to ₹5,000.00 per month)
Benefits:
- Flexible schedule
- Health insurance
- Work from home
Work Location: Remote