50+ Machine Learning (ML) Jobs in India
Department: Product & Technology
Location: On-site | Prabhat Road, Pune
Experience: 3-5 Years in a Data Engineering or Analytics Role
Domain: Fintech / Wealth Management — non-negotiable
Compensation: 11-12 LPA Fixed + Performance Bonus
Growth: Title upgrade + salary revision at 12–18 months for strong performers
Why this role is different from most Data Engineer postings
You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.
About Cambridge Wealth
Cambridge Wealth is a fast-growing, award-winning Financial Services and Fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals, and have received multiple awards from major Mutual Fund houses and BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.
What You Will Be Doing
This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.
We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.
Key Responsibilities:
Data Engineering & Pipelines
- Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
- Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
- Write advanced SQL — window functions, stored procedures, query optimization, index design.
- Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
- Monitor AWS RDS workloads and troubleshoot performance issues proactively.
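As a rough illustration of the window-function work described above (table and column names here are hypothetical, and SQLite stands in for PostgreSQL), a running net-invested balance per client can be computed like this:

```python
import sqlite3

# In-memory database standing in for a (hypothetical) transactions table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE txns (
        client_id TEXT,
        txn_date  TEXT,
        amount    REAL   -- positive = purchase, negative = redemption
    )
""")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [("C1", "2024-01-01", 5000.0),
     ("C1", "2024-02-01", 5000.0),
     ("C1", "2024-03-01", -2000.0),
     ("C2", "2024-01-15", 10000.0)],
)

# Window function: running net invested amount per client, ordered by date.
rows = conn.execute("""
    SELECT client_id, txn_date, amount,
           SUM(amount) OVER (
               PARTITION BY client_id
               ORDER BY txn_date
           ) AS running_balance
    FROM txns
    ORDER BY client_id, txn_date
""").fetchall()

for row in rows:
    print(row)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over directly to PostgreSQL, where it would typically feed a materialized view rather than an in-memory table.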
Financial Analytics & Modelling
- Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
- Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
- Create materialized views and derived tables optimized for dashboards and internal reporting tools.
- Analyse client transaction history to surface patterns in investment behaviour and financial discipline.
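The benchmark-comparison logic above reduces to compounding periodic returns; a minimal sketch with illustrative (not real) monthly return figures:

```python
# Toy monthly returns (as fractions) for a client portfolio and a benchmark.
# Values are illustrative, not real market data.
portfolio_returns = [0.021, -0.008, 0.015, 0.030]
benchmark_returns = [0.018, -0.012, 0.011, 0.025]

def cumulative_return(returns):
    """Compound a series of periodic returns into one total return."""
    total = 1.0
    for r in returns:
        total *= (1.0 + r)
    return total - 1.0

port_cum = cumulative_return(portfolio_returns)
bench_cum = cumulative_return(benchmark_returns)
excess = port_cum - bench_cum  # positive => portfolio beat the benchmark

print(f"portfolio: {port_cum:.4f}, benchmark: {bench_cum:.4f}, excess: {excess:.4f}")
```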
Applied ML & AI-Driven Development
- Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
- Implement classification or regression models to support financial pattern detection.
- Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
- Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.
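A minimal sketch of the classification work mentioned above, using the Scikit-learn stack the posting names; the features, labels, and synthetic data are hypothetical, purely to show the shape of such a model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [normalized avg SIP amount, missed-installment rate].
# Label: 1 = client likely to pause investing, 0 = likely to continue.
# Synthetic, clearly separable data purely for illustration.
rng = np.random.default_rng(0)
steady = np.column_stack([rng.uniform(0.6, 1.0, 50), rng.uniform(0.0, 0.1, 50)])
at_risk = np.column_stack([rng.uniform(0.0, 0.4, 50), rng.uniform(0.3, 0.8, 50)])
X = np.vstack([steady, at_risk])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)

# A low SIP amount plus a high missed-installment rate should flag as at-risk (1).
pred = model.predict([[0.2, 0.6]])[0]
print("prediction:", pred)
```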
Data Quality & Governance
- Own data integrity end-to-end in a live, high-stakes financial environment.
- Build and maintain validation and cleaning protocols across all financial datasets.
- Maintain Excel models, Power Query workflows, and structured reporting outputs.
Collaboration & Junior Mentorship
- Work directly with Product, Investment Research, and Wealth Advisory teams.
- Translate open-ended business questions into structured queries and measurable outputs.
- Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
- Present findings clearly to non-technical stakeholders — no jargon, just clarity.
Skills — What We Need vs. What Helps
Must-Haves:
- SQL & PostgreSQL (window functions, stored procedures, optimization)
- Python — Pandas, NumPy for data processing and automation
- ML fundamentals — classification or regression (Scikit-learn)
- AWS RDS or equivalent cloud database experience
- Financial domain knowledge — mutual funds, SIPs, portfolio concepts
- Python data visualization — Matplotlib, Seaborn, or Plotly
Strong Advantage:
- Excel — Power Query, advanced modelling
- Materialized views, query planning, index optimization
- Experience with BI/dashboard tools
Good to Have:
- NoSQL databases
- Prior fintech or wealth management startup experience
Financial Domain — Non-Negotiable
This is a wealth management platform. You must come in with a working understanding of:
- Mutual fund structures, scheme types, and NAV-based transactions
- Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
- Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
- How HNI/NRI clients interact with financial products differently from retail investors
You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.
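For readers new to the NAV-based mechanics listed above, a minimal sketch with made-up NAVs (not real fund data): each SIP installment buys units at that month's NAV, and current value is total units times the latest NAV.

```python
# Illustrative SIP: Rs 5,000 invested monthly at varying (made-up) NAVs.
sip_amount = 5000.0
navs = [50.0, 48.0, 52.0, 55.0]   # NAV on each installment date
latest_nav = 56.0

units = sum(sip_amount / nav for nav in navs)   # units bought each month
invested = sip_amount * len(navs)
current_value = units * latest_nav

print(f"units: {units:.4f}, invested: {invested:.2f}, value: {current_value:.2f}")
```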
The Culture Fit — Read This Carefully
We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:
- Has worked at a small startup before and is used to wearing multiple hats
- Finds broken or slow data systems genuinely irritating and fixes them without being asked
- Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
- Is comfortable saying 'I don't know but I'll find out' and follows through independently
- Wants visibility and ownership, not just a well-defined job description
- Is looking for a role where strong performance is directly visible and rewarded
Growth Path — What Happens If You Perform
This is not a vague 'growth opportunity' pitch.
If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.
Preferred Background
- 2–4 years in a data engineering or analytics role at a startup or small Fintech
- Experience in a live product environment where data errors have real consequences
- Exposure to portfolio analytics, investment research, or wealth management platforms
- Has mentored or reviewed code for at least one junior team member
Hiring Process
We respect your time. The process is direct and moves fast.
- Screening Questions — 5 minutes online
- Online Challenge — MCQs (Data, SQL, AWS, etc.), one applied ML or analytics problem, and a communication and personality assessment (focused, not trick questions)
- People Round — 30-minute video call, culture and communication
- Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
- Founder's Interview — 1 hour in person, growth conversation and mutual fit
- Offer & Background Verification
We are looking for a highly skilled AI Platform Engineer to build and scale agentic AI capabilities across our product suite. You’ll work on multi‑agent systems, orchestration platforms, RAG pipelines, and real‑time AI services used in enterprise workflows.
🔹 Key Responsibilities
- Build and maintain AI platform services enabling agentic workflows
- Develop domain-specific agents (proposal generation, compliance, data analysis)
- Implement multi-agent orchestration using LangGraph and related frameworks
- Build APIs, SDKs, and integration layers for product teams
- Design and optimize RAG, GraphRAG, and knowledge ingestion pipelines
- Enhance orchestration platforms, WebSocket communication, and error recovery
- Optimize performance, latency (<3s), cost, and reliability of AI systems
- Collaborate closely with ML engineers and data scientists on models, prompts, and A/B testing
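The retrieval step of the RAG pipelines mentioned above can be sketched as ranking documents by vector similarity to the query. This toy version uses bag-of-words vectors; a real pipeline would use an embedding model and a vector database instead:

```python
import math
from collections import Counter

# Toy stand-in for learned embeddings: bag-of-words vectors.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Retrieval: rank candidate documents by similarity to the query, take the top hit.
docs = [
    "proposal generation workflow for enterprise clients",
    "compliance checklist for data retention policies",
    "quarterly revenue analysis dashboard",
]
query = "generate a client proposal"

ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
top = ranked[0]
print("retrieved:", top)
```

The retrieved chunk(s) would then be stuffed into the LLM prompt as context, which is the "generation" half of RAG.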
🔹 Required Skills & Experience
- 5+ years of software engineering experience (1–2+ years in AI/ML systems)
- Strong Python (FastAPI, async, LangChain/LangGraph)
- Experience with LLM APIs (OpenAI, Claude, Llama, Phi‑3)
- Hands-on RAG, embeddings, vector databases, and hybrid search
- React + TypeScript experience (WebSockets, hooks, real-time UI)
- Knowledge of multi-agent systems, prompt engineering, and orchestration patterns
- Solid backend fundamentals: REST APIs, databases, auth, testing, Git
🔹 Nice to Have
- MLOps exposure (prompt versioning, monitoring, A/B testing)
- Experience with semantic caching and context management
- Docker and cloud deployment basics
Senior Quality Engineer – AI Products
Full-time
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes
About The Nexora Group Inc.
The Nexora Group Inc. is a technology-driven organization focused on developing intelligent software solutions using Artificial Intelligence, Machine Learning, and advanced data technologies. Our teams work on innovative projects involving data analysis, predictive modeling, automation systems, and AI-powered applications designed to solve real-world business problems.
We are seeking passionate and motivated Artificial Intelligence & Machine Learning Interns who want to gain hands-on experience working on practical AI development projects.
Internship Responsibilities
- Assist in developing and implementing machine learning models
- Work on data preprocessing, data analysis, and model training
- Support AI projects involving predictive analytics, automation, and intelligent systems
- Use Python libraries such as NumPy, Pandas, Scikit-learn, or TensorFlow
- Participate in testing and improving model performance
- Collaborate with development teams on AI-based applications
- Document project workflows and research findings
Required Skills
- Basic knowledge of Python programming
- Understanding of Machine Learning concepts
- Familiarity with data analysis and statistics
- Basic experience with libraries such as Pandas, NumPy, or Scikit-learn
- Interest in Artificial Intelligence technologies
- Good analytical and problem-solving skills
Preferred Qualifications
- Students or recent graduates in Computer Science, Artificial Intelligence, Data Science, or related fields
- Basic knowledge of Deep Learning or Neural Networks
- Familiarity with TensorFlow, PyTorch, or similar frameworks is a plus
- Understanding of data visualization tools is beneficial
- Experience with Git or version control systems is an advantage
What Interns Will Gain
- Hands-on experience working on AI and machine learning projects
- Exposure to real-world datasets and model development
- Opportunity to build a strong AI project portfolio
- Mentorship from experienced developers and data scientists
- Internship completion certificate based on performance and participation
Machine Learning Engineer – Computer Vision
Company: Atna.ai
Location: Chennai
Experience: 3–6 years
About Atna.ai
Atna.ai is building an AI platform focused on detecting deepfakes, document tampering, and synthetic fraud across digital ecosystems.
As generative AI becomes more powerful, verifying the authenticity of digital content is becoming critical for enterprises, financial institutions, and governments.
Role Overview
We are looking for a Machine Learning Engineer with strong experience in computer vision and deep learning to help build next-generation AI systems for detecting manipulated media.
You will work closely with the founding team to design, train, and deploy models capable of detecting deepfakes, tampered images, and synthetic documents.
This role involves a mix of research, experimentation, and production ML engineering.
What You’ll Work On
- Building ML models to detect deepfakes and synthetic media
- Developing algorithms for document tampering and image forensics
- Designing pipelines for data processing, feature extraction, and model training
- Experimenting with state-of-the-art deep learning architectures
- Improving model robustness against adversarial manipulation
- Deploying models as production-ready APIs or microservices
- Optimizing models for performance and latency
Required Skills
Technical Skills:
- Strong programming skills in Python
- Experience with PyTorch or TensorFlow
- Strong understanding of deep learning and neural networks
- Experience with computer vision / image processing
- Familiarity with tools such as OpenCV, NumPy, Pandas
Machine Learning
- Training and evaluating deep learning models
- Feature engineering and data preprocessing
- Model optimization and experimentation
Engineering
- Experience building production ML pipelines
- Familiarity with Docker, APIs, and cloud environments
Nice to have
- Experience with deepfake detection or media forensics
- Experience working with GANs or diffusion models
- Background in computer vision research
- Publications or projects related to CVPR / ICCV / NeurIPS
- Experience working with large-scale image datasets
About CK-12 Foundation
CK-12’s mission is to provide free access to open-source content and technology tools that empower both students and teachers to enhance learning across different styles, resources, competence levels, and circumstances.
To achieve this ambitious vision, CK-12 challenges the traditional education model by leveraging technology to revolutionize learning for students, teachers, and parents.
CK-12 operates as a non-profit organization so it can experiment with bold ideas and focus on doing the right thing for education. The organization is backed by Vinod Khosla, a renowned technology venture capitalist.
At CK-12, you’ll work in a dynamic, entrepreneurial, and innovative environment where passionate individuals collaborate to disrupt traditional education through technology.
Technology is at the heart of scaling education, and CK-12 builds solutions on a cloud-based (AWS) and AI-first platform delivering rich and interactive learning experiences.
If you are a great technologist who enjoys challenging the status quo and building innovative products, this could be the place for you.
Together, we aim to transform education globally.
Product Offerings
Flexi 2.0 – AI-Powered Student Tutor
AI-Powered Teacher Assistant
https://www.ck12.org/pages/teacher-assistant/
Core Responsibilities
• Translate high-level directions and open-ended product ideas into deliverable ML projects and drive their completion.
• Architect and implement highly scalable ML solutions for systems such as multimodal information retrieval, conversational chatbots, recommender systems, and ranking systems.
• Own end-to-end product delivery from research and experimentation to production deployment.
• Work closely with cross-functional teams including Product, Engineering, DevOps, QA, and Content teams.
• Manage ML workflows involving data gathering, working with annotators, and collaborating with ML researchers.
• Extract and analyze large volumes of data to generate insights about student and teacher behavior based on platform usage.
• Design and build innovative ML-driven solutions that can improve learning experiences in the EdTech space.
• Apply statistical hypothesis testing and experimentation to evaluate and improve models.
• Continuously innovate and challenge the traditional approach to education through ML solutions.
Requirements
• Bachelor’s degree or higher in Computer Science or a related quantitative discipline, or equivalent practical experience.
• 4+ years of hands-on development experience with strong programming skills, preferably in Python.
• Expertise in deep learning approaches for NLP including transformer-based models, predictive modeling, search and recommendation systems, and autoregressive models.
• 2+ years of experience in NLP applications such as information retrieval, chatbots, summarization, or generative models.
• Proven experience building scalable ML applications on cloud infrastructure such as AWS, GCP, or Azure.
• Strong understanding of trade-offs between model architecture, deployment costs, and model accuracy.
• Ability to manage multiple tasks and collaborate effectively with geographically distributed teams.
• Up-to-date knowledge of advancements in NLP and computer vision and the ability to apply them in the education domain.
Technical Skills
• Python, PyTorch, TorchServe
• Pandas
• SQL and NoSQL databases such as MySQL, MongoDB, Redis, and Redshift
• Cloud infrastructure (AWS / GCP / Azure)
• Vector databases and search technologies such as Elasticsearch
• Linux
Nice to Have
• Familiarity with Reinforcement Learning
• Experience with Deep Knowledge Tracing
Job Title: AI Native Operations Expert – Director / AVP / VP
Company: EOSGlobe
CTC: ₹24 – ₹36 LPA
Open Positions: 3
Experience: 12 – 18 Years
Joining: Immediate Joiners Preferred
Role Overview
EOSGlobe is transforming into an AI-First organization and is looking for an AI Native Operations Expert to lead this transformation. The role focuses on driving automation, process re-engineering, and AI adoption across BPM operations to improve efficiency, scalability, and business impact.
Key Responsibilities
Lead AI-driven transformation initiatives across BPM operations.
Re-engineer processes using Artificial Intelligence, Machine Learning, and automation tools.
Collaborate with leadership and strategy teams to implement AI-first operational models.
Define and track KPIs, productivity metrics, and financial impact of transformation initiatives.
Partner with internal teams and clients to demonstrate AI-driven efficiency and revenue growth.
Identify opportunities for process automation and digital adoption across operations.
Required Skills
Strong expertise in Artificial Intelligence (AI), Machine Learning (ML), and RPA.
Experience in process transformation and digital automation initiatives.
Deep understanding of BPM operations and service delivery models.
Strong leadership and stakeholder management skills.
Analytical mindset with ability to measure financial impact and operational KPIs.
Preferred Qualifications
Experience leading large-scale automation or AI transformation projects.
Exposure to BPM, consulting, or operations leadership roles.
Excellent communication and strategic thinking skills.
Job Title : Principal Backend Engineer (AI-Driven)
Experience : 10+ Years
Location : Chandigarh
Tech Stack : PHP, Node.js, Laravel, MySQL, MongoDB
Additional Requirement : Hands-on experience with AI technologies, APIs, or ML integrations
Role Overview :
We're looking for a Principal Backend Engineer (AI-Driven) to design and lead scalable backend systems while driving AI adoption across products.
The role involves integrating AI-powered features, architecting intelligent systems, and mentoring engineering teams on modern backend and AI implementation.
Key Responsibilities :
- Design and lead backend architecture using PHP (Laravel/CodeIgniter) and Node.js
- Build scalable microservices / modular backend systems
- Develop APIs and backend workflows for AI-driven features
- Integrate AI APIs (OpenAI, LangChain or similar frameworks)
- Work with LLMs, embeddings, vector databases, and AI pipelines
- Ensure performance, scalability, and security of backend systems
- Mentor engineering teams and drive backend + AI best practices
Requirements :
- 10+ years of backend development experience
- Strong expertise in PHP / Node.js, MySQL, MongoDB
- Hands-on experience integrating AI/ML APIs or AI-powered features
- Strong system design and architecture skills
- Experience leading engineering teams
Good to Have :
- Prompt engineering or AI cost optimization
- Exposure to MLOps / ML pipelines
What You’ll Do
● Partner with Product to spot high-leverage ML opportunities tied to business metrics.
● Wrangle large structured and unstructured datasets; build reliable features and data contracts.
● Build and ship models to:
○ Enhance customer experiences and personalization
○ Boost revenue via pricing/discount optimization
○ Power user-to-user discovery and ranking (matchmaking at scale)
○ Detect and block fraud/risk in real time
○ Score conversion/churn/acceptance propensity for targeted actions
● Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
● Design and run A/B tests with guardrails.
● Build monitoring for model/data drift and business KPIs
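One common way to quantify the data drift mentioned in the last bullet is the Population Stability Index (PSI) between a training-time score distribution and the live one. A minimal sketch, with illustrative bin proportions:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.

    Both inputs are lists of bin proportions that each sum to 1.
    Rule of thumb (varies by team): PSI < 0.1 ~ stable, > 0.25 ~ major drift.
    """
    eps = 1e-6  # guard against empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at training time
stable   = [0.24, 0.26, 0.25, 0.25]   # roughly the same in production
drifted  = [0.10, 0.15, 0.30, 0.45]   # mass shifted toward high scores

print(f"stable: {psi(baseline, stable):.4f}, drifted: {psi(baseline, drifted):.4f}")
```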
What We’re Looking For
● 2–4 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
● Proven, hands-on success in at least two (preferably 3–4) of the following:
○ Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
○ Fraud/risk detection (severe class imbalance, PR-AUC)
○ Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
○ Propensity models (payment/churn)
● Programming: strong Python and SQL; solid git, Docker, CI/CD.
● Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
● ML breadth: recommender systems, NLP or user profiling, anomaly detection.
● Communication: clear storytelling with data; can align stakeholders and drive decisions.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Bachelor's or Master's degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming, with expertise in data engineering and machine learning deployment
● Experience in databases including MySQL and NoSQL
● Experience in developing and maintaining critical and high availability systems will be given strong preference
● Experience in software design using design principles and architectural modeling.
● Experience working with AWS cloud platform.
● Strong analytical and data driven approach to problem solving
Role Overview
We are seeking an experienced Machine Learning Engineer to design, develop, deploy, and scale machine learning solutions in production environments. The ideal candidate will have strong expertise in model development, MLOps, backend integration, and scalable ML system architecture.
Key Responsibilities
- Design, train, and deploy machine learning and deep learning models for real-world applications.
- Build and scale ML models and pipelines to handle large datasets and high-traffic production workloads.
- Perform data exploration, preprocessing, feature engineering, and dataset optimization.
- Develop and integrate APIs for ML model consumption using frameworks such as FastAPI, Flask, or Django.
- Implement end-to-end ML workflows, including experimentation, versioning, deployment, and monitoring.
- Build scalable data pipelines and processing workflows for batch and real-time use cases.
- Monitor model performance, drift, and system reliability in production.
- Collaborate with data engineers, DevOps, and product teams to productionize ML solutions.
Required Skills & Qualifications
- 5+ years of experience in Machine Learning Engineering or related roles.
- Proven experience in training, developing, deploying, and scaling ML models in production.
- Strong knowledge of data exploration, preprocessing, and feature engineering techniques.
- Proficiency in Python and ML/Data Science libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch.
- Experience with API development and backend integration (FastAPI, Flask, Django).
- Solid understanding of MLOps tools and practices, including MLflow, Kubeflow, Airflow, Prefect, Docker, and Kubernetes.
- Experience building scalable data pipelines, processing workflows, and monitoring deployed models.
We are seeking an experienced Python Lead to design, develop, and scale high-performance backend systems. The ideal candidate will have strong expertise in Python-based backend development, system design, and cloud-native architectures. You will lead the development of scalable APIs, work with modern cloud platforms, and collaborate with cross-functional teams to deliver reliable and efficient applications.
Key Responsibilities
- Design and develop scalable backend services using Python (Django/Flask).
- Build and maintain RESTful APIs and WebSocket-based applications.
- Implement efficient algorithms, data structures, and design patterns for high-performance systems.
- Develop and optimize database schemas and queries using PostgreSQL, MySQL, or MongoDB.
- Integrate caching and queuing systems to improve system performance and reliability.
- Deploy and manage applications on AWS or GCP cloud environments.
- Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions.
- Work with Docker containers and Linux-based environments for development and deployment.
- Collaborate with engineering teams to design scalable system architectures.
- Explore and integrate AI-driven capabilities such as RAG, LLMs, and vector databases where applicable.
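The caching and queuing point above can be sketched with stdlib stand-ins; in production this would typically be Redis (cache) and RabbitMQ/SQS (queue), and the names below are hypothetical:

```python
import queue
import functools

@functools.lru_cache(maxsize=256)
def expensive_lookup(user_id):
    # Stand-in for a slow DB query; result is cached per user_id.
    return {"user_id": user_id, "plan": "pro"}

jobs = queue.Queue()

def enqueue(task_name, payload):
    jobs.put((task_name, payload))

def worker_drain():
    """Process everything currently queued (a real worker would loop forever)."""
    done = []
    while not jobs.empty():
        task_name, payload = jobs.get()
        done.append(task_name)
        jobs.task_done()
    return done

enqueue("send_email", {"to": "user@example.com"})
enqueue("reindex", {"doc_id": 42})
processed = worker_drain()
print(processed)

first = expensive_lookup(7)
second = expensive_lookup(7)  # second call is served from the cache
```

The pattern is the same with external systems: the cache short-circuits repeated reads, and the queue decouples request handling from slow background work.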
Required Skills
- Strong expertise in Python backend development using Django or Flask
- Experience with REST APIs, WebSockets, and microservices architecture
- Solid knowledge of design patterns, algorithms, and data structures
- Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)
- Hands-on experience with AWS or GCP cloud services
- Experience with CI/CD pipelines and containerization (Docker)
- Proficiency in Git and Linux environments
Preferred Skills
- Familiarity with AI/ML concepts
- Experience with RAG architectures and LLM integrations
- Knowledge of vector databases such as Pinecone or ChromaDB
What We’re Looking For
- Strong problem-solving and system design skills
- Ability to lead backend development initiatives
- Experience building scalable and production-grade systems
- Excellent collaboration and communication skills
Description
We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years’ experience).
Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:
- Machine Learning model development
- Scalable data pipeline development (ETL/ELT)
- Python and SQL
- Cloud platforms such as Azure/AWS/Databricks
- ML deployment environments (SageMaker, Azure ML, etc.)
Kindly note:
- Location: Pune (Work From Office)
- Immediate joiners preferred
While sharing profiles, please ensure the following details are included:
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Confirmation on Pune WFO comfort
Must have skills
Machine Learning - 6 years
Python - 6 years
ETL (Extract, Transform, Load) - 6 years
SQL - 6 years
Azure - 6 years

Business Intelligence & Digital Consulting company
Description
JOB DESCRIPTION – SENIOR ANALYST – DATA SCIENTIST
Key Responsibilities
- Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
- Advanced skills with statistical programming in Python and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
- Solid understanding of time-series forecasting techniques
- Good hands-on skills in both feature engineering and hyperparameter optimization
- Able to write clean and tested code that can be maintained by other software engineers
- Able to clearly summarize and communicate data analysis assumptions and results
- Able to craft effective data pipelines to transform your analyses from offline to production systems
- Self-motivated and a proactive problem solver who can work independently and in teams
- Connects both externally and internally to understand industry trends, technology advances and outstanding processes or solutions
- Is collaborative and engages both strategically and tactically; able to influence without authority, handle complex issues and implement positive change
- Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
- Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
- Provide guidance and leadership to more junior data scientists, managing processes and flow of work, vetting designs, and mentoring team members to realize their full potential
- Lead discussions at peer review and use interpersonal skills to positively influence decision making
- Provide subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
- Facilitate cross-geography sharing of new ideas, learnings, and best practices
What We Are Looking For
Required Qualifications
- Master's degree in a quantitative field such as Data Science, Statistics, or Applied Mathematics, or Bachelor's degree in engineering, computer science, or a related field
- 4-6 years of total work experience as a data scientist or in an analytical role, with at least 2-3 years of experience in time series forecasting
- A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project
- Strong experience in Time Series Forecasting and Demand Planning
- Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
- Good hands-on skills in both feature engineering and hyperparameter optimization
- Experience producing high-quality code, tests, and documentation
- Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
- Proficiency in statistical concepts and ML algorithms
- Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
- Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results
- Self-motivated and a proactive problem solver who can work independently and in teams
- Outstanding verbal and written communication skills with the ability to effectively advocate technical solutions to engineering and business teams
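As a concrete flavor of the forecasting work described above, here is a minimal, self-contained sketch: building lag features from a series and comparing a last-value baseline against a seasonal-naive baseline by mean absolute error. The series, lags, and seasonal period are made up purely for illustration.

```python
# Minimal time-series sketch: lag features and naive baselines (toy data).

def lag_features(series, lags):
    """Return (feature_row, target) pairs where each row holds lagged values."""
    rows = []
    max_lag = max(lags)
    for t in range(max_lag, len(series)):
        rows.append(([series[t - lag] for lag in lags], series[t]))
    return rows

def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy demand series with an exact weekly pattern (period = 7).
series = [10, 12, 15, 14, 13, 20, 25] * 8

# Last-value baseline: tomorrow equals today.
naive = series[:-1]
# Seasonal-naive baseline: tomorrow equals the value one week ago.
seasonal = series[:-7]

print(mae(series[1:], naive))     # last-value baseline error
print(mae(series[7:], seasonal))  # seasonal baseline error (0.0 on this toy series)
```

On real data the seasonal-naive error would not be zero, but beating both baselines is the usual first sanity check for any forecasting model.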
Desired Qualifications
- Experience working in one or multiple supply chain functions (e.g., procurement, planning, manufacturing, quality, logistics) is strongly preferred
- Experience in applying AI/ML within a CPG or Healthcare business environment is strongly preferred
- Experience in creating CI/CD pipelines for deployment using Jenkins
- Experience implementing an MLOps framework, along with an understanding of data security
- Implementation of ML models
- Exposure to visualization packages and the Azure tech stack
Must have skills
Python - 2 years
Data Science - 4 years
SQL - 2 years
Machine Learning - 2 years
Nice to have skills
Data Analysis - 4 years
Time Series Forecasting - 2 years
Demand Planning - 2 years
Hadoop - 2 years
Statistical concepts - 2 years
Supply chain functions - 2 years
Hi,
Title: Senior AI/ML Engineer
Experience: 5-10+ Years
Location: Bengaluru
Work Type: Hybrid (2 days work from office)
Type of Hire: PwD & Non-PwD Inclusive Hiring
Employment Type: Full Time
Notice Period: Immediate Joiner
Workdays: Mon-Fri
Role Overview
We are seeking an exceptional AI Engineer who can design and build production-grade AI systems that combine advanced machine learning, Generative AI, and scalable software engineering.
This role goes beyond traditional data science and focuses on building end-to-end AI platforms, autonomous AI agents, intelligent decision systems, and enterprise AI applications.
You will work on real-world enterprise problems across industries, developing AI systems that automate reasoning, prediction, and decision-making at scale.
What You Will Build
Examples of systems you may work on:
• AI Copilots for enterprise workflows
• Autonomous AI agents for automation
• Decision intelligence platforms
• Retrieval-Augmented Generation (RAG) systems
• Predictive ML systems for forecasting and anomaly detection
• AI-powered knowledge assistants
• Intelligent automation platforms
Key Responsibilities
1. Advanced Machine Learning & Predictive Systems
Design and implement ML models including:
• Time series forecasting
• Predictive modeling
• Anomaly detection
• Recommendation systems
• NLP / text intelligence
• Deep learning models
Develop models using:
• PyTorch
• TensorFlow
• Scikit-learn
• XGBoost / LightGBM
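The anomaly-detection work listed above can be illustrated with a minimal rolling z-score detector; the window size, threshold, and readings below are illustrative assumptions, not part of the posting.

```python
# Rolling z-score anomaly detector: flag points that deviate strongly from
# the statistics of the preceding window. Toy data and thresholds only.
from statistics import mean, stdev

def zscore_anomalies(values, window=20, threshold=3.0):
    """Return indices whose value is more than `threshold` standard deviations
    from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(values)):
        ref = values[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

readings = [100 + (i % 5) for i in range(40)]  # steady repeating pattern...
readings[30] = 250                             # ...with one injected spike

print(zscore_anomalies(readings))  # → [30]
```

Production detectors add seasonality handling and robust statistics, but the core idea of comparing each point against a local baseline is the same.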
2. Generative AI & LLM Systems
Build enterprise-grade GenAI applications including:
• AI copilots
• conversational agents
• document intelligence systems
• enterprise knowledge assistants
Develop LLM systems using:
• OpenAI / Claude / Gemini / Llama
• prompt engineering techniques
• embeddings and semantic search
• RAG architectures
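The retrieval step of the RAG architectures mentioned above can be sketched in a few lines. To stay self-contained, a term-frequency vector and cosine similarity stand in for a real embedding model and vector database; the documents and query are invented for illustration.

```python
# Toy RAG retrieval: bag-of-words "embeddings" + cosine similarity stand in
# for a real embedding model and vector store. Illustrative only.
import math
from collections import Counter

def embed(text):
    """Hypothetical embedding: term-frequency vector over lowercased words."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Refund requests are processed within five business days.",
    "Our office is closed on public holidays.",
    "Invoices can be downloaded from the billing portal.",
]
context = retrieve("how do I get a refund", docs)
# The retrieved passage is then injected into the LLM prompt as grounding:
prompt = f"Answer using only this context:\n{context[0]}\nQuestion: how do I get a refund"
print(context[0])
```

In a real system the retrieval index would hold embedding vectors from a model, and the prompt would be sent to an LLM API; the retrieve-then-ground structure is what this sketch shows.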
3. Agentic AI Systems
Design autonomous AI systems capable of reasoning and executing tasks.
Build multi-agent architectures using:
• LangGraph
• CrewAI
• AutoGen
• Semantic Kernel
Integrate agents with:
• APIs
• enterprise data systems
• internal workflows
4. AI Platform Engineering
Develop scalable AI services and applications using:
• Python
• FastAPI / Flask
• asynchronous processing
• distributed compute frameworks
Build production-grade APIs and AI services.
5. Enterprise AI Deployment & MLOps
Deploy AI models into scalable production environments.
Work with:
• Docker
• Kubernetes
• CI/CD pipelines
• MLflow / experiment tracking
• model monitoring and drift detection
Deploy AI solutions on:
• Azure
• AWS
• GCP
6. Data Integration & AI Systems
Work with enterprise data sources including:
• relational databases
• data warehouses (Snowflake, Redshift, BigQuery)
• data lakes (S3 / Azure Data Lake)
• vector databases (Pinecone, Weaviate, FAISS)
Required Skills:
Programming
Expert-level proficiency in:
• Python
• software engineering best practices
• data structures and algorithms
Experience building production-ready systems.
Machine Learning
Strong expertise in:
• supervised learning
• unsupervised learning
• deep learning
• time-series modelling
• model evaluation and optimization
Generative AI
Experience working with:
• LLM APIs
• prompt engineering
• RAG pipelines
• embeddings and vector search
AI Architecture
Ability to design:
• scalable AI systems
• distributed ML systems
• intelligent automation platforms
Preferred Experience
• Building enterprise AI products
• Developing AI copilots or agents
• Designing decision intelligence platforms
• Experience with large-scale data systems
Ideal Candidate Profile
The ideal candidate is:
• A strong ML engineer AND software engineer
• Comfortable building AI systems end-to-end
• Experienced in deploying models to production
• Passionate about next-generation AI architectures
We value builders who ship real systems, not just research prototypes.
Education
Bachelor’s / Master’s in:
Computer Science
Artificial Intelligence
Machine Learning
Data Science
or related field.
Why Join Ampera
At Ampera, we are building AI-native enterprise platforms that transform how organizations use data and intelligence.
Engineers at Ampera work on:
• real-world enterprise AI systems
• cutting-edge GenAI and agentic architectures
• global enterprise clients across industries
• high-impact AI platforms that scale.
What Makes This Role Unique
You will help build the next generation of enterprise AI systems — where AI moves beyond prediction and becomes an autonomous decision-making layer for organizations.
About Ampera:
Ampera Technologies is a purpose-driven Digital IT Services company with a primary focus on supporting our clients with their Data, AI/ML, Accessibility, and other Digital IT needs. We also ensure that equal opportunities are provided to Persons with Disabilities talent. Ampera Technologies has its Global Headquarters in Chicago, USA, and its Global Delivery Center is based out of Chennai, India. We are actively expanding our Tech Delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) Hybrid and remote work options, 2) Opportunity to work directly with our Global Enterprise Clients, 3) Opportunity to learn and implement evolving technologies, 4) Comprehensive healthcare, and 5) A conducive environment for Persons with Disability talent, meeting physical and digital accessibility standards.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
About MyOperator
MyOperator is a Business AI Operator and category leader that unifies WhatsApp, Calls, and AI-powered chatbots & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement.
Role Summary
We’re hiring a Front Deployed Engineer (FDE)—a customer-facing, field-deployed engineer who owns the end-to-end delivery of AI bots/agents.
This role is “frontline”: you’ll work directly with customers (often onsite), translate business reality into bot workflows, do prompt engineering + knowledge grounding, ship deployments, and iterate until it works reliably in production.
Think: solutions engineer + implementation engineer + prompt engineer, with a strong bias for execution.
Responsibilities
Requirement Discovery & Stakeholder Interaction
- Join customer calls alongside Sales and Revenue teams.
- Ask targeted questions to understand business objectives, user journeys, automation expectations, and edge cases.
- Identify data sources (CRM, APIs, Excel, SharePoint, etc.) required for the solution.
- Act as the AI subject-matter expert during client discussions.
Use Case & Solution Documentation
- Convert discussions into clear, structured use case documents, including:
- Problem statement & goals.
- Current vs. proposed conversational flows.
- Chatbot conversation logic, integrations, and dependencies.
- Assumptions, limitations, and success criteria.
Customer Delivery Ownership
Own deployment of AI bots for customer use-cases (lead qualification, support, booking, etc.). Run workshops to capture processes, FAQs, edge cases, and success metrics. Drive the go-live process from requirements through monitoring and improvement.
Prompt Engineering & Conversation Design
Craft prompts, tool instructions, guardrails, fallbacks, and escalation policies for stable behavior. Build structured conversational flows: intents, entities, routing, handoff, and compliant responses. Create reusable prompt patterns and "prompt packs."
Testing, Debugging & Iteration
Analyze logs to find failure modes (misclassification, hallucination, poor handling). Create test sets ("golden conversations"), run regressions, and measure improvements. Coordinate with Product/Engineering for platform needs.
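The "golden conversations" regression loop above can be sketched as follows; the toy bot, intents, and test cases are hypothetical stand-ins for a real deployment.

```python
# Golden-conversation regression sketch: replay canned cases against the bot
# and report pass rate. The bot here is a toy intent router, for illustration.

def bot(message):
    """Toy intent router standing in for the deployed bot."""
    text = message.lower()
    if "refund" in text:
        return "refund_flow"
    if "human" in text or "agent" in text:
        return "escalate"
    return "fallback"

# Golden set: (user message, expected bot behavior).
golden = [
    ("I want a refund", "refund_flow"),
    ("let me talk to a human", "escalate"),
    ("asdfgh", "fallback"),
]

def run_regression(cases):
    """Run every golden case; return pass rate and the list of failures."""
    failures = [(msg, expected, bot(msg))
                for msg, expected in cases if bot(msg) != expected]
    pass_rate = 1 - len(failures) / len(cases)
    return pass_rate, failures

rate, failures = run_regression(golden)
print(f"pass rate: {rate:.0%}, failures: {failures}")
```

After any prompt or workflow change, rerunning the golden set catches regressions before they reach customers; failures point directly at the transcript that needs a fix.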
Integrations & Technical Coordination
Integrate bots with APIs/webhooks (CRM, ticketing, internal tools) to complete workflows. Troubleshoot production issues and coordinate fixes/root-cause analysis.
What Success Looks Like
- Customer bots go live quickly and show high containment + high task completion with low escalation.
- You can diagnose failures from transcripts/logs and fix them with prompt/workflow/knowledge changes.
- Customers trust you as the “AI delivery owner”—clear communication, realistic timelines, crisp execution.
Requirements (Must Have)
- 2–5 years in customer-facing delivery roles: implementation, solutions engineering, customer success engineering, or similar.
- Hands-on comfort with LLMs and prompt engineering (structured outputs, guardrails, tool use, iteration).
- Strong communication: workshops, requirement capture, crisp documentation, stakeholder management.
- Technical fluency: APIs/webhooks concepts, JSON, debugging logs, basic integration troubleshooting.
- Willingness to be front deployed (customer calls/visits as needed).
Good to Have (Nice to Have)
- Experience with chatbots/voicebots, IVR, WhatsApp automation, conversational AI platforms with at least a couple of projects.
- Understanding of metrics like containment, resolution rate, response latency, CSAT drivers.
- Prior SaaS onboarding/delivery experience in mid-market or enterprises.
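The containment and resolution-rate metrics mentioned above reduce to simple ratios over conversation records; the field names and records below are assumptions for illustration, not MyOperator's actual schema.

```python
# Bot-quality metrics from conversation records (illustrative schema).

conversations = [
    {"resolved": True,  "escalated_to_human": False},
    {"resolved": True,  "escalated_to_human": False},
    {"resolved": False, "escalated_to_human": True},
    {"resolved": True,  "escalated_to_human": True},
]

def containment(convs):
    """Share of conversations handled end-to-end without human escalation."""
    return sum(not c["escalated_to_human"] for c in convs) / len(convs)

def resolution_rate(convs):
    """Share of conversations marked resolved, regardless of who resolved them."""
    return sum(c["resolved"] for c in convs) / len(convs)

print(containment(conversations))      # → 0.5
print(resolution_rate(conversations))  # → 0.75
```

Tracking both together matters: high containment with low resolution usually means the bot is trapping users rather than helping them.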
Working Style & Traits We Value
- High agency: you don’t wait for perfect specs—you create clarity and ship.
- Customer empathy + engineering discipline.
- Strong bias for iteration: deploy → learn → improve.
- Calm under ambiguity (real customer environments are chaotic by default).
Job Title: Tech Lead
Location: Gachibowli, Hyderabad
Required Skills/Experience:
• 6+ years of experience in designing and developing enterprise and/or consumer-facing applications using technologies and frameworks like JavaScript, Node.js, ReactJS, Angular, SCSS, CSS, and React Native.
• 2+ years of experience in leading teams (guiding, designing, and tracking tasks) and taking responsibility for delivering projects as per agreed schedules.
• Hands-on experience with SQL and NoSQL databases.
• Hands-on experience working in Linux OS environments.
• Strong debugging, troubleshooting, and problem-resolution skills.
• Experience in developing responsive and scalable web applications.
• Good communication skills (verbal and written) to effectively interact with customers and internal teams.
• Ability and interest in learning new technologies and adapting to evolving technical requirements.
• Experience working in the complete product development lifecycle (prototyping, development, hardening, testing, and deployment).
• Exposure to AI/ML concepts and ability to integrate AI-based features into applications.
• Experience using AI tools such as ChatGPT, GitHub Copilot, Gemini, or similar tools for improving development productivity, automation, and documentation.
Additional Skills/Experience:
• Working experience with Python and NoSQL databases such as MongoDB and Cassandra.
• Exposure to AI, Machine Learning (ML), Natural Language Processing (NLP), and Predictive Analytics domains.
• Familiarity with modern AI frameworks or APIs and experience integrating AI-powered capabilities into applications is a plus.
• Eagerness to participate in product functional design and user experience discussions.
• Familiarity with internationalization (i18n) and the latest trends in UI/UX design.
• Experience implementing payment gateways applicable across different countries.
• Experience with CI/CD pipelines and tools such as Jenkins, Nginx, and related DevOps practices.
Educational Qualification:
• B.Tech / M.Tech in Computer Science Engineering (CSE), Information Technology (IT), Electronics & Communication Engineering (ECE), Artificial Intelligence (AI), Machine Learning (ML), or Data Science (DS) from a recognized university.
Job Title: AI Analyst (Fresher / Associate)
Experience: 0 to 3 Years
Location: Andheri West, Mumbai (Onsite)
Reporting To: AI Architect
Employment Type: Full-Time
About the Role:
We are hiring an AI Analyst to work with enterprise clients on the assessment, design, and validation of AI systems. This is a hands-on role at the intersection of business, technology, and responsible AI, focused on building production-ready, scalable, and governed AI solutions aligned with real business outcomes.
Mandatory Skills:
Artificial Intelligence (AI), Large Language Models (LLM), AI Agents, Generative AI, Machine Learning basics, Python, Prompt Engineering, Analytical Thinking.
Key Responsibilities:
- Review existing AI workflows, agents, and LLM usage to identify risks, gaps, and inefficiencies.
- Support the design of AI agent workflows aligned with business requirements.
- Help implement AI guardrails, governance frameworks, and safety mechanisms.
- Design evaluation and validation frameworks to test accuracy, reliability, and cost efficiency.
- Support AI pilot launches and production readiness.
- Communicate AI system behavior and insights to technical and non-technical stakeholders.
Required Skills:
- Strong analytical and systems thinking.
- Exposure to LLMs, AI agents, or AI workflows.
- Ability to translate business requirements into AI solutions.
- Good problem-solving and communication skills.
- Comfortable working in fast-paced environments.
Preferred:
- Consulting or client-facing experience.
- Exposure to enterprise AI deployments or regulated environments.
Education:
- Degree in Computer Science, Engineering, AI, or Data Science preferred.
- Strong practical AI skills are also valued.
Why Join Us:
- Work on real-world AI systems with enterprise clients, gain exposure to production AI and responsible AI deployment, and build a strong foundation in Applied AI and AI Systems Architecture.
Roles:
-Working on the full stack development (Both Front-end and Back-end)
-Working on any one of the following technologies:
• Java Application Programming
• Web Development with PHP
• Python Application Programming with Django
• Machine Learning
• Data Science
• Artificial Intelligence
• Cyber Security
Eligibility: BCA/MCA 2026/2027 students can apply
Duration: 1-6 months
Perks:
Internship Experience Certificate
Letter of Recommendation
Mode of internship: Online/Offline
Job Responsibilities
- Help develop an analytics platform that integrates insights from diverse data sources.
- Build, deploy, and test machine learning and classification models
- Train and retrain systems when necessary
- Design experiments, train and track performance of machine learning and AI models that meet specific business requirements
- ML and data labelling: Identify ways to gather and build training data with automated data labelling techniques and the creation of highly accurate training datasets.
- Automatic extraction of causal knowledge from diverse information sources such as databases, news, social media, videos and images etc.
- Develop customized machine learning solutions including data querying and knowledge extraction.
- Develop and implement approaches for extracting patterns and correlations from both internal and external data sources using time series machine learning models.
- Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Distil insights from complex data, communicating findings to technical and non-technical audiences.
- Develop, improve, or expand in-house computational pipelines, algorithms, models, and services used in crop product development.
- Constantly document and communicate results of research on data mining, analysis, and modelling approaches.
- 5+ years of experience working with NLP and ML technologies.
- Proven experience as a Machine Learning Engineer or similar role.
- Experience with NLP/ML frameworks and libraries
- Proficient with Python scripting language
- Background in machine learning frameworks like TensorFlow, PyTorch, Scikit Learn, etc.
- Demonstrated experience developing and executing machine learning, deep learning, data mining and classification models. Conversant with the latest in NLP and NLU models including transformer architectures and in creating explainable AI
- Ability to communicate the advantages and disadvantages of choosing specific models to various stakeholders
- Proficient with relational databases and SQL. Ability to write efficient queries and optimize the storage and retrieval of data within the database. Experience with creating and working on APIs, Serverless architectures and Containers.
- Creative, innovative, and strategic thinking; willingness to be bold and take risks on new ideas.
Mandatory Skills: AI (Artificial Intelligence).
Experience: 8-10 Years.
Job Summary
We are looking for a Data Scientist – AI/ML who has hands-on experience in building, training, fine-tuning, and deploying machine learning and deep learning models. The ideal candidate should be comfortable working with real-world datasets, collaborating with cross-functional teams, and communicating insights and solutions to clients.
Experience: Fresher to 5 Years
Location: Ahmedabad
Employment Type: Full-Time
Key Responsibilities
Develop, train, and optimize Machine Learning and Deep Learning models
Perform data cleaning, preprocessing, and feature engineering
Fine-tune ML/DL models to improve accuracy and performance
Deploy models into production using APIs or cloud platforms
Monitor model performance and retrain models as required
Work closely with clients to understand business problems and translate them into AI/ML solutions
Present findings, model outcomes, and recommendations to stakeholders
Collaborate with data engineers, developers, and product teams
Document workflows, models, and deployment processes
Required Skills & Qualifications
Strong understanding of Machine Learning concepts (Supervised, Unsupervised learning)
Hands-on experience with ML algorithms (Linear/Logistic Regression, Decision Trees, Random Forest, XGBoost, etc.)
Experience with Deep Learning frameworks (TensorFlow / PyTorch / Keras)
Proficiency in Python and AI/ML libraries (NumPy, Pandas, Scikit-learn)
Experience in model deployment using Flask/FastAPI, Docker, or cloud platforms (AWS/GCP/Azure)
Understanding of model fine-tuning and performance optimization
Basic knowledge of SQL and data handling
Good client communication and documentation skills
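The deployment requirement above is, at its core, a thin service layer around a trained model: validate the incoming payload, run the model, return a JSON-able response. A framework-agnostic sketch follows; the "model" is a hard-coded linear rule and the field names are invented, purely so the example is self-contained.

```python
# Framework-agnostic request handler of the kind a Flask/FastAPI route would
# wrap. The hypothetical model and input schema are illustrative assumptions.
import json

def predict(features):
    """Stand-in model: a fixed linear score over two made-up features."""
    return 0.3 * features["age"] + 0.7 * features["income"]

def handle_request(body):
    """Parse and validate the JSON body, then return a status + result dict."""
    try:
        payload = json.loads(body)
        features = {"age": float(payload["age"]),
                    "income": float(payload["income"])}
    except (KeyError, TypeError, ValueError):
        return {"status": 400,
                "error": "expected JSON with numeric 'age' and 'income'"}
    return {"status": 200, "prediction": predict(features)}

print(handle_request('{"age": 30, "income": 2.0}'))
print(handle_request("not json"))
```

In a real service, `predict` would load a serialized model and the handler would be registered as a route; keeping validation separate from inference, as here, makes both easy to test.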
Good to Have
Experience with NLP, Computer Vision, or Generative AI
Exposure to MLOps tools (MLflow, Airflow, CI/CD pipelines)
Experience working on live or client-based AI projects
Kaggle, GitHub, or portfolio showcasing AI/ML projects
Education
Bachelor’s / Master’s degree in Computer Science, Data Science, AI/ML, or related field
Relevant certifications or project experience will be an added advantage
What We Offer
Opportunity to work on real-world AI/ML projects
Mentorship from experienced AI/ML professionals
Career growth in Data Science & Artificial Intelligence
Collaborative and learning-driven work culture
About NonStop io Technologies
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We're seeking an AI/ML Engineer to join our team. As AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
● Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI
● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics
● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
Qualifications & Skills
● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
● Proficiency in programming languages commonly used for AI/ML. Preferably Python
● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
● Strong understanding of machine learning algorithms, statistics, and data structures
● Experience with data preprocessing, data wrangling, and feature engineering
● Knowledge of deep learning architectures, neural networks, and transfer learning
● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
Title: Quantitative Developer
Location: Mumbai
Candidates with a Master's degree are preferred.
Who We Are
At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.
We operate a robust, ultra-low latency infrastructure built for market-making and active trading across Equities, Futures, and Options—with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world-class capabilities.
What You Will Do
This role offers a unique opportunity to work across both quantitative development and high frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and directly participate in live trading execution and strategy enhancement.
1. Quantitative Strategy & Trading Execution
- Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
- Trade across options, equities, and futures, using proprietary HFT platforms
- Monitor and manage PnL performance, targeting Sharpe ratios of 6+
- Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
- Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
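For reference, the Sharpe ratio targeted above is conventionally computed by annualizing the mean-over-volatility of period returns; the daily return series below is invented for illustration, and the risk-free rate is taken as zero for simplicity.

```python
# Annualized Sharpe ratio from daily returns (252 trading days assumed,
# risk-free rate of zero; the return series is illustrative only).
from statistics import mean, stdev

def sharpe(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio: mean / stdev of returns, scaled by sqrt(N)."""
    mu = mean(daily_returns)
    sigma = stdev(daily_returns)
    return (mu / sigma) * periods_per_year ** 0.5 if sigma else float("inf")

daily = [0.004, 0.002, -0.001, 0.003, 0.001, 0.002, -0.002, 0.003]
print(round(sharpe(daily), 2))
```

The sqrt(252) factor is what makes a consistently small daily edge translate into a large annualized Sharpe, which is why HFT desks can quote figures like 6+.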
2. Trading Systems Development
- Build and enhance low-latency, high-throughput trading systems
- Develop tools to simulate trading strategies and access historical market data
- Design performance-optimized data structures and algorithms for fast execution
- Implement real-time risk management and performance tracking systems
3. Algorithmic and Quantitative Analysis
- Collaborate with researchers and traders to integrate strategies into live environments
- Use statistical methods and data-driven analysis to validate and refine models
- Work with large-scale HFT tick data using Python / C++
4. AI/ML Integration
- Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
- Analyze large datasets to detect patterns and alpha signals
5. System & Network Optimization
- Optimize distributed and concurrent systems for high-transaction throughput
- Enhance platform performance through network and systems programming
- Utilize deep knowledge of TCP/UDP and network protocols
6. Collaboration & Mentorship
- Collaborate cross-functionally with traders, engineers, and data scientists
- Represent Dolat in campus recruitment and industry events as a technical mentor
What We Are Looking For:
- Strong foundation in data structures, algorithms, and object-oriented programming (C++).
- Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Hands-on experience in systems programming within a Linux environment.
- Proficient, hands-on programming in Python/C++.
- Familiarity with distributed computing and high-concurrency systems.
- Knowledge of network programming, including TCP/UDP protocols.
- Strong analytical and problem-solving skills.
- A passion for technology-driven solutions in the financial markets.
Job Title: Deployment Lead (Python, Linux, AWS)
Location: Coimbatore
Overview
We are seeking an experienced Deployment Lead to oversee the end-to-end deployment lifecycle of our applications and services. The ideal candidate will have deep expertise in Python, strong Linux administration skills, and hands-on experience with AWS cloud infrastructure. You will work closely with engineering, DevOps, QA, and product teams to ensure reliable, repeatable, and scalable deployments across multiple environments.
Key Responsibilities
- Lead and manage deployment activities for all application releases across development, staging, and production environments.
- Develop and maintain deployment automation, scripts, and tools using Python and shell scripting.
- Own and optimize CI/CD pipelines (e.g., GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline).
- Oversee Linux server administration, including configuration, troubleshooting, performance optimization, and security hardening.
- Design, implement, and maintain AWS infrastructure (EC2, S3, Lambda, IAM, RDS, ECS/EKS, CloudFormation/Terraform).
- Ensure robust monitoring, logging, and alerting using tools such as CloudWatch, Grafana, Prometheus, or ELK.
- Collaborate with developers to improve code readiness for deployment and production reliability.
- Manage environment configurations and ensure consistency and version control across environments.
- Lead incident response during production issues; conduct root-cause analysis and implement long-term fixes.
- Establish and enforce best practices for deployment, configuration management, and operational excellence.
Required Skills & Qualifications
- 5+ years of experience in deployment engineering, DevOps, or site reliability engineering roles.
- Strong proficiency in Python for automation and tooling.
- Advanced experience with Linux systems administration (Ubuntu, CentOS, Amazon Linux).
- Hands-on work with AWS cloud services and infrastructure-as-code (CloudFormation or Terraform).
- Experience with containerization technologies such as Docker and orchestration platforms like ECS, EKS, or Kubernetes.
- Strong understanding of CI/CD tools and automated deployment strategies.
- Familiarity with networking concepts: DNS, load balancers, VPCs, firewalls, VPN, and routing.
- Expertise with monitoring, alerting, and logging solutions.
- Strong problem-solving and analytical skills; able to lead troubleshooting efforts.
- Excellent communication and leadership abilities.
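As one concrete instance of the environment-consistency responsibility above, configurations can be diffed programmatically. A standard-library sketch comparing two KEY=VALUE env files; the keys and values are invented for illustration.

```python
def parse_env(text):
    """Parse KEY=VALUE lines, ignoring blanks and # comments."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()
    return cfg

def diff_envs(staging, production):
    """Report keys missing from either environment and keys whose values differ."""
    s, p = parse_env(staging), parse_env(production)
    return {
        "missing_in_production": sorted(s.keys() - p.keys()),
        "missing_in_staging": sorted(p.keys() - s.keys()),
        "value_drift": sorted(k for k in s.keys() & p.keys() if s[k] != p[k]),
    }

staging = "APP_ENV=staging\nDB_POOL=10\nFEATURE_X=on"
production = "APP_ENV=production\nDB_POOL=50"
print(diff_envs(staging, production))
```

A check like this typically runs in CI before promotion, so drift is caught before a release rather than during an incident.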
Job Details
- Job Title: Android Developer
- Industry: IT- Services
- Function - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Delhi
- CTC Range: Best in Industry
Criteria:
· Strong technical background in Android application development and Kotlin
· Looking for candidates with 5+ years of experience.
· Candidates from Delhi NCR only.
· All Academic backgrounds acceptable (except BCA).
· Immediate Joiners Preferred
· Candidate must have some experience working with IoT devices.
· Candidate should have experience working with Camera model X.
· Candidate's Academic scores must be 70% or above.
· Candidates with fluent communication skills will have an added advantage.
Job Description
About the Role:
Senior Android Team Lead will be responsible for testing, QC, debugging support for various Android and Java software/servers for products developed or procured by the company. The role includes debugging integration issues, handling on-field deployment challenges, and suggesting improvements or structured solutions. The candidate will also be responsible for scaling the architecture. You will work closely with other team members including Web Developers, Software Developers, Application Engineers, and Product Managers to test and deploy existing products. You will act as a Team Lead to coordinate and organize team efforts toward successful completion or demo of applications. This includes implementing projects from conception to deployment.
Responsibilities:
● Working with the Android SDK, Java, Kotlin, NDK
● Handling different Android versions and screen sizes
● Applying Android UI design principles, patterns, and best practices
Requirements:
● Strong technical background in Android application development and Kotlin
● Solid programming skills
● Detail-oriented with strong attention to specifics
● Excellent written and verbal communication skills
● Strong analytical and quick problem-solving ability
● Ability to quickly document requirements from open discussions
● Fast typing skills for documentation and communication
● Familiarity with JIRA, EPICs, Excel, Google Sheets, and Agile methodologies
● Team player with leadership qualities
● Decision-making ability and team management skills
● Interest in working in a startup environment with cutting-edge products
● Experience with design and architecture patterns
● Understanding of testing processes, debugging, code versioning, and repositories
● UI/UX experience
● Strong knowledge of Java & Kotlin
● Software development experience with strong coding skills
● Experience building services for data delivery to mobile clients
● Experience with relational and non-relational databases
● Knowledge of REST and JSON data handling
● Experience with libraries like Retrofit, RxJava, Dagger 2, Lottie
● Server integration (REST endpoints)
● Experience with AWS stack and Linux
● Apps shipped and available on Google Play
● Backend API development
● Familiarity with Android Studio, Eclipse IDE
● Good knowledge of mobile hardware, software, and operating systems
● Willingness to work in a fast-paced startup environment
● Strong oral communication and presentation skills
● Team-oriented, with a positive approach to technology and engineering
● Result-oriented with a focus on efficiency and timeliness
● Strong self-awareness and ability to work under deadlines
● Proficiency in Microsoft Project, PowerPoint, Excel, Word
● Willingness to mentor and manage team members
● Willing to travel 5–10% of the time for demos, training, and collaboration
Preferred Background:
● Understanding of Artificial Intelligence and Machine Learning
● B.S. / M.S. in Computer Science, Electrical, or Electronics Engineering
● 5+ years’ experience with Android, Java Server, JSP
● Experience with Virtual Reality and Augmented Reality
● Familiarity with Test-Driven Development
● Background in CS or ECE
● Python experience is a big plus
● iOS development knowledge (not mandatory)
● Strong foundation in data structures and algorithms
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines
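To make the RAG requirement above concrete: at its core, retrieval is nearest-neighbour search over embeddings. A minimal sketch using plain NumPy cosine similarity in place of a real embedding model and vector database (FAISS, Pinecone, ChromaDB); the documents and vectors are synthetic stand-ins.

```python
import numpy as np

# Toy corpus with stand-in embeddings. In a real pipeline these vectors come
# from an embedding model and are stored in a vector database.
docs = [
    "Q3 revenue grew 12% year over year.",
    "The churn model uses gradient boosting.",
    "Support tickets peak on Monday mornings.",
]
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(docs), 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors: dot == cosine

def retrieve(query_vec, k=2):
    """Return the top-k documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = emb @ q
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

# Query with a slightly noised copy of doc 1's embedding: doc 1 should rank first.
hits = retrieve(emb[1] + rng.normal(scale=0.05, size=8))
print(hits[0][0])
```

The retrieved documents are then packed into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.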
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Ability and experience to register and deploy ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills – environment setup and dependency management, coding as per best practices, and knowledge of automatic code review tools like linting and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
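The drift-detection requirement above is commonly implemented with the Population Stability Index over a model's input or score distribution. A self-contained NumPy sketch; the 0.1/0.25 cut-offs are a widely used rule of thumb, not a standard.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 likely drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) when a bin is empty in one of the samples.
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train_scores = rng.normal(0.0, 1.0, 5000)   # scores at training time
live_same = rng.normal(0.0, 1.0, 5000)      # live traffic, same distribution
live_shifted = rng.normal(0.8, 1.0, 5000)   # live traffic after a shift
print(f"no drift: {psi(train_scores, live_same):.3f}")
print(f"shifted:  {psi(train_scores, live_shifted):.3f}")
```

In an Azure ML setup this metric would run on a schedule inside a monitoring pipeline and fire an alert or a retraining job when the threshold is crossed.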

This is for one of our reputed Entertainment organisations.
Key Responsibilities
· Advanced ML & Deep Learning: Design, develop, and deploy end-to-end Machine Learning models for Content Recommendation Engines, Churn Prediction, and Customer Lifetime Value (CLV).
· Generative AI Implementation: Prototype and integrate GenAI solutions (using LLMs like Gemini/GPT) for automated Metadata Tagging, Script Summarization, or AI-driven Chatbots for viewer engagement.
· Develop and maintain high-scale video processing pipelines using Python, OpenCV, and FFmpeg to automate scene detection, ad-break identification, and visual feature extraction for content enrichment
· Cloud Orchestration: Utilize GCP (Vertex AI, BigQuery, Dataflow) to build scalable data pipelines and manage the full ML lifecycle (MLOps).
· Business Intelligence & Storytelling: Create high-impact, automated dashboards to track KPIs for data-driven decision making.
· Cross-functional Collaboration: Work closely with Product, Design, Engineering, Content, and Marketing teams to translate "viewership data" into "strategic growth."
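As an illustration of the scene-detection work described above: hard cuts can be flagged by comparing grayscale histograms of consecutive frames. A toy sketch on synthetic frames; a production pipeline would decode real video with OpenCV/FFmpeg and use more robust features.

```python
import numpy as np

def hist_distance(frame_a, frame_b, bins=32):
    """L1 distance between normalized grayscale histograms of two frames.
    A large distance signals a likely hard cut (candidate ad-break point)."""
    h_a = np.histogram(frame_a, bins=bins, range=(0, 256))[0] / frame_a.size
    h_b = np.histogram(frame_b, bins=bins, range=(0, 256))[0] / frame_b.size
    return float(np.abs(h_a - h_b).sum())

def detect_cuts(frames, threshold=0.5):
    """Indices where consecutive frames differ enough to call a scene change."""
    return [i for i in range(1, len(frames))
            if hist_distance(frames[i - 1], frames[i]) > threshold]

# Synthetic clip: three dark frames, then three bright frames (a hard cut at index 3).
rng = np.random.default_rng(1)
dark = [rng.integers(0, 60, (48, 64)) for _ in range(3)]
bright = [rng.integers(180, 256, (48, 64)) for _ in range(3)]
print(detect_cuts(dark + bright))  # [3]
```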
Preferred Qualifications
· Experience in Media/OTT: Prior experience working with large scale data from broadcast channels, videos, streaming platforms or digital ad-tech.
· Education: Master’s/Bachelor’s degree in a quantitative field (Computer Science, Statistics, Mathematics, or Data Science).
· Product Mindset: Ability to not just build a model, but to understand the business implications of the solution.
· Communication: Exceptional ability to explain "Neural Network outputs" to a "Creative Content Producer" in simple terms.
About MyOperator
MyOperator is a Business AI Operator and category leader that unifies WhatsApp, Calls, and AI-powered chatbots & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement.
Role Summary
We’re hiring a Front Deployed Engineer (FDE)—a customer-facing, field-deployed engineer who owns the end-to-end delivery of AI bots/agents.
This role is “frontline”: you’ll work directly with customers (often onsite), translate business reality into bot workflows, do prompt engineering + knowledge grounding, ship deployments, and iterate until it works reliably in production.
Think: solutions engineer + implementation engineer + prompt engineer, with a strong bias for execution.
Responsibilities
Requirement Discovery & Stakeholder Interaction
- Join customer calls alongside Sales and Revenue teams.
- Ask targeted questions to understand business objectives, user journeys, automation expectations, and edge cases.
- Identify data sources (CRM, APIs, Excel, SharePoint, etc.) required for the solution.
- Act as the AI subject-matter expert during client discussions.
Use Case & Solution Documentation
- Convert discussions into clear, structured use case documents, including:
- Problem statement & goals.
- Current vs. proposed conversational flows.
- Chatbot conversation logic, integrations, and dependencies.
- Assumptions, limitations, and success criteria.
Customer Delivery Ownership
Own deployment of AI bots for customer use-cases (lead qualification, support, booking, etc.). Run workshops to capture processes, FAQs, edge cases, and success metrics. Drive the go-live process: requirements through monitoring and improvement.
Prompt Engineering & Conversation Design
Craft prompts, tool instructions, guardrails, fallbacks, and escalation policies for stable behavior. Build structured conversational flows: intents, entities, routing, handoff, and compliant responses. Create reusable prompt patterns and "prompt packs."
Testing, Debugging & Iteration
Analyze logs to find failure modes (misclassification, hallucination, poor handling). Create test sets ("golden conversations"), run regressions, and measure improvements. Coordinate with Product/Engineering for platform needs.
Integrations & Technical Coordination
Integrate bots with APIs/webhooks (CRM, ticketing, internal tools) to complete workflows. Troubleshoot production issues and coordinate fixes/root-cause analysis.
What Success Looks Like
- Customer bots go live quickly and show high containment + high task completion with low escalation.
- You can diagnose failures from transcripts/logs and fix them with prompt/workflow/knowledge changes.
- Customers trust you as the “AI delivery owner”—clear communication, realistic timelines, crisp execution.
Requirements (Must Have)
- 2–5 years in customer-facing delivery roles: implementation, solutions engineering, customer success engineering, or similar.
- Hands-on comfort with LLMs and prompt engineering (structured outputs, guardrails, tool use, iteration).
- Strong communication: workshops, requirement capture, crisp documentation, stakeholder management.
- Technical fluency: APIs/webhooks concepts, JSON, debugging logs, basic integration troubleshooting.
- Willingness to be front deployed (customer calls/visits as needed).
Good to Have (Nice to Have)
- Experience with chatbots/voicebots, IVR, WhatsApp automation, conversational AI platforms with at least a couple of projects.
- Understanding of metrics like containment, resolution rate, response latency, CSAT drivers.
- Prior SaaS onboarding/delivery experience in mid-market or enterprises.
Working Style & Traits We Value
- High agency: you don’t wait for perfect specs—you create clarity and ship.
- Customer empathy + engineering discipline.
- Strong bias for iteration: deploy → learn → improve.
- Calm under ambiguity (real customer environments are chaotic by default).
About the Role
We're looking for a hands-on ML Engineer who combines strong machine learning fundamentals with the backend infrastructure instincts needed to take models from research into reliable, scalable production systems. This role sits at the intersection of cutting-edge generative video AI and the MLOps discipline required to run it at scale.
You'll work closely with backend, platform, and content teams to deliver high-performance ML components under strict quality, latency, and throughput requirements.
Key Responsibilities
- Train, fine-tune, and evaluate generative video and multimodal models (image-to-video, text-to-video, lip-sync, character consistency)
- Build and manage end-to-end ML pipelines: data ingestion, preprocessing, training, evaluation, and versioning
- Own model deployment and serving infrastructure — containerization, GPU-optimized inference, model registries, and rollout strategies
- Implement MLOps best practices: experiment tracking, model monitoring, drift detection, A/B testing, and observability
- Design and maintain scalable inference systems optimized for low latency, high throughput, and cost-efficient GPU utilization
- Develop caching and batching strategies to meet SLA targets in production video generation workflows
- Collaborate with backend engineering teams on integrating ML services into distributed systems
- Contribute to long-term roadmap: foundational model training strategies, LoRA fine-tuning pipelines, and multi-character generalization
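The batching responsibility above usually means micro-batching: group requests to amortize per-call GPU cost without letting any single request wait past the latency budget. A minimal single-threaded sketch; the queue contents, batch size, and deadline are illustrative.

```python
import time
from queue import Queue, Empty

def micro_batch(queue, max_batch=8, max_wait_ms=5):
    """Collect up to max_batch requests, but never hold the first request
    longer than max_wait_ms: batching amortizes cost, the deadline bounds latency."""
    batch = [queue.get()]                       # block for the first request
    deadline = time.monotonic() + max_wait_ms / 1000
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            batch.append(queue.get(timeout=remaining))
        except Empty:
            break
    return batch

q = Queue()
for i in range(20):
    q.put(f"req-{i}")

batches = []
while not q.empty():
    batches.append(micro_batch(q))
print([len(b) for b in batches])  # [8, 8, 4]
```

In a real serving stack this loop runs in a dedicated thread in front of the GPU worker, and the deadline is tuned against the SLA and observed arrival rate.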
Requirements
Required Qualifications
- 4-10 years of experience in Machine Learning / Applied ML Engineering
- Strong fundamentals in deep learning, Transformers, and generative model architectures
- Hands-on experience with model training at scale — including fine-tuning large models (LoRA, full fine-tune) on custom datasets
- Solid MLOps experience: experiment tracking (MLflow, W&B), CI/CD for ML, model versioning, and serving frameworks (Triton, TorchServe, vLLM, or equivalent)
- Strong Python skills and fluency with PyTorch and the modern ML stack
- Experience deploying and operating ML systems in distributed cloud environments (GCP, AWS, or Azure) — GPU provisioning, autoscaling, and cost management
- Comfort working on ambiguous, high-impact problems with cross-functional teams
Preferred Qualifications
- Experience with video generation, diffusion models, or multimodal architectures (DiT, U-Net, audio-video joint models)
- Familiarity with LoRA/IC-LoRA fine-tuning workflows for character or identity consistency
- Experience in media, OTT, sports, or large-scale content platforms
- Knowledge of inference optimization techniques: quantization (FP8/INT8), batching, async orchestration, and GPU memory management
- Exposure to TTS/voice cloning systems or audio-video synchronization pipelines
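For reference on the quantization point above: INT8 quantization reduces to mapping weights onto an 8-bit grid with a scale factor, trading a small rounding error for 4x less memory and faster integer kernels. A symmetric per-tensor sketch in NumPy; the tensor shape is arbitrary.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: map [-max|w|, max|w|] onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.2, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"memory: {w.nbytes} -> {q.nbytes} bytes, max abs error {err:.5f}")
```

Production schemes refine this with per-channel scales, calibration data, and FP8 formats, but the scale-round-clip structure is the same.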
Benefits
What you get
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
We are looking for a passionate AI/ML Intern with hands-on exposure to Large Language Models (LLMs), fine-tuning techniques like LoRA, and strong fundamentals in Data Structures & Algorithms (DSA). This role is ideal for someone eager to work on real-world AI applications, experiment with open-source models, and contribute to production-ready AI systems.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completion of the internship period, there is a chance of a full-time opportunity as an AI/ML Engineer (6-8 LPA).
Compensation:
- Stipend: Base is INR 8,000/- and can increase up to INR 20,000/- depending on performance.
Key Responsibilities
- Work on Large Language Models (LLMs) for real-world AI applications.
- Implement and experiment with LoRA (Low-Rank Adaptation) and other parameter-efficient fine-tuning techniques.
- Perform model fine-tuning, evaluation, and optimization.
- Engage in prompt engineering to improve model outputs and performance.
- Develop backend services using Python for AI-powered applications.
- Utilize GitHub for version control, including managing branches, pull requests, and code reviews.
- Work with AI platforms such as Hugging Face and OpenAI to deploy and test models.
- Collaborate with the team to build scalable and efficient AI solutions.
Must-Have Skills
- Strong proficiency in Python.
- Hands-on experience with LLMs (open-source or API-based).
- Practical knowledge of LoRA or other parameter-efficient fine-tuning techniques.
- Solid understanding of Data Structures & Algorithms (DSA).
- Experience with GitHub and version control workflows.
- Familiarity with Hugging Face Transformers and/or OpenAI APIs.
- Basic understanding of Deep Learning and NLP concepts.
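For applicants unsure what the LoRA requirement involves: instead of updating a frozen weight matrix W, LoRA trains a low-rank delta B @ A and applies W + (alpha/r) * B @ A at inference. A NumPy sketch of just the parameter arithmetic; the dimensions are illustrative and real fine-tuning would use a framework such as Hugging Face PEFT.

```python
import numpy as np

d_out, d_in, r, alpha = 512, 512, 8, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable, rank r
B = np.zeros((d_out, r))                     # trainable, zero-init so the delta starts at 0

# Effective weight used at inference: the low-rank update scaled by alpha/r.
W_eff = W + (alpha / r) * B @ A

full = W.size            # parameters a full fine-tune would touch
lora = A.size + B.size   # parameters LoRA actually trains
print(f"trainable params: {lora} vs {full} ({100 * lora / full:.1f}%)")
```

Zero-initializing B means training starts from the exact pretrained behavior, which is why LoRA adapters are stable to train and cheap to swap.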
🤖 Robotics Engineer
Company: Pentabay Softwares
Location: Anna Salai (Mount Road), Chennai
Employment Type: Full-Time
🔹 Job Summary
Pentabay Softwares is seeking a highly skilled and innovative Robotics Engineer to design, develop, test, and implement robotic systems and automation solutions. The ideal candidate will have strong technical expertise in robotics programming, control systems, and hardware integration, along with a passion for building intelligent and efficient systems.
🔹 Key Responsibilities
Design, develop, and test robotic systems and automation solutions
Develop and implement control algorithms and motion planning systems
Integrate sensors, actuators, and embedded systems
Program robots using languages such as Python, C++, or ROS
Troubleshoot, debug, and optimize robotic applications
Collaborate with cross-functional teams including software, hardware, and AI engineers
Ensure compliance with safety and quality standards
Document system architecture, processes, and technical specifications
🔹 Required Qualifications
Bachelor’s or Master’s degree in Robotics, Mechatronics, Mechanical, Electronics, or related field
2+ years of experience in robotics development (preferred)
Strong knowledge of robotics frameworks (e.g., ROS)
Experience with microcontrollers, embedded systems, and sensor integration
Familiarity with AI/ML concepts is a plus
Strong analytical and problem-solving skills
🔹 Preferred Skills
Experience with computer vision systems
Knowledge of SLAM, kinematics, and motion planning
Experience with industrial automation or autonomous systems
Strong teamwork and communication skills
🌟 Why Join Pentabay Softwares?
Work on innovative and future-focused technologies
Collaborative and growth-oriented work culture
Opportunities for skill development and career advancement
Exposure to cutting-edge automation and AI-driven projects
In this role, you'll be responsible for building machine learning based systems and conducting data analysis that improves the quality of our large geospatial data. You’ll be developing NLP models to extract information, using outlier detection to identify anomalies, and applying data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development.
Responsibilities
- Development of machine learning models
- Building and maintaining software development solutions
- Provide insights by applying data science methods
- Take ownership of delivering features and improvements on time
Must-have Qualifications
- 4 years' experience
- Senior data scientist, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers and open-source models from HuggingFace
- Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
- Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills
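As a refresher on the classic algorithms listed above, KNN classification fits in a few lines. A NumPy sketch on a toy two-cluster dataset; real work would use the scikit-learn implementation.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_query, k=3):
    """Classic KNN: majority vote among the k nearest training points (Euclidean)."""
    preds = []
    for q in X_query:
        dists = np.linalg.norm(X_train - q, axis=1)   # distance to every training point
        nearest = np.argsort(dists)[:k]               # indices of the k closest
        preds.append(Counter(y_train[nearest]).most_common(1)[0][0])
    return np.array(preds)

# Two well-separated clusters as a toy dataset.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([[0.05, 0.1], [3.0, 3.1]])))  # [0 1]
```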
Extra Merit Qualifications
- Knowledge in at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and building predictive models
- Programming skills and experience with Scala and/or Java
Job Description -
Profile: AI/ML
Experience: 4-8 Years
Mode: Remote
Mandatory Skills - AI/ML, LLM, RAG, Agentic AI, Traditional ML, GCP
Must-Have:
● Proven experience as an AI/ML engineer, specifically with a focus on Generative AI and Large Language Models (LLMs) in production.
● Deep expertise in building Agentic Workflows using frameworks like LangChain, LangGraph, or AutoGen.
● Strong proficiency in designing RAG (Retrieval-Augmented Generation) pipelines
● Experience with Function Calling/Tool Use in LLMs to connect AI models with external APIs (REST/gRPC) for transactional tasks
● Hands-on experience with Google Cloud Platform (GCP), specifically Vertex AI, Model Garden, and deploying models on GPUs
● Proficiency in Python and deep learning frameworks (PyTorch or TensorFlow).
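The function-calling requirement above comes down to a contract: the model emits a structured tool call, and the application validates it against a registry before executing the real function. A minimal dispatch sketch; the tool names, arguments, and return values are all invented for illustration.

```python
import json

# Registry mapping tool names to callables. In production each entry would
# carry a JSON schema that is also advertised to the model.
TOOLS = {
    "get_balance": lambda account_id: {"account_id": account_id, "balance": 1520.75},
    "transfer": lambda src, dst, amount: {"status": "queued", "amount": amount},
}

def dispatch(tool_call_json):
    """Validate a model-emitted tool call and execute the matching function."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return TOOLS[name](**args)

# Pretend the LLM produced this structured output.
model_output = '{"name": "get_balance", "arguments": {"account_id": "AC-104"}}'
print(dispatch(model_output))
```

The returned dict is then fed back to the model as the tool result, closing the loop that lets an agent perform transactional tasks against REST/gRPC APIs.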
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
Position: NDT Applications Engineer – PAUT (Corrosion Mapping / Weld Inspection / PWI / Advanced FMC-TFM)
Location: Noida
Job Type: Full-time
Experience Level: Mid-Level / Senior-Level
Industry: Non-Destructive Testing (NDT), PAUT.
We are specifically looking for candidates with hands-on experience as an NDT Senior Engineer; only those with relevant NDT expertise should apply.
Job Summary
We are seeking a highly skilled NDT engineer with expertise in Ultrasonic Testing (UT) and Phased Array Ultrasonic Testing (PAUT) for robotic integration. This role involves coordinating with software and control system teams to integrate UT & PAUT (Corrosion Mapping / Weld Inspection) into robotic NDT systems, ensuring optimal inspection performance.
The engineer will focus on sensor selection, ultrasonic parameter optimization, calibration, and data interpretation, while the software team handles control algorithms and motion planning. The ideal candidate should have strong experience in NDT automation, probe and frequency selection, phased array data acquisition, and defect characterization.
Key Responsibilities
1. NDT Inspection & Signal Optimization
• Optimize probe selection, wedge design, and beam focusing to achieve high-resolution imaging.
• Define scanning techniques (sectorial, linear, and compound scans) to detect various defect types.
• Analyse UT & PAUT signals, ensuring accurate defect detection, sizing, and characterization.
• Implement Time-of-Flight Diffraction (TOFD) and Full Matrix Capture (FMC) techniques to enhance detection capabilities.
• Address electromagnetic interference (EMI) and signal noise issues affecting robotic UT/PAUT.
• Develop procedures for coupling enhancement, including the use of water column, dry coupling, and adaptive surface-following mechanisms for robotic probes.
• Evaluate attenuation, beam divergence, and wave mode conversion for different material types.
• Work with AI-based defect recognition systems to automate data processing and anomaly detection.
• Test different scanning configurations for challenging surfaces, curved geometries, and weld seams.
• Optimize gain, pulse repetition frequency (PRF), and filtering settings to ensure the highest signal clarity.
• Implement phased array data interpretation techniques to differentiate between false indications and real defects.
• Develop and refine automated thickness gauging algorithms for robotic NDT systems.
• Ensure the compatibility of PAUT imaging with robotic motion constraints to avoid signal distortion.
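As an illustration of the thickness-gauging point above, the standard pulse-echo relation (thickness equals velocity times round-trip time of flight, divided by two) can be sketched as follows. The velocity constant is a typical longitudinal-wave value for mild steel, and the function name is hypothetical.

```python
# Pulse-echo thickness gauging sketch: the ultrasonic pulse travels to the
# back wall and returns, so the wall thickness is half the round-trip distance.
VELOCITY_STEEL_M_S = 5920.0  # typical longitudinal wave speed in mild steel, m/s

def thickness_mm(time_of_flight_us: float) -> float:
    """Wall thickness in mm from the round-trip time of flight in microseconds."""
    round_trip_m = VELOCITY_STEEL_M_S * (time_of_flight_us * 1e-6)
    return round_trip_m / 2.0 * 1000.0

print(round(thickness_mm(3.38), 2))  # a ~3.38 µs back-wall echo in steel is ~10 mm
```

A robotic gauging algorithm adds echo detection and gating on top of this relation, but the conversion itself is this one line.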
2. NDT-Integration for Robotics (UT & PAUT)
• Select, integrate, and optimize ultrasonic transducers and phased array probes for robotic inspection systems.
• Define NDT scanning parameters (frequency, angle, probe type, and scanning speed) for robotic UT/PAUT applications.
• Ensure seamless coordination with control system and software teams for planning and automation.
• Work with robotic hardware teams to mount, position, and align UT/PAUT probes accurately.
• Conduct system calibration and validate UT/PAUT performance on robotic platforms.
3. Data Analysis & Reporting
• Interpret PAUT sectorial scans, full matrix capture (FMC), and total focusing method (TFM) data.
• Assist the software team in processing PAUT data for defect characterization and AI-based analysis.
• Validate robotic UT/PAUT inspection results and generate detailed technical reports.
• Ensure compliance with NDT standards (ASME, ISO 9712, ASTM, API 510/570) for ultrasonic inspections.
4. Coordination with Software & Control System Teams
• Work closely with the software team to define scan path strategies and automation logic.
• Collaborate with control engineers to ensure precise probe movement and stability.
• Provide technical input on robotic payload capacity, motion constraints, and scanning efficiency.
• Assist in the integration of AI-driven defect recognition for automated data interpretation.
5. Field Deployment & Validation
• Supervise robotic UT/PAUT system trials in real-world inspection environments.
• Ensure compliance with safety regulations and industry best practices.
• Support on-site troubleshooting and optimization of robotic NDT performance.
• Train operators on robot-assisted ultrasonic testing procedures.
Required Qualifications & Skills
1. Educational Background
• Master’s Degree in Metallurgy, NDT, or Mechanical Engineering.
• ASNT Level II/III, ISO 9712, PCN, AWS CWI, or API 510/570 certifications in UT & PAUT preferred.
2. Technical Skills & Experience
• 3–10 years of experience in Ultrasonic Testing (UT) and Phased Array Ultrasonic Testing (PAUT).
• Strong understanding of probe selection, frequency tuning, and phased array beamforming.
• Experience with NDT software.
• Knowledge of electromagnetic shielding, signal integrity, and noise reduction techniques in ultrasonic systems.
• Ability to collaborate with software and control teams for robotic NDT development.
3. Soft Skills
• Strong problem-solving and analytical abilities.
• Excellent technical communication and coordination skills.
• Ability to work in cross-functional teams with robotics, software, and NDT specialists.
• Willingness to travel for on-site robotic NDT deployments.
Work Conditions
• Lab – Hands-on testing and robotic system deployment.
• Flexible Work Hours – Based on project needs.
Benefits & Perks
• Competitive salary & performance incentives.
• Exposure to cutting-edge robotic and AI-driven NDT innovations.
• Training & certification support for career growth.
• Opportunities to work on pioneering robotic NDT projects.
Hi,
Greetings from Ampera!
We are looking for a Data Scientist with strong Python and forecasting experience.
Title : Data Scientist – Python & Forecasting
Experience : 4 to 7 Yrs
Location : Chennai/Bengaluru
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Working hours : 09:00 a.m. to 06:00 p.m.
Workdays : Mon - Fri
Job Description:
We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.
Key Responsibilities
- Develop and implement forecasting models (time-series and machine learning based).
- Perform exploratory data analysis (EDA), feature engineering, and model validation.
- Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
- Design, train, validate, and optimize machine learning models for real-world business use cases.
- Apply appropriate ML algorithms based on business problems and data characteristics
- Write clean, modular, and production-ready Python code.
- Work extensively with Python Packages & libraries for data processing and modelling.
- Collaborate with Data Engineers and stakeholders to deploy models into production.
- Monitor model performance and improve accuracy through continuous tuning.
- Document methodologies, assumptions, and results clearly for business teams.
Technical Skills Required:
Programming
- Strong proficiency in Python
- Experience with Pandas, NumPy, Scikit-learn
Forecasting & Modelling
- Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
- Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
- Understanding of seasonality, trend decomposition, and statistical modeling
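The ML-based forecasting listed above usually means reframing the time series as supervised learning on lagged values. A minimal sketch with scikit-learn on synthetic data with weekly seasonality (all numbers and names are illustrative, not a prescribed pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Synthetic daily demand: level + weekly seasonality + noise (illustrative).
t = np.arange(400)
y = 50 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)

# Supervised framing: predict y[t] from the previous 14 observations.
LAGS = 14
X = np.array([y[i - LAGS:i] for i in range(LAGS, len(y))])
target = y[LAGS:]

split = len(X) - 28  # hold out the last 28 days for validation
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], target[:split])

pred = model.predict(X[split:])
mae = float(np.mean(np.abs(pred - target[split:])))
print(round(mae, 2))
```

Classical models (ARIMA/SARIMA/Prophet) model trend and seasonality explicitly instead; the lag-feature framing above is the usual bridge to XGBoost/LightGBM-style forecasters.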
Data & Deployment
- Experience handling structured and large datasets
- SQL proficiency
- Exposure to model deployment (API-based deployment preferred)
- Knowledge of MLOps concepts is an added advantage
Tools (Preferred)
- TensorFlow / PyTorch (optional)
- Airflow / MLflow
- Cloud platforms (AWS / Azure / GCP)
Educational Qualification
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.
Key Competencies
- Strong analytical and problem-solving skills
- Ability to communicate insights to technical and non-technical stakeholders
- Experience working in agile or fast-paced environments
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Position: Assistant Professor
Department: CSE / IT
Experience: 0 – 15 Years
Joining: Immediate / Within 1 Month
Salary: As per norms and experience
🎓 Qualification:
ME / M.Tech in Computer Science Engineering / Information Technology
Ph.D. (Preferred but not mandatory)
First Class in UG & PG as per AICTE norms
🔍 Roles & Responsibilities:
Deliver high-quality lectures for UG / PG programs
Prepare lesson plans, course materials, and academic content
Guide student projects and internships
Participate in curriculum development and academic planning
Conduct internal assessments, evaluations, and result analysis
Mentor students for academic and career growth
Participate in departmental research activities
Publish research papers in reputed journals (Scopus/SCI preferred)
Attend Faculty Development Programs (FDPs), workshops, and conferences
Contribute to NAAC / NBA accreditation processes
Support institutional administrative responsibilities
💡 Required Skills:
Strong subject knowledge in CSE / IT domains
Programming proficiency (Python, Java, C++, Data Structures, AI/ML, Cloud, etc.)
Excellent communication and presentation skills
Research orientation and academic enthusiasm
Team collaboration and mentoring ability
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience with AWS cloud, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable)
- Are you open to a face-to-face (F2F) interview round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
ROLE - TECH LEAD/ARCHITECT with AI Expertise
Experience: 10–15 Years
Location: Bangalore (Onsite)
Company Type: Product-based | AI B2B SaaS
About ProductNova
ProductNova is a fast-growing product development organization that partners with
ambitious companies to build, modernize, and scale high-impact digital products. Our teams
of product leaders, engineers, AI specialists, and growth experts work at the intersection of
strategy, technology, and execution to help organizations create differentiated product
portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+
large-scale, AI-powered products and platforms across industries. We specialize in solving
complex business problems through thoughtful product design, robust engineering, and
responsible use of AI.
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply
aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are
resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with
customers to iterate based on user feedback and expand products across new use cases,
customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into
viable products, identifying target customers, achieving product-market fit, and supporting
go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying
opportunities to modernize and scale existing products, enter new geographies, and build
entirely new product lines. Our teams enable innovation through AI, platform re-
architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Tech Lead / Architect to drive the end-to-end technical design and
development of AI-powered B2B SaaS products. This role requires a strong hands-on
technologist who can work closely with ML Engineers and Full Stack Development teams,
own the product architecture, and ensure scalability, security, and compliance across the
platform.
Key Responsibilities
• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to
integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend,
data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS
platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC2, GDPR, ISO, etc.)
across the product
• Take ownership of application security, access controls, and compliance
requirements
• Actively contribute hands-on through coding, code reviews, complex feature development and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and
system design
• Work closely with Product Management and Leadership to translate business
requirements into technical solutions
Qualifications:
• 10–15 years of overall experience in software engineering and product
development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and
modern frontend frameworks (React, etc.)
• Experience working with AI/ML teams in deploying and tuning ML models into production
environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native
architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-
functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
Job Title : QA Lead (AI/ML Products)
Employment Type : Full Time
Experience : 4 to 8 Years
Location : On-site
Mandatory Skills : Strong hands-on experience in testing AI/ML (LLM, RAG) applications with deep expertise in API testing, SQL/NoSQL database validation, and advanced backend functional testing.
Role Overview :
We are looking for an experienced QA Lead who can own end-to-end quality for AI-influenced products and backend-heavy systems. This role requires strong expertise in advanced functional testing, API validation, database verification, and AI model behavior testing in non-deterministic environments.
Key Responsibilities :
- Define and implement comprehensive test strategies aligned with business and regulatory goals.
- Validate AI/ML and LLM-driven applications, including RAG pipelines, hallucination checks, prompt injection scenarios, and model response validation.
- Perform deep API testing using Postman/cURL and validate JSON/XML payloads.
- Execute complex SQL queries (MySQL/PostgreSQL) and work with MongoDB for backend and data integrity validation.
- Analyze server logs and transactional flows to debug issues and ensure system reliability.
- Conduct risk analysis and report key QA metrics such as defect leakage and release readiness.
- Establish and refine QA processes, templates, standards, and agile testing practices.
- Identify performance bottlenecks and basic security vulnerabilities (e.g., IDOR, data exposure).
- Collaborate closely with developers, product managers, and domain experts to translate business requirements into testable scenarios.
- Own feature quality independently from conception to release.
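One simple way to make the hallucination checks above concrete is to score how much of a model answer is lexically grounded in the retrieved context. This is a crude sketch for illustration only (the strings and function name are made up; real pipelines use embedding- or NLI-based grounding checks):

```python
def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.
    A crude lexical proxy for groundedness; low scores flag likely hallucination."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

context = "the invoice was paid on 2024-03-01 by wire transfer"
grounded = "the invoice was paid by wire transfer"
hallucinated = "the invoice was cancelled last week"

# A grounded answer reuses context vocabulary; a fabricated one does not.
print(round(grounding_score(grounded, context), 2))
print(round(grounding_score(hallucinated, context), 2))
```

In a QA suite, a threshold on this score becomes one assertion among many (alongside prompt-injection probes and schema checks) for non-deterministic LLM outputs.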
Required Skills & Experience :
- 4+ years of hands-on experience in software testing and QA.
- Strong understanding of testing AI/ML products, LLM validation, and non-deterministic behavior testing.
- Expertise in API Testing, server log analysis, and backend validation.
- Proficiency in SQL (MySQL/PostgreSQL) and MongoDB.
- Deep knowledge of SDLC and Bug Life Cycle.
- Strong problem-solving ability and structured approach to ambiguous scenarios.
- Awareness of performance testing and basic security testing practices.
- Excellent communication skills to articulate defects and QA strategies.
What We’re Looking For :
A proactive QA professional who can go beyond UI testing, understands backend systems deeply, and can confidently test modern AI-driven applications while driving quality standards across the team.

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
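A minimal sketch of the anomaly-detection responsibility above, using scikit-learn's IsolationForest on synthetic transaction features. The feature choices and numbers are illustrative; on AWS, a model like this would typically be trained and served via SageMaker behind an API.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic transaction features: [amount, hour-of-day] (illustrative only).
normal = np.column_stack([rng.normal(100, 20, 500), rng.normal(14, 3, 500)])
fraud = np.array([[5000.0, 3.0], [7500.0, 2.0]])  # large amounts at odd hours
X = np.vstack([normal, fraud])

# IsolationForest scores points by how easily they are isolated;
# contamination sets the expected anomaly fraction.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.predict(X)  # -1 = anomaly, 1 = normal
print(int((scores[-2:] == -1).sum()))
```

Real fraud scoring adds behavioral aggregates (velocity, device, merchant history) and threshold tuning against labeled chargebacks, but the fit/predict shape is the same.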
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
Hi,
PFB the Job Description for Data Science with ML
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Work Days : Mon - Fri
About Ampera:
Ampera Technologies is a purpose-driven digital IT services company focused on supporting our clients with their Data, AI/ML, Accessibility, and other digital IT needs. We also ensure that equal opportunities are provided to talent with disabilities. Ampera Technologies has its global headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our tech delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) hybrid and remote work options, 2) the opportunity to work directly with our global enterprise clients, 3) the opportunity to learn and implement evolving technologies, 4) comprehensive healthcare, and 5) an environment for persons with disabilities that meets physical and digital accessibility standards.
About the Role
We are looking for a skilled Data Scientist with strong Machine Learning experience to design, develop, and deploy data-driven solutions. The role involves working with large datasets, building predictive and ML models, and collaborating with cross-functional teams to translate business problems into analytical solutions.
Key Responsibilities
- Analyze large, structured and unstructured datasets to derive actionable insights.
- Design, build, validate, and deploy Machine Learning models for prediction, classification, recommendation, and optimization.
- Apply statistical analysis, feature engineering, and model evaluation techniques.
- Work closely with business stakeholders to understand requirements and convert them into data science solutions.
- Develop end-to-end ML pipelines including data preprocessing, model training, testing, and deployment.
- Monitor model performance and retrain models as required.
- Document assumptions, methodologies, and results clearly.
- Collaborate with data engineers and software teams to integrate models into production systems.
- Stay updated with the latest advancements in data science and machine learning.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
- 5+ years of hands-on experience in Data Science and Machine Learning.
- Strong proficiency in Python (NumPy, Pandas, Scikit-learn).
- Experience with ML algorithms:
- Regression, Classification, Clustering
- Decision Trees, Random Forest, Gradient Boosting
- SVM, KNN, Naïve Bayes
- Solid understanding of statistics, probability, and linear algebra.
- Experience with data visualization tools (Matplotlib, Seaborn, Power BI, Tableau – preferred).
- Experience working with SQL and relational databases.
- Knowledge of model evaluation metrics and optimization techniques.
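A compact sketch of the build/validate/evaluate loop this posting describes, using one of the listed algorithms (Gradient Boosting) on synthetic scikit-learn data. The dataset and hyperparameters are illustrative stand-ins for real business data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem (stand-in for real data).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Train on the 80% split, evaluate on the held-out 20%.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(round(acc, 2))
```

The same skeleton extends naturally to cross-validation, precision/recall for imbalanced classes, and hyperparameter search, which is where most of the day-to-day model-evaluation work happens.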
Preferred / Good to Have
- Experience with Deep Learning frameworks (TensorFlow, PyTorch, Keras).
- Exposure to NLP, Computer Vision, or Time Series forecasting.
- Experience with big data technologies (Spark, Hadoop).
- Familiarity with cloud platforms (AWS, Azure, GCP).
- Experience with MLOps, CI/CD pipelines, and model deployment.
Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder interaction skills.
- Ability to work independently and in cross-functional teams.
- Curiosity and willingness to learn new tools and techniques.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
We are looking for an AI/ML Engineer to build AI-powered applications for prediction, analytics, and intelligent reporting.
You will work with structured databases and unstructured data (PDFs, documents, logs).
- Design and implement data ingestion, preprocessing, and feature pipelines.
- Build ML models for prediction, trend analysis, and pattern detection.
- Enable chat-based insights using LLMs for querying data and generating reports.
- Implement role-based access control (RBAC) and secure AI workflows.
- Integrate AI models into web/mobile applications via APIs.
- Optimize model performance, accuracy, and scalability.
- Work with vector databases, embeddings, and semantic search.
- Collaborate with product and engineering teams on AI architecture.
- Ensure data security, privacy, and compliance best practices.
- Stay updated with the latest AI/ML tools and frameworks.
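The vector-database and semantic-search item above boils down to nearest-neighbor lookup under cosine similarity. A toy sketch with hand-made 3-dimensional "embeddings" (in practice the vectors come from an embedding model, and the store would be FAISS/Pinecone-style rather than a dict):

```python
import numpy as np

# Toy document "embeddings" (hypothetical vectors; real ones are high-dimensional
# outputs of an embedding model).
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.1]),
    "account security": np.array([0.0, 0.1, 0.9]),
}
# Hypothetical embedding of the query "how do I get my money back".
query = np.array([0.8, 0.2, 0.1])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantic search = return the document whose vector is closest to the query.
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)
```

Vector databases add indexing (HNSW, IVF) so this argmax stays fast over millions of documents; the similarity computation itself is unchanged.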
JOB DETAILS:
- Job Title: Lead II – Software Engineering – React Native (Mobile App Architecture, Performance Optimization & Scalability)
- Industry: Global digital transformation solutions provider
- Experience: 7-9 years
- Working Days: 5 days/week
- Job Location: Mumbai
- CTC Range: Best in Industry
Job Description
Job Title
Lead React Native Developer (6–8 Years Experience)
Position Overview
We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.
Must-Have Skills
- 6–8 years of experience in mobile application development
- Extensive hands-on experience leading React Native projects
- Expert-level understanding of React Native architecture and internals
- Strong knowledge of mobile app architecture patterns
- Proven experience with performance optimization and scalability
- Experience in technical leadership, team management, and mentorship
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Proficiency in modern React Native development practices
- Experience with Expo toolkit and libraries
- Strong understanding of custom hooks development
- Focus on writing clean, maintainable, and scalable code
- Understanding of mobile app lifecycle
- Knowledge of cross-platform design consistency
Good-to-Have Skills
- Experience with microservices architecture
- Knowledge of cloud platforms such as AWS, Firebase, etc.
- Understanding of DevOps practices and CI/CD pipelines
- Experience with A/B testing and feature flag implementation
- Familiarity with machine learning integration in mobile applications
- Exposure to innovation-driven technical decision-making
Skills: React native, mobile app development, devops, machine learning
******
Notice period - 0 to 15 days only (Need Feb Joiners)
Location: Navi Mumbai, Belapur
About PGAGI
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
We are excited to announce openings for AI/ML Interns who are enthusiastic about artificial intelligence, machine learning, and data-driven technologies. This internship is designed for individuals who want to apply their knowledge of algorithms, data analysis, and model development to solve real-world problems. Interns will work closely with our AI engineering teams to develop, train, and deploy machine learning models, contributing to innovative solutions across various domains. This is an excellent opportunity to gain hands-on experience with cutting-edge tools and frameworks in a collaborative, research-oriented environment.
Key Responsibilities:
- Work hands-on with Python, LLMs, Deep Learning, NLP, and related technologies.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Compensation
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Requirements:
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
Only students currently in their final year of a Bachelor's degree in Computer Science, Engineering, or related fields, or recent graduates, are eligible to apply.
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
How to Apply
Interested candidates are invited to submit their resume and complete the assignment using
Shortlisted candidates will be contacted for an interview.
Selection Process
- Initial Screening: We'll review your application for evidence of your skills, experience, and a strong foundation in AI.
- Task Assignment: Candidates must submit the assignment attached on the careers page, designed to assess your practical skills.
- Performance Review: Our experts will evaluate your task submission, with excellence in this stage being crucial for further consideration.
- Interview: Impressive task performers will be invited for an interview to discuss their potential contribution to our team.
- Onboarding: Successful candidates will join our team, with exciting projects ahead
Apply now to embark on a transformative career journey with PGAGI, where innovation and talent converge!
#artificialintelligence #Machinelearning #AI #AIML #LLM #FastAPI #NLP #openAI #AImodels #AIMLInternship #AIintern #Internship #aimlgraduate #TTS #Voice #Speech
About E2M:
E2M Solutions works as a trusted white-label partner for digital agencies. We support agencies with consistent and reliable delivery through services such as website design, web development, eCommerce, SEO, AI SEO, PPC, AI automation, and content writing. Founded on strong business ethics, we are an equal opportunity organization powered by 300+ experienced professionals, partnering with 400+ digital agencies across the US, UK, Canada, Europe, and Australia. At E2M, we value ownership, consistency, and people who are committed to doing meaningful work and growing together. If you're someone who dreams big and has the gumption to make those dreams come true, E2M has a place for you.
Role Overview:
We are seeking a highly skilled and client-centric AI Consultant/AI Adoption Specialist to join our growing team. In this pivotal role, you'll serve as a vital link between our clients' strategic objectives and the transformative power of AI. You'll primarily focus on understanding their needs, scoping opportunities, and architecting actionable AI roadmaps.
Key Responsibilities:
- Collaborate closely with clients to understand their challenges and identify opportunities to apply AI.
- Assess client requirements and prepare solution strategies using AI tools and methodologies.
- Work with internal teams to design, propose, and help execute AI-powered solutions.
- Provide AI-based recommendations that align with the client’s business objectives.
- Communicate technical possibilities in a business-friendly manner to decision-makers.
- Take ownership of the client journey from discovery to implementation and support.
- Stay updated with AI trends, tools, and real-world use cases that can benefit clients.
Required Skills & Qualifications:
- Minimum 2 years of hands-on experience in custom AI development.
- Minimum 3 years of experience in roles such as Project Manager, Customer Success Manager, or Account Manager, preferably at a service-based company or digital agency.
- Strong understanding of AI concepts, trends, and tools (e.g., NLP, ML, chatbots, automation, cloud-native technologies).
- Some hands-on experience with AI projects, whether through execution, coordination, or implementation.
- Ability to manage multiple client engagements and communicate effectively with both technical and non-technical stakeholders.
- Strong problem-solving mindset with the ability to translate business needs into AI opportunities.
- Flexibility to work with international clients, especially in the US time zone, as needed.
Required Skills & Qualifications
● Strong hands-on experience with LLM frameworks and models, including LangChain, OpenAI (GPT-4), and LLaMA
● Proven experience in LLM orchestration, workflow management, and multi-agent system design using frameworks such as LangGraph
● Strong problem-solving skills with the ability to propose end-to-end solutions and contribute at an architectural/system-design level
● Experience building scalable AI-backed backend services using FastAPI and asynchronous programming patterns
● Solid experience with cloud infrastructure on AWS, including EC2, S3, and load balancers
● Hands-on experience with Docker and containerization for deploying and managing AI/ML applications
● Good understanding of Transformer-based architectures and how modern LLMs work internally
● Strong skills in data processing and analysis using NumPy and Pandas
● Experience with data visualization tools such as Matplotlib and Seaborn for analysis and insights
● Hands-on experience with Retrieval-Augmented Generation (RAG), including document ingestion, embeddings, and vector search pipelines
● Experience in model optimization and training techniques, including fine-tuning, LoRA, and QLoRA
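To give a flavor of the RAG work described above, here is a minimal, hypothetical sketch of embedding-based retrieval using only NumPy. The toy random vectors stand in for a real embedding model, and the in-memory array stands in for a vector database; neither reflects any specific stack used in this role.

```python
import numpy as np

# Toy "document embeddings": in a real pipeline these come from an
# embedding model and are stored in a vector database, not an array.
docs = [
    "FastAPI services expose async endpoints.",
    "LoRA fine-tunes large models with low-rank adapters.",
    "Docker packages applications into containers.",
]
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(len(docs), 8))

def cosine_top_k(query_vec, matrix, k=1):
    """Return indices of the k rows of `matrix` most similar to `query_vec`."""
    sims = matrix @ query_vec / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec)
    )
    return np.argsort(sims)[::-1][:k]

# Toy query embedding placed near document 1; the retrieved text would
# then be injected into the LLM prompt as grounding context.
query_vec = doc_vecs[1] + rng.normal(scale=0.01, size=8)
idx = cosine_top_k(query_vec, doc_vecs, k=1)[0]
print(docs[idx])
```

In production, the same shape holds: embed the query, run a nearest-neighbor search over the document index, and prepend the retrieved passages to the prompt.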
Nice to Have / Preferred
● Experience designing and operating production-grade AI systems
● Familiarity with cost optimization, observability, and performance tuning for LLM-based applications
● Exposure to multi-cloud or large-scale AI platforms
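The LoRA technique named under model optimization can likewise be sketched in a few lines: rather than updating a full weight matrix W, LoRA trains a low-rank update BA (rank r much smaller than the matrix dimensions), shrinking the trainable parameter count dramatically. A toy NumPy illustration with hypothetical shapes:

```python
import numpy as np

d, k, r = 512, 512, 8               # layer dims and LoRA rank (toy values)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # zero-initialized so BA starts as a no-op

def lora_forward(x):
    # Effective weight is W + B @ A, but W itself is never modified.
    return x @ W.T + x @ A.T @ B.T

full_params = W.size                 # 262,144 for a full fine-tune
lora_params = A.size + B.size        # 8,192 trainable LoRA parameters
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

With r=8 here, LoRA trains about 3% of the parameters a full fine-tune would touch, which is why it pairs well with the quantized variant (QLoRA) for fitting training onto modest hardware.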