50+ Machine Learning (ML) Jobs in India
Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!





Job Overview:
We are looking for a skilled professional with:
- 7+ years of overall experience, including a minimum of 5 years in Computer Vision, Machine Learning, Deep Learning, and algorithm development.
- Proficiency in Data Science and Data Analysis techniques.
- Hands-on programming experience with Python, R, MATLAB or Octave.
- Experience with AI frameworks such as TensorFlow, Theano, and PyTorch, and with libraries such as PySpark, Pandas, NumPy, etc.
- Strong understanding of algorithms like Regression, SVM, Decision Trees, KNN, and Neural Networks.
Key Skills & Attributes:
- Fast learner with strong problem-solving abilities
- Innovative thinking and approach
- Excellent communication skills
- High standards of integrity, accountability, and transparency
- Exposure to or experience with international work environments
Notice Period: Immediate to 30 days


- Design and implement cloud solutions, build MLOps on Azure
- Build CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools (see the sketch after this list)
- Review data science models; handle code refactoring and optimization, containerization, deployment, versioning, and monitoring of model quality
- Test and validate data science models, and automate those tests
- Deploy code and pipelines across environments
- Track model performance metrics
- Track service performance metrics
- Communicate with a team of data scientists, data engineers, and architects; document the processes
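For illustration, a minimal sketch of the kind of pipeline orchestration listed above, assuming Airflow is the chosen orchestrator; the DAG name, task names, and the two callables are placeholders rather than anything specified in the posting:

```python
# Minimal Airflow DAG sketch for an ML retrain-and-validate flow.
# Assumes Airflow 2.x; the tasks are illustrative placeholders only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def retrain_model(**context):
    # Placeholder: load data, fit the model, persist the artifact.
    print("retraining model...")


def validate_model(**context):
    # Placeholder: score a hold-out set and fail the task if below threshold.
    print("validating model...")


with DAG(
    dag_id="ml_retrain_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,  # trigger manually or from CI
    catchup=False,
) as dag:
    retrain = PythonOperator(task_id="retrain_model", python_callable=retrain_model)
    validate = PythonOperator(task_id="validate_model", python_callable=validate_model)

    retrain >> validate
```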


Tech Lead(Fullstack) – Nexa (Conversational Voice AI Platform)
Location: Bangalore | Type: Full-time
Experience: 4+ years (preferably in early-stage startups)
Tech Stack: Python (core), Node.js, React.js
About Nexa
Nexa is a new venture by the founders of HeyCoach—Pratik Kapasi and Aditya Kamat—on a mission to build the most intuitive voice-first AI platform. We’re rethinking how humans interact with machines using natural, intelligent, and fast conversational interfaces.
We're looking for a Tech Lead to join us at the ground level. This is a high-ownership, high-speed role for builders who want to move fast and go deep.
What You’ll Do
● Design, build, and scale backend and full-stack systems for our voice AI engine
● Work primarily with Python (core logic, pipelines, model integration), and support full-stack features using Node.js and React.js
● Lead projects end-to-end—from whiteboard to production deployment
● Optimize systems for performance, scale, and real-time processing
● Collaborate with founders, ML engineers, and designers to rapidly prototype and ship features
● Set engineering best practices, own code quality, and mentor junior team members as we grow
✅ Must-Have Skills
● 4+ years of experience in Python, building scalable production systems
● Has led projects independently, from design through deployment
● Excellent at executing fast without compromising quality
● Strong foundation in system design, data structures and algorithms
● Hands-on experience with Node.js and React.js in a production setup
● Deep understanding of backend architecture—APIs, microservices, data flows
● Proven success working in early-stage startups, especially during 0→1 scaling phases
● Ability to debug and optimize across the full stack
● High autonomy—can break down big problems, prioritize, and deliver without hand-holding
🚀 What We Value
● Speed > Perfection: We move fast, ship early, and iterate
● Ownership mindset: You act like a founder, even if you're not one
● Technical depth: You’ve built things from scratch and understand what’s under the hood
● Product intuition: You don’t just write code—you ask if it solves the user’s problem
● Startup muscle: You’re scrappy, resourceful, and don’t need layers of process
● Bias for action: You unblock yourself and others. You push code and push thinking
● Humility and curiosity: You challenge ideas, accept better ones, and never stop learning
💡 Nice-to-Have
● Experience with NLP, speech interfaces, or audio processing
● Familiarity with cloud platforms (GCP/AWS), CI/CD, Docker, Kubernetes
● Contributions to open-source or technical blogs
● Prior experience integrating ML models into production systems
Why Join Nexa?
● Work directly with founders on a product that pushes boundaries in voice AI
● Be part of the core team shaping product and tech from day one
● High-trust environment focused on output and impact, not hours
● Flexible work style and a flat, fast culture


About FileSpin.io
FileSpin’s mission is to bring excellence and joy to the enterprise. We are a fully remote team spread across the UK, Europe and India. We bootstrapped in a garage (true story) and have been profitable from day one.
We value innovation and uncompromising professional excellence. Work at FileSpin is challenging, fun and highly rewarding. Come and be part of a unique company that is doing big things without the bloat.
About the Job
Location: Remote
We’re looking for Junior and Senior Platform Engineers to join us on our ambitious growth journey. In this role, you’ll help build FileSpin into the most innovative AI-enabled Digital Asset Management platform in the world. You'll have ample opportunities to solve awesome technical challenges and learn along the way.
Our roadmap focuses on creating an amazing API and UI, scaling our cloud infrastructure to handle an order of magnitude higher media processing volume, implementing ML pipelines, and tuning the stack for high performance.
Qualifications & Responsibilities
- Proficient in Troubleshooting and Infrastructure management
- Strong skills in Software Development and Programming
- Experience with Databases
- Excellent analytical and problem-solving skills
- Ability to work independently and remotely
- Bachelor's degree in Computer Science, Information Technology, or related field preferred
Essential skills
- Excellent Python Programming skills
- Good Experience with SQL
- Excellent experience with at least one web framework such as Tornado, Flask, or FastAPI
- Experience with video encoding using FFmpeg and image processing (GraphicsMagick, PIL); see the sketch after this list
- Good Experience with Git, CI/CD, DevOps tools
- Experience with React, TypeScript, HTML5/CSS3
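For context on the image-processing work above, a small sketch using Pillow (PIL); the file paths, size, and output format are placeholders:

```python
# Thumbnail generation sketch using Pillow (PIL); paths are placeholders.
from PIL import Image


def make_thumbnail(src_path: str, dst_path: str, max_size=(320, 320)) -> None:
    with Image.open(src_path) as img:
        img = img.convert("RGB")        # normalise mode for JPEG output
        img.thumbnail(max_size)         # resize in place, preserving aspect ratio
        img.save(dst_path, format="JPEG", quality=85)


# Example usage (hypothetical files):
# make_thumbnail("uploads/original.png", "derived/original_thumb.jpg")
```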
Nice to have skills
- Experience in ML model training and deployments is a plus
- Web/Proxy servers (nginx/Apache/Traefik)
- SaaS stacks such as task queues, search engines, cache servers
The intangibles
- Culture that values your contribution and gives you autonomy
- Startup ethos, no useless meetings
- Continuous Learning Budget
- An entrepreneurial workplace, we value creativity and innovation
Interview Process
Qualifying test, introductory chat, technical round, HR discussion and job offer.


Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Experience working with various ML libraries and packages like scikit-learn, NumPy, Pandas, TensorFlow, Matplotlib, Caffe, etc.
- Deep Learning Frameworks: PyTorch, spaCy, Keras
- Deep Learning Architectures: LSTM, CNN, Self-Attention and Transformers
- Experience with image processing and computer vision is a must
- Designing data science applications: Large Language Models (LLMs), Generative Pre-trained Transformers (GPT), generative AI techniques, Natural Language Processing (NLP), machine learning techniques, Python, Jupyter Notebook, common data science packages (TensorFlow, scikit-learn, Keras, etc.), LangChain, Flask, FastAPI, and prompt engineering
- Programming experience in Python
- Strong written and verbal communications
- Excellent interpersonal and collaboration skills.
Role descriptions / Expectations from the Role
Design and implement scalable and efficient data architectures to support generative AI workflows.
Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models
Apply prompt engineering techniques as required by the use case (a minimal sketch follows below)
Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks
Lead junior data engineers on tasks such as data pipeline design, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.
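As a rough illustration of the prompt engineering and LLM work described in these expectations, a minimal sketch that templates a prompt and posts it to a generic LLM HTTP endpoint; the endpoint URL, payload shape, and response schema are hypothetical and vary by provider (OpenAI, Azure OpenAI, etc.):

```python
# Prompt-templating sketch; the endpoint and payload shape are hypothetical.
import requests

PROMPT_TEMPLATE = (
    "You are a support assistant. Summarise the ticket below in two sentences "
    "and label its sentiment as positive, neutral, or negative.\n\nTicket:\n{ticket}"
)


def summarise_ticket(ticket_text: str, endpoint: str, api_key: str) -> str:
    prompt = PROMPT_TEMPLATE.format(ticket=ticket_text)
    resp = requests.post(
        endpoint,                                    # e.g. a hosted LLM inference URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "max_tokens": 120},  # payload shape varies by provider
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")               # response schema is provider-specific
```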


Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Hands-on knowledge in machine learning, deep learning, TensorFlow, Python, NLP
- Stay up to date on the latest AI developments relevant to the business domain.
- Conduct research and development processes for AI strategies.
- Experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs.
- Experience with transformer models such as BERT, GPT, RoBERTa, etc., and a solid understanding of their underlying principles is a plus.
Good-to-Have
- Have knowledge of software development methodologies, such as Agile or Scrum
- Have strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face
- Ensure the quality of code and applications through testing, peer review, and code analysis.
- Perform root cause analysis and bug fixes.
- Familiarity with version control systems, preferably Git.
- Experience with building or maintaining cloud-native applications.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is a plus.


Design and implement scalable and efficient data architectures to support generative AI workflows.
Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models.
Apply prompt engineering techniques as required by the use case.
Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks.
Lead junior data engineers on tasks such as data pipeline design, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.



- 3+ years’ experience as a Python Developer/Designer and in Machine Learning
- Understanding of performance improvement; able to write effective, scalable code
- Security and data protection solutions
- Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
- Knowledge of object-relational mapping (ORM)
- Familiarity with front-end technologies (like JavaScript and HTML5)




Job description
Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist
Location: Remote (Global)
Compensation: $50 USD per month
Company: Meta2 Labs
www.meta2labs.com
About Meta2 Labs:
Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.
As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.
The Opportunity:
We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.
Responsibilities:
- Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
- Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
- Contribute to go-to-market strategies, business development, and partnership opportunities.
- Help shape company culture, structure, and team expansion.
- Be a thought partner and problem-solver in all key strategic discussions.
- Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
- Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.
Ideal Profile:
- Passion for emerging technologies (AI, Web3, XR, etc.).
- Comfortable operating in ambiguity and working lean.
- Strong strategic thinking, communication, and collaboration skills.
- Open to wearing multiple hats and learning as you build.
- Driven by purpose and eager to gain experience in a cutting-edge tech environment.
Commitment:
- Flexible, part-time involvement.
- Remote-first and async-friendly culture.
Why Join Meta2 Labs:
- Join a purpose-led studio at the frontier of tech innovation.
- Help build impactful ventures with real-world value and long-term potential.
- Shape your own role, focus, and future within a decentralized, founder-friendly structure.
- Be part of a collaborative, intellectually curious, and builder-centric culture.
Job Types: Part-time, Internship
Pay: $50 USD per month
Work Location: Remote
Job Types: Full-time, Part-time, Internship
Contract length: 3 months
Pay: Up to ₹5,000.00 per month
Benefits:
- Flexible schedule
- Health insurance
- Work from home
Work Location: Remote


Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Leadership Opportunities
Lead and mentor junior developers in the team
Drive projects independently while collaborating with the broader team
Act as a technical liaison between the team and stakeholders to deliver effective solutions
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2–5 years of relevant experience as a Software Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively


- Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
- Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics (see the sketch after this list)
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards
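A minimal sketch of the ingestion-and-transformation pattern described above, assuming an AWS Lambda function triggered by an S3 upload; the bucket names and the CSV-to-Parquet cleanup step are illustrative only:

```python
# AWS Lambda sketch: read a CSV dropped into S3, clean it, write Parquet back.
# Bucket names are placeholders; pandas + pyarrow are assumed to be packaged with the function.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
OUTPUT_BUCKET = "example-curated-bucket"   # hypothetical


def handler(event, context):
    record = event["Records"][0]["s3"]                     # standard S3 event shape
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    df = df.dropna().drop_duplicates()                     # illustrative cleanup only

    buf = io.BytesIO()
    df.to_parquet(buf, index=False)
    s3.put_object(
        Bucket=OUTPUT_BUCKET,
        Key=key.replace(".csv", ".parquet"),
        Body=buf.getvalue(),
    )
    return {"rows_written": len(df)}
```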


Apply only if:
- You are an AI agent.
- OR you know how to build an AI agent that can do this job.
What You’ll Do: At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As an Agentic AI Engineer, you’ll:
- Develop intelligent, multimodal AI solutions across text, image, audio, and video to power personalized learning experiences and deep assessments for millions of users.
- Drive the future of live learning by building real-time interaction systems with capabilities like instant feedback, assistance, and personalized tutoring.
- Conduct proactive research and integrate the latest advancements in AI & agents into scalable, production-ready solutions that set industry benchmarks.
- Build and maintain robust, efficient data pipelines that leverage insights from millions of user interactions to create high-impact, generalizable solutions.
- Collaborate with a close-knit team of engineers, agents, founders, and key stakeholders to align AI strategies with LearnTube's mission.
About Us: At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders: LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes.
We’re proud to be recognized by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us? At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.


Are you passionate about the power of data and excited to leverage cutting-edge AI/ML to drive business impact? At Poshmark, we tackle complex challenges in personalization, trust & safety, marketing optimization, product experience, and more.
Why Poshmark?
As a leader in Social Commerce, Poshmark offers an unparalleled opportunity to work with extensive multi-platform social and commerce data. With over 130 million users generating billions of daily events and petabytes of rapidly growing data, you’ll be at the forefront of data science innovation. If building impactful, data-driven AI solutions for millions excites you, this is your place.
What You’ll Do
- Drive end-to-end data science initiatives, from ideation to deployment, delivering measurable business impact through projects such as feed personalization, product recommendation systems, and attribute extraction using computer vision.
- Collaborate with cross-functional teams, including ML engineers, product managers, and business stakeholders, to design and deploy high-impact models.
- Develop scalable solutions for key areas like product, marketing, operations, and community functions.
- Own the entire ML Development lifecycle: data exploration, model development, deployment, and performance optimization.
- Apply best practices for managing and maintaining machine learning models in production environments.
- Explore and experiment with emerging AI trends, technologies, and methodologies to keep Poshmark at the cutting edge.
Your Experience & Skills
- Ideal Experience: 6-9 years of building scalable data science solutions in a big data environment. Experience with personalization algorithms, recommendation systems, or user behavior modeling is a big plus.
- Machine Learning Knowledge: Hands-on experience with key ML algorithms, including CNNs, Transformers, and Vision Transformers. Familiarity with Large Language Models (LLMs) and techniques like RAG or PEFT is a bonus.
- Technical Expertise: Proficiency in Python, SQL, and Spark (Scala or PySpark), with hands-on experience in deep learning frameworks like PyTorch or TensorFlow. Familiarity with ML engineering tools like Flask, Docker, and MLOps practices.
- Mathematical Foundations: Solid grasp of linear algebra, statistics, probability, calculus, and A/B testing concepts (a worked example follows this list).
- Collaboration & Communication: Strong problem-solving skills and ability to communicate complex technical ideas to diverse audiences, including executives and engineers.
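For context on the A/B testing foundations listed above, a small worked sketch of a two-proportion z-test on made-up conversion counts:

```python
# Two-proportion z-test sketch for an A/B experiment; counts are made up.
from math import sqrt

from scipy.stats import norm

# Hypothetical results: conversions / visitors per variant.
conv_a, n_a = 480, 10_000
conv_b, n_b = 540, 10_000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))                        # two-sided p-value

print(f"lift = {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")
```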


AccioJob is conducting an offline hiring drive with Gaian Solutions India for the position of AI /ML Intern.
Required Skills - Python, SQL, ML libraries (scikit-learn, pandas, TensorFlow, etc.)
Apply Here - https://go.acciojob.com/tUxTdV
Eligibility -
- Degree: B.Tech/BE/BCA/MCA/M.Tech
- Graduation Year: 2023, 2024, and 2025
- Branch: All Branches
- Work Location: Hyderabad
Compensation -
- Internship stipend: 20–25k
- Internship duration: 3 months
- CTC: 4.5–6 LPA
Evaluation Process -
- Assessment at the AccioJob Skill Centre in Pune
- 2 Technical Interviews
Apply Here - https://go.acciojob.com/tUxTdV
Important: Please bring your laptop & earphones for the test.


🚀 Job Title : Python AI/ML Engineer
💼 Experience : 3+ Years
📍 Location : Gurgaon (Work from Office, 5 Days/Week)
📅 Notice Period : Immediate
Summary :
We are looking for a Python AI/ML Engineer with strong experience in developing and deploying machine learning models on Microsoft Azure.
🔧 Responsibilities :
- Build and deploy ML models using Azure ML.
- Develop scalable Python applications with cloud-first design.
- Create data pipelines using Azure Data Factory, Blob Storage & Databricks.
- Optimize performance, fix bugs, and ensure system reliability.
- Collaborate with cross-functional teams to deliver intelligent features.
✅ Requirements :
- 3+ Years of software development experience.
- Strong Python skills; experience with scikit-learn, pandas, NumPy (see the sketch after this list).
- Solid knowledge of SQL and relational databases.
- Hands-on with Azure ML, Data Factory, Blob Storage.
- Familiarity with Git, REST APIs, Docker.
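For illustration, a minimal scikit-learn training sketch of the kind this role involves, assuming tabular data in pandas; the dataset, column names, and model choice are placeholders, and the Azure ML registration/deployment step is omitted:

```python
# scikit-learn training sketch; dataset path and columns are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("customers.csv")                       # placeholder dataset
X, y = df.drop(columns=["churned"]), df["churned"]

numeric = ["tenure_months", "monthly_spend"]            # placeholder columns
categorical = ["plan", "region"]

pipeline = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
pipeline.fit(X_train, y_train)
print("hold-out accuracy:", pipeline.score(X_test, y_test))
```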


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Data Architect/Engineer
Job Summary:
We are seeking an experienced Data Engineer/Architect to join our data and analytics team. The ideal candidate will have a strong background in data engineering, ETL pipeline development, and experience working with one or more data visualization tools (e.g., Power BI, Tableau, Looker). This role will involve designing, building, and maintaining scalable data solutions that empower business decision-making.
Experience: 8 to 12 yrs
Work location: JP Nagar 3rd phase, Bangalore.
Work type: work from office
Key Responsibilities:
- Define and maintain the overall data architecture strategy in line with business goals.
- Design and implement scalable, reliable, and secure data models, data lakes, and data warehouses.
- Design, develop, and maintain robust data pipelines and ETL workflows.
- Work with stakeholders to understand data requirements and translate them into technical solutions.
- Build and manage data models, data marts, and data lakes.
- Collaborate with BI and analytics teams to support dashboards and data visualizations.
- Ensure data quality, performance, and reliability across systems.
- Optimize data processing using modern cloud-based data platforms and tools.
- Support data governance and security best practices.
- Support the development of enterprise dashboards and reporting frameworks using tools like Power BI, Tableau, or Looker.
- Ensure compliance with data security and privacy regulations.
Required Skills & Qualifications:
- 8–12 years of experience in data engineering or related roles.
- Deep understanding of data modelling, database design, and data warehousing concepts.
- Technology evaluation & selection: execute proofs of concept and proofs of value for various technology solutions and frameworks.
- Strong knowledge of SQL, Python, and/or Scala.
- Experience with ETL tools (e.g., Apache Airflow, Talend, Informatica, dbt).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP) and data services (e.g., Redshift, BigQuery, Snowflake).
- Exposure to one or more data visualization tools like Power BI, Tableau, Looker, or QlikView.
- Familiarity with data modeling, data warehousing, and real-time data streaming.
- Strong problem-solving and communication skills.
Preferred Qualifications:
- Experience working in Agile environments.
- Knowledge of CI/CD for data pipelines.
- Exposure to ML/AI data preparation is a plus.

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure, spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning (see the sketch after this list).
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
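A minimal sketch of the MLflow tracking workflow referenced above; the experiment name, parameters, and metric are placeholders, and the tracking URI is left to the environment:

```python
# MLflow experiment-tracking sketch; params and metrics are illustrative only.
import mlflow

mlflow.set_experiment("demand-forecast")        # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    mlflow.log_params(params)

    # ... train the model here ...
    validation_rmse = 12.4                      # placeholder metric value

    mlflow.log_metric("val_rmse", validation_rmse)
    # mlflow.sklearn.log_model(model, "model")  # would also version the artifact
```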
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix, or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Job Description: AI/ML Specialist
We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
Required Skills and Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as an AI/ML Specialist
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions (a serving sketch follows this list).
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
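As a rough illustration of combining DRF with a transformer model, a sketch of an API view wrapping a Hugging Face sentiment pipeline; the model choice, request field name, and URL wiring are assumptions rather than part of this role:

```python
# DRF view sketch serving a Hugging Face sentiment pipeline.
# Model choice, field names, and URL routing are illustrative assumptions.
from rest_framework.response import Response
from rest_framework.views import APIView
from transformers import pipeline

# Loaded once at import time so requests reuse the same model instance.
sentiment = pipeline("sentiment-analysis")


class SentimentView(APIView):
    def post(self, request):
        text = request.data.get("text", "")
        if not text:
            return Response({"error": "field 'text' is required"}, status=400)
        result = sentiment(text)[0]           # e.g. {"label": "POSITIVE", "score": 0.99}
        return Response({"label": result["label"], "score": float(result["score"])})

# urls.py (hypothetical):
# path("api/sentiment/", SentimentView.as_view())
```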


Python Developer
We are looking for an enthusiastic and skilled Python Developer with a passion for AI-based application development to join our growing technology team. This position offers the opportunity to work at the intersection of software engineering and data analytics, contributing to innovative AI-driven solutions that drive business impact. If you have a strong foundation in Python, a flair for problem-solving, and an eagerness to build intelligent systems, we would love to meet you!
Key Responsibilities
• Develop and deploy AI-focused applications using Python and associated frameworks.
• Collaborate with Developers, Product Owners, and Business Analysts to design and implement machine learning pipelines.
• Create interactive dashboards and data visualizations for actionable insights.
• Automate data collection, transformation, and processing tasks.
• Utilize SQL for data extraction, manipulation, and database management.
• Apply statistical methods and algorithms to derive insights from large datasets.
Required Skills and Qualifications
• 2–3 years of experience as a Python Developer, with a strong portfolio of relevant projects.
• Bachelor’s degree in Computer Science, Data Science, or a related technical field.
• In-depth knowledge of Python, including frameworks and libraries such as NumPy, Pandas, SciPy, and PyTorch.
• Proficiency in front-end technologies like HTML, CSS, and JavaScript.
• Familiarity with SQL and NoSQL databases and their best practices.
• Excellent communication and team-building skills.
• Strong problem-solving abilities with a focus on innovation and self-learning.
• Knowledge of cloud platforms such as AWS is a plus.
Additional Requirements
This opportunity enhances your work-life balance with an allowance for remote work.
To be successful your computer hardware and internet must meet these minimum requirements:
1. Laptop or Desktop:
• Operating System: Windows
• Screen Size: 14 inches
• Screen Resolution: FHD (1920×1080)
• Processor: i5 or higher
• RAM: Minimum 8 GB (must)
• Type: Windows laptop
• Software: AnyDesk
• Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing and Business Process Automation service provider. For over twenty years, ARDEM has successfully delivered business process outsourcing and business process automation services to clients in the USA and Canada. We are growing rapidly. We are constantly innovating to become a better service provider for our customers. We continuously strive for excellence to become the best Business Process Outsourcing and Business Process Automation company.


Knowledge of the Gen AI technology ecosystem, including top-tier LLMs, prompt engineering, development frameworks such as LlamaIndex and LangChain, LLM fine-tuning, and experience architecting RAG and other LLM-based solutions for enterprise use cases.
1. Strong proficiency in programming languages like Python and SQL.
2. 3+ years of experience in predictive/prescriptive analytics, including machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks such as regression, classification, ensemble models, RNN, LSTM, GRU.
3. 2+ years of experience in NLP, text analytics, Document AI, OCR, sentiment analysis, entity recognition, and topic modeling.
4. Proficiency in LangChain and open LLM frameworks to perform summarization, classification, named entity recognition, and question answering.
5. Proficiency in generative techniques: prompt engineering, vector DBs, and LLMs such as OpenAI, LlamaIndex, Azure OpenAI, and open-source LLMs.
6. Hands-on experience in GenAI technology areas including RAG architecture, fine-tuning techniques, inferencing frameworks, etc. (a minimal retrieval sketch follows this list).
7. Familiarity with big data technologies/frameworks.
8. Sound knowledge of Microsoft Azure.
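As a rough illustration of the retrieval step in a RAG architecture mentioned above, a minimal sketch using sentence-transformers and cosine similarity in place of a managed vector DB; in practice LangChain or LlamaIndex would wrap this retrieval together with an LLM call:

```python
# Minimal RAG-style retrieval sketch; documents and query are placeholders,
# and a real system would use a vector DB plus an LLM to answer the query.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within 3 business days.",
    "Refunds are issued to the original payment method.",
    "Support is available 24x7 via chat and email.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")      # small embedding model
doc_vecs = model.encode(documents, normalize_embeddings=True)

query = "How long does invoice processing take?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec                            # cosine similarity (vectors are normalised)
best = int(np.argmax(scores))
print("retrieved context:", documents[best])
# The retrieved context would then be placed into the LLM prompt.
```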


Job Title: AI & Machine Learning Developer
Location: Surat, near Railway Station
Experience: 1-2 Years
Responsibilities:
- Develop and optimize machine learning models for core product features.
- Collaborate with product and engineering teams to integrate AI solutions.
- Work with data pipelines, model training, and deployment workflows.
- Continuously improve models using feedback and new data.
Requirements:
- 1+ years of experience in ML or AI development.
- Strong Python skills; hands-on with libraries like scikit-learn, TensorFlow, or PyTorch.
- Experience in data preprocessing, model evaluation, and basic deployment.
- Familiarity with APIs and integrating ML into production (e.g., Flask/FastAPI); see the sketch below.
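A minimal sketch of the FastAPI integration point mentioned above, assuming a pre-trained scikit-learn model saved with joblib; the model path and feature names are placeholders:

```python
# FastAPI inference sketch; model path and feature names are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")        # hypothetical pre-trained model


class Features(BaseModel):
    age: float
    income: float
    visits_last_30d: float


@app.post("/predict")
def predict(features: Features):
    row = [[features.age, features.income, features.visits_last_30d]]
    prediction = model.predict(row)[0]
    return {"prediction": float(prediction)}

# Run locally (hypothetical): uvicorn app:app --reload
```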

Job Summary:
We’re seeking an innovative and business-savvy AI Strategist to lead the integration of artificial intelligence across all departments within our organization. This role is ideal for someone who has a deep understanding of AI capabilities, trends, and tools — but is more focused on strategic implementation, process improvement, and cross-functional collaboration than on technical development.
As our AI Strategist, you'll identify high-impact opportunities where AI can streamline operations, improve efficiency, and unlock new value. You’ll work closely with stakeholders in operations, marketing, HR, customer service, finance, and more to evaluate needs, recommend solutions, and support the adoption of AI-powered tools and workflows.
Key Responsibilities:
• Partner with department heads to assess workflows and identify opportunities for AI integration
• Develop and maintain a company-wide AI roadmap aligned with business goals
• Evaluate and recommend AI solutions and platforms (e.g., automation tools, chatbots, predictive analytics, NLP applications)
• Serve as a liaison between internal teams and external AI vendors or technical consultants
• Educate teams on AI use cases, capabilities, and best practices
• Oversee pilot programs and track the effectiveness of AI initiatives
• Ensure ethical and compliant AI use, including data privacy and bias mitigation
• Stay current on emerging AI trends and make recommendations to maintain a competitive edge
Qualifications:
• Bachelor’s or Master’s degree in Business, Data Science, Information Systems, or related fields
• Strong understanding of AI concepts, tools, and use cases across business functions
• 3+ years of experience in strategy, operations, digital transformation, or a related role
• Proven track record of implementing new technologies or process improvements
• Excellent communication and change management skills
• Ability to translate complex AI concepts into business value
• Strategic thinker with a data-driven mindset
• Bonus: Experience working with AI vendors, SaaS platforms, or enterprise AI tools

🚀 We’re Hiring! | AI/ML Engineer – Computer Vision
📍 Location: Noida | 🕘 Full-Time
🔍 What We’re Looking For:
• 4+ years in AI/ML (Computer Vision)
• Python, OpenCV, TensorFlow, PyTorch, etc.
• Hands-on with object detection, face recognition, classification
• Git, Docker, Linux experience
• Curious, driven, and ready to build impactful products
💡 Be part of a fast-growing team, build products used by brands like Biba, Zivame, Costa Coffee & more!

TL;DR
Founding Software Engineer (Next.js / React / TypeScript) — ₹17,000–₹24,000 net ₹/mo — 100% remote (India) — ~40 h/wk — green-field stack, total autonomy, ship every week. If you can own the full lifecycle and prove impact every Friday, apply.
🏢 Mega Style Apartments
We rent beautifully furnished 1- to 4-bedroom flats that feel like home but run like a hotel—so travellers can land, unlock the door, and live like locals from hour one. Tech is now the growth engine, and you’ll be employee #1 in engineering, laying the cornerstone for a tech platform that will redefine the premium furnished apartment experience.
✨ Why This Role Rocks
💡 Green-field Everything
Choose the stack, CI, even the linter.
🎯 Visible Impact & Ambition
Every deploy reaches real guests this week. Lay rails for ML that can boost revenue 20%.
⏱️ Radical Autonomy
Plan sprints, own deploys; no committees.
- Direct line to decision-makers → zero red tape
- Modern DX: Next.js & React (latest stable), Tailwind, Prisma/Drizzle, Vercel, optional AI copilots – building mostly server-rendered, edge-ready flows.
- Async-first, with structured weekly 1-on-1s to ensure you’re supported, not micromanaged.
- Unmatched Career Acceleration: Build an entire tech foundation from zero, making decisions that will define your trajectory and our company's success.
🗓️ Your Daily Rhythm
- Morning: Check metrics, pick highest-impact task
- Day: Build → ship → measure
- Evening: 10-line WhatsApp update (done, next, blockers)
- Friday: Live demo of working software (no mock-ups)
📈 Success Milestones
- Week 1: First feature in production
- Month 1: Automation that saves ≥10 h/week for ops
- Month 3: Core platform stable; conversion up, load times down (aiming for <1s LCP); ready for future ML pricing (stretch goal: +20% revenue within 12 months).
🔑 What You’ll Own
- Ship guest-facing features with Next.js (App Router / RSC / Server Actions).
- Automate ops—dashboards & LLM helpers that delete busy-work.
- Full lifecycle: idea → spec → code → deploy → measure → iterate.
- Set up CI/CD & observability on Vercel; a dedicated half-day refactor slot each sprint keeps tech-debt low.
- Optimise for outcomes—conversion, CWV, security, reliability; laying the groundwork for future capabilities in dynamic pricing and guest personalization.
Prototype > promise. Results > hours-in-chair.
💻 Must-Have Skills
Frontend Focus:
- Next.js (App Router/RSC/Server Actions)
- React (latest stable), TypeScript
- Tailwind CSS + shadcn/ui
- State mgmt (TanStack Query / Zustand / Jotai)
Backend & DevOps Focus:
- Node.js APIs, Prisma/Drizzle ORM
- Solid SQL schema design (e.g., PostgreSQL)
- Auth.js / Better-Auth, web security best practices
- GitHub Flow, automated tests, CI, Vercel deploys
- Excellent English; explain trade-offs to non-tech peers
- Self-starter—comfortable as the engineer (for now)
🌱 Nice-to-Haves (Learn Here or Teach Us)
A/B testing & CRO, Python/basic ML, ETL pipelines, Advanced SEO & CWV, Payment APIs (Stripe, Merchant Warrior), n8n automation
🎁 Perks & Benefits
- 100% remote anywhere in 🇮🇳
- Flexible hours (~40 h/wk)
- 12 paid days off (holiday + sick)
- ₹1,700/mo health insurance reimbursement (post-probation)
- Performance bonuses for measurable wins
- 6-month paid probation → permanent role & full benefits (this is a full-time employment role)
- Blank-canvas stack—your decisions live on
- Equity is not offered at this time; we compensate via performance bonuses and a clear path for growth, with future leadership opportunities as the company and engineering team scales.
⏩ Hiring Process (7–10 Days, Fast & Fair)
All stages are async & remote.
- Apply: 5-min form + short quiz (approx. 15 min total)
- Test 1: TypeScript & logic (1 h)
- Test 2: Next.js / React / Node / SQL deep-dive (1 h)
- Final: AI Video interview (1 h)
🚫 Who Shouldn’t Apply
- Need daily hand-holding
- Prefer consensus to decisions
- Chase perfect code over shipped value
- “Move fast & learn” culture feels scary
🚀 Ready to Own the Stack?
If you read this and thought “Finally—no bureaucracy,” and you're ready to set the technical standard for a growing company, show us something you’ve built and apply here →


We are looking for a Senior AI/ML Engineer with expertise in Generative AI (GenAI) integrations, APIs, and Machine Learning (ML) algorithms, along with strong hands-on experience in Python and statistical and predictive modeling.
Key Responsibilities:
• Develop and integrate GenAI solutions using APIs and custom models.
• Design, implement, and optimize ML algorithms for predictive modeling and data-driven insights.
• Leverage statistical techniques to improve model accuracy and performance.
• Write clean, well-documented, and testable code while adhering to coding standards and best practices.
Required Skills:
• 4+ years of experience in AI/ML, with a strong focus on GenAI integrations and APIs.
• Proficiency in Python, including libraries like TensorFlow, PyTorch, Scikit-learn, and Pandas.
• Strong expertise in statistical modeling and ML algorithms (Regression, Classification, Clustering, NLP, etc.).
• Hands-on experience with RESTful APIs and AI model deployment.

Job Title : AI/ML Engineer – DevOps & Cloud Automation
Experience : 3+ Years
Location : Gurgaon (WFO)
Job Summary :
We’re looking for a talented AI/ML Engineer to help build an AI-driven DevOps automation platform. The ideal candidate has hands-on experience in ML, NLP, and cloud automation.
Key Responsibilities :
- Develop AI/ML models for predictive analytics, anomaly detection, and automation
- Build NLP bots, observability tools, and real-time monitoring systems
- Analyze system logs/metrics and automate workflows
- Integrate AI with DevOps pipelines, cloud-native apps, and APIs
- Research & apply deep learning, generative AI, reinforcement learning.
Requirements :
- 3+ years in AI/ML, ideally in DevOps/cloud/security environments.
- Strong in Python, TensorFlow/PyTorch, NLP, and LLMs.
- Experience with AWS/GCP/Azure, Kubernetes, MLOps, and CI/CD.
- Knowledge of cybersecurity, big data, and real-time systems.
- Bonus : AIOps, RAG, blockchain AI, federated learning.

Job Description
Position: Senior Machine Learning Engineer (AWS / GCP)
Work Location: Bengaluru / Pune
Mode: Work from office
Experience : 6-8 years
Responsibilities:
- Experience in building and deploying productionized Gen AI solutions
- Hands-on experience working with AI agents
- Analyze and evaluate ML algorithms to solve specific problems, ranking them by success probability
- Explore, analyze, and visualize data to gain insights and understand its structure
- Ensure data quality across various datasets and oversee the data acquisition process when necessary
- Define model validation strategies and develop preprocessing or feature engineering workflows
- Design and implement data augmentation pipelines
- Train models, tune hyperparameters, and analyze model errors to devise solutions (a tuning sketch follows this list)
- Establish and manage A/B testing setups
- Deploy and maintain MLOps pipelines, ensuring their smooth operation
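A short sketch of the hyperparameter-tuning step mentioned above, using scikit-learn's GridSearchCV on a synthetic dataset; the estimator and parameter grid are illustrative only:

```python
# Hyperparameter tuning sketch with GridSearchCV; data and grid are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [5, 10, None],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV AUC:", round(search.best_score_, 4))
```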
Technical Expertise:
- Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, Keras, and Caffe
- Proficiency in developing supervised machine learning algorithms and deep learning models such as CNN, RNN, LSTM, BERT, NLU, and YOLO
- Familiarity with cloud platforms, specifically AWS and GCP, and tools like SageMaker and Vertex AI
- Experience in data wrangling and PySpark, with a working knowledge of EMR and Glue
- Strong Python development skills and experience working in Linux environments
- Experience containerizing applications using Docker
- Proficiency in SQL and at least one NoSQL data store such as Elasticsearch, MongoDB, Cassandra, or HBase
- Experience with branch-based deployments
Preferred Skills:
- 6-8 years’ experience in Machine Learning
- Knowledge of LangChain for building language models and vector databases
- Familiarity with embeddings and their applications in ML models
- Understanding of MLflow and Kubeflow for managing the ML lifecycle
- Consulting experience
- Experience optimizing performance across multiple GPUs


Project Overview
Be part of developing "Fenrir Security" - a groundbreaking autonomous security testing platform. We're creating an AI-powered security testing solution that integrates with an Electron desktop application. This contract role offers the opportunity to build cutting-edge autonomous agent technology for security testing applications.
Contract Details
- Duration: Initial 4-month contract with possibility of extension
- Work Arrangement: Remote with regular online collaboration
- Compensation: Competitive rates based on experience (₹1,00,000-₹1,80,000 monthly)
- Hours: Flexible, approximately 40 hours weekly
Role & Responsibilities
- Develop the core autonomous agent architecture for security testing
- Design and implement the agent's planning and execution capabilities
- Create natural language interfaces for security test configuration
- Build knowledge representation systems for security testing methodologies
- Implement security vulnerability detection and analysis components
- Integrate autonomous capabilities with the Electron application
- Create learning mechanisms to improve testing efficacy over time
- Collaborate with security experts to encode testing approaches
- Deliver functional autonomous testing components at regular milestones
- Participate in technical planning and architecture decisions
Skills & Experience
- 3+ years of AI/ML development experience
- Strong background in autonomous agent systems or similar AI architectures
- Experience with LLM integration and prompt engineering
- Proficiency in Python and relevant AI/ML frameworks
- Knowledge of natural language processing techniques
- Understanding of machine learning approaches for security applications (preferred)
- Ability to work independently with minimal supervision
- Strong problem-solving abilities and communication skills
Why Join Us
- Work at the cutting edge of AI and cybersecurity technology
- Flexible working arrangements and competitive compensation
- Opportunity to solve novel technical challenges
- Potential for equity or profit-sharing in future funding rounds
- Build portfolio-worthy work in an innovative field
Selection Process
- Initial screening call
- Technical assessment (paid task)
- Final interview with founder
- Contract discussion and onboarding


Title: Senior Software Engineer – Python (Remote: Africa, India, Portugal)
Experience: 9 to 12 Years
INR : 40 LPA - 50 LPA
Location Requirement: Candidates must be based in Africa, India, or Portugal. Applicants outside these regions will not be considered.
Must-Have Qualifications:
- 8+ years in software development with expertise in Python
- Kubernetes is important
- Strong understanding of async frameworks (e.g., asyncio)
- Experience with FastAPI, Flask, or Django for microservices
- Proficiency with Docker and Kubernetes/AWS ECS
- Familiarity with AWS, Azure, or GCP and IaC tools (CDK, Terraform)
- Knowledge of SQL and NoSQL databases (PostgreSQL, Cassandra, DynamoDB)
- Exposure to GenAI tools and LLM APIs (e.g., LangChain)
- CI/CD and DevOps best practices
- Strong communication and mentorship skills


What you will be doing at Webkul?
- Python Proficiency and API Integration:
- Demonstrate strong proficiency in Python programming language.
- Design and implement scalable, efficient, and maintainable code for machine learning applications.
- Integrate machine learning models with APIs to facilitate seamless communication between different software components.
- Machine Learning Model Deployment, Training, and Performance:
- Develop and deploy machine learning models for real-world applications.
- Conduct model training, optimization, and performance evaluation.
- Collaborate with cross-functional teams to ensure the successful integration of machine learning solutions into production systems.
- Large Language Model Understanding and Integration:
- Possess a deep understanding of large language models (LLMs) and their applications.
- Integrate LLMs into existing systems and workflows to enhance natural language processing capabilities.
- Stay abreast of the latest advancements in large language models and contribute insights to the team.
- LangChain and RAG-Based Systems (e.g., LlamaIndex):
- Familiarity with LangChain and RAG-based systems, such as LlamaIndex, will be a significant advantage.
- Work on the design and implementation of systems that leverage Langchain and RAG-based approaches for enhanced performance and functionality.
- LLM Integration with Vector Databases (e.g., Pinecone):
- Experience in integrating large language models with vector databases, such as Pinecone, for efficient storage and retrieval of information.
- Optimize the integration of LLMs with vector databases to ensure high-performance and low-latency interactions.
- Natural Language Processing (NLP):
- Expertise in NLP techniques such as tokenization, named entity recognition, sentiment analysis, and language translation.
- Experience with NLP libraries and frameworks such as NLTK, spaCy, and Hugging Face Transformers.
- Computer Vision:
- Proficiency in computer vision tasks such as image classification, object detection, segmentation, and image generation.
- Experience with computer vision libraries like OpenCV, PIL, and frameworks like TensorFlow, PyTorch, and Keras.
- Deep Learning:
- Strong understanding of deep learning concepts and architectures, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
- Proficiency in using deep learning frameworks like TensorFlow, PyTorch, and Keras.
- Experience with model optimization, hyperparameter tuning, and transfer learning.
- Data Manipulation:
- Strong skills in data manipulation and analysis using libraries like Pandas, NumPy, and SciPy.
- Proficiency in data cleaning, preprocessing, and augmentation techniques.
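To make the RAG and vector-database items above concrete, here is a minimal, framework-free sketch of the retrieval step: embed documents, find the nearest ones to a query, and assemble a prompt. The documents are toy data, and a managed vector database such as Pinecone (or a LangChain/LlamaIndex pipeline) would replace the in-memory NumPy search; this is an illustration, not a description of Webkul's stack.
```python
# Illustrative RAG retrieval sketch: sentence-transformers embeddings plus an
# in-memory cosine-similarity search. A vector database (e.g., Pinecone) would
# replace the NumPy search in production. Assumes numpy and sentence-transformers.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Orders placed before 2 pm ship the same day.",
    "Refunds are processed within 5 business days.",
    "Support is available 24x7 over chat and email.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # dot product of unit vectors = cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


def build_prompt(query: str) -> str:
    """Assemble retrieved context into a grounded prompt for the LLM call."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    print(build_prompt("How long do refunds take?"))
```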


Title: Data Engineer II (Remote – India/Portugal)
Exp: 4-8 Years
CTC: up to 30 LPA
Required Skills & Experience:
- 4+ years in data engineering or backend software development
- AI/ML experience is important
- Expert in SQL and data modeling
- Strong Python, Java, or Scala coding skills
- Experience with Snowflake, Databricks, AWS (S3, Lambda)
- Background in relational and NoSQL databases (e.g., Postgres)
- Familiar with Linux shell and systems administration
- Solid grasp of data warehouse concepts and real-time processing
- Excellent troubleshooting, documentation, and QA mindset
If interested, kindly share your updated CV to 82008 31681.


Job Title: Senior AIML Engineer – Immediate Joiner (AdTech)
Location: Pune – Onsite
About Us:
We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!
Role Overview:
We are looking for a highly skilled Senior AIML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.
Key Responsibilities:
- Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis.
- Collaborate with data scientists and engineers to build scalable AI-driven solutions.
- Analyze large volumes of data to extract meaningful insights and improve ad performance.
- Develop and deploy machine learning pipelines for automated decision-making.
- Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
- Optimize existing models for speed, scalability, and accuracy.
- Work closely with product managers to align AI solutions with business goals.
Requirements:
- Minimum 4-6 years of experience in AIML, with a focus on AdTech (Mandatory).
- Strong programming skills in Python, R, or similar languages.
- Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Expertise in data processing and real-time analytics.
- Strong understanding of digital advertising, programmatic platforms, and ad server technology.
- Excellent problem-solving and analytical skills.
- Immediate joiners preferred.
Preferred Skills:
- Knowledge of big data technologies like Spark, Hadoop, or Kafka.
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with MLOps practices and tools.
How to Apply:
If you are a passionate AIML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.
Join us in building the future of AI-driven digital advertising!


Job description:
Design, develop, and deploy ML models.
Build scalable AI solutions for real-world problems.
Optimize model performance and infrastructure.
Collaborate with the Technical Team and execute any other tasks assigned by the company/its representatives.
Required Candidate profile:
Strong Python & ML frameworks (TensorFlow/PyTorch).
Experience with data pipelines & model deployment.
Problem-solving & teamwork skills.
Passion for AI innovation.
Perks and benefits:
Learning Environment, Guidance & Support


About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Work you’ll do
As an ML/AI Engineer, you will be responsible for designing and developing intelligent software to solve business problems. You will collaborate with data scientists and domain experts to incorporate ML and AI technologies into existing or new workflows. You’ll analyze new opportunities and ideas. You’ll train and evaluate ML models, conduct experiments, develop PoCs and prototypes.
Responsibilities
- Designing, training, improving & launching machine learning models using tools such as XGBoost, TensorFlow, and PyTorch.
- Own the end-to-end ML lifecycle and MLOps, including model deployment, performance tuning, on-going evaluation and maintenance.
- Improve the way we evaluate and monitor model and system performances.
- Proposing and implementing ideas that directly impact our operational and strategic metrics.
- Create tools and frameworks that accelerate the delivery of ML/ AI products.
Who you are
You are an engineer who is passionate about using AI/ML to improve processes and products and delight customers. You have experience working with less-than-clean data, developing ML models, and orchestrating their deployment to production. You thrive on taking initiative, are very comfortable with ambiguity, and can passionately defend your decisions.
Requirements and skills
- 4+ years of experience in programming languages such as Python, PySpark, or Scala.
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization and DevOps tooling (Docker, Kubernetes), and MLOps practices and platforms like MLflow (an illustrative training-and-tracking sketch follows this list).
- Strong understanding of ML algorithms and frameworks (e.g., TensorFlow, PyTorch).
- Experience with AI foundational models and associated architectural and solution development frameworks
- Broad understanding of data structures, data engineering, statistical methodologies and machine learning models.
- Strong communication skills and teamwork.
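As a rough illustration of the train-evaluate-track loop implied by the list above (not Moative's actual pipeline), the sketch below trains an XGBoost model on a toy scikit-learn dataset and logs the run to MLflow; the dataset and hyperparameters are placeholders.
```python
# Minimal sketch: train an XGBoost classifier and track the run with MLflow.
# Assumes xgboost, scikit-learn, and mlflow are installed; toy data stands in
# for real project data.
import mlflow
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

params = {"n_estimators": 200, "max_depth": 4, "learning_rate": 0.1}

with mlflow.start_run(run_name="xgb-baseline"):
    clf = XGBClassifier(**params, eval_metric="logloss")
    clf.fit(X_train, y_train)

    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

    # Log hyperparameters, the metric, and the fitted model so runs can be
    # compared and the best one promoted for deployment.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.xgboost.log_model(clf, artifact_path="model")

print(f"test AUC: {auc:.3f}")
```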
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.



Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Lead Data Scientist who will be responsible for
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight of team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 9+ years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.); a toy A/B-test readout sketch follows this list
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
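Since A/B testing is called out above, here is a toy readout sketch (made-up counts, not this team's methodology): a two-proportion z-test comparing a challenger model against the incumbent.
```python
# Toy A/B-test readout: two-proportion z-test on conversion counts.
# Assumes statsmodels is installed; all numbers are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 468]      # control (incumbent model), treatment (challenger)
audiences = [10_000, 10_000]  # users exposed to each variant

z_stat, p_value = proportions_ztest(
    count=conversions, nobs=audiences, alternative="two-sided"
)

lift = conversions[1] / audiences[1] - conversions[0] / audiences[0]
print(f"absolute lift: {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("no significant difference detected; keep the incumbent model")
```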
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

What You will do:
● Create beautiful software experiences for our clients using design thinking, lean, and agile methodology.
● Work on software products designed from scratch using the latest cutting-edge technologies, platforms, and languages such as NodeJS, JavaScript.
● Work in a dynamic, collaborative, transparent, non-hierarchical culture.
● Work in collaborative, fast-paced, and value-driven teams to build innovative customer experiences for our clients.
● Help to grow the next generation of developers and have a positive impact on the industry.
Basic Qualifications :
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages, such as NodeJS and JavaScript.
● Server-side development experience, mainly in NodeJS, is a considerable advantage.
● UI development experience in AngularJS
● Passion for software engineering and following best coding practices.
● Good to great problem-solving and communication skills.
Nice to have Qualifications:
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with DevOps, continuous integration, and deployment.
● Exposure to big data technologies, Machine Learning, and NLP will be a plus.

What You will do:
● Play the role of Data Analyst / ML Engineer
● Collection, cleanup, exploration and visualization of data
● Perform statistical analysis on data and build ML models (an illustrative sketch follows this list)
● Implement ML models using some of the popular ML algorithms
● Use Excel to perform analytics on large amounts of data
● Understand, model, and build solutions that surface actionable business intelligence from data available in different formats
● Work with data engineers to design, build, test and monitor data pipelines for ongoing business operations
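As a small illustration of the analyze-then-model loop described above (toy data and hypothetical column names, not a client dataset), the sketch below summarizes a DataFrame with pandas and fits a simple scikit-learn model.
```python
# Toy sketch: exploratory summary with pandas, then a baseline classifier.
# Assumes pandas and scikit-learn are installed; data and columns are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "monthly_spend": [120, 80, 300, 40, 500, 60, 220, 90],
    "tenure_months": [12, 3, 30, 2, 48, 5, 18, 8],
    "churned":       [0, 1, 0, 1, 0, 1, 0, 1],
})

# Exploratory summary of the kind often done first in Excel or pandas.
print(df.groupby("churned")[["monthly_spend", "tenure_months"]].mean())

X = df[["monthly_spend", "tenure_months"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```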
Basic Qualifications:
● Experience: 4+ years.
● Hands-on development experience playing the role of Data Analyst and/or ML Engineer.
● Experience working with Excel for data analytics
● Experience with statistical modelling of large data sets
● Experience with ML models and ML algorithms
● Coding experience in Python
Nice to have Qualifications:
● Experience with wide variety of tools used in ML
● Experience with Deep learning
Benefits:
● Competitive salary.
● Hybrid work model.
● Learning and gaining experience rapidly.
● Reimbursement for basic working set up at home.
● Insurance (including top-up insurance for COVID)


Requirement:
● Role: Fullstack Developer
● Location: Noida (Hybrid)
● Experience: 1-3 years
● Type: Full-Time
Role Description: We’re seeking a Fullstack Developer to join our fast-moving team at Velto. You’ll be responsible for building robust backend services and user-facing features using a modern tech stack. In this role, you’ll also get hands-on exposure to applied AI, contributing to the development of LLM-powered workflows, agentic systems, and custom fine-tuning pipelines.
Responsibilities:
● Develop and maintain backend services using Python and FastAPI
● Build interactive frontend components using React
● Work with SQL databases, design schema, and integrate data models with Python
● Integrate and build features on top of LLMs and agent frameworks (e.g., LangChain, OpenAI, HuggingFace); an illustrative endpoint sketch follows this list
● Contribute to AI fine-tuning pipelines, retrieval-augmented generation (RAG) setups, and contract intelligence workflows
● Proficiency with unit testing libraries like Jest, React Testing Library, and pytest
● Collaborate in agile sprints to deliver high-quality, testable, and scalable code
● Ensure end-to-end performance, security, and reliability of the stack
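Purely as an illustration of the Python/FastAPI and LLM items above (not Velto's actual code), the sketch below wraps an LLM completion in a small API route; the model name, prompt, and route are placeholders, and it assumes an OPENAI_API_KEY is set in the environment.
```python
# Illustrative sketch: a FastAPI route that wraps an LLM completion.
# Assumes fastapi, pydantic, and the openai (>=1.0) client are installed;
# the model name and prompt are placeholders.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


class SummarizeRequest(BaseModel):
    text: str


@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    """Return a short summary of the submitted contract text."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the contract in 3 bullet points."},
            {"role": "user", "content": req.text},
        ],
    )
    return {"summary": completion.choices[0].message.content}
```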
Required Skills:
● Proficient in Python and experienced with web frameworks like FastAPI
● Strong grasp of JavaScript and React for frontend development
● Solid understanding of SQL and relational database integration with Python
● Exposure to LLMs, vector databases, and AI-based applications (projects, internships, or coursework count)
● Familiar with Git, REST APIs, and modern software development practices
● Bachelor’s degree in Computer Science or equivalent field
Nice to Have:
● Experience working with LangChain, RAG pipelines, or building agentic workflows
● Familiarity with containerization (Docker), basic DevOps, or cloud deployment
● Prior project or internship involving AI/ML, NLP, or SaaS products
Why Join Us?
● Work on real-world applications of AI in enterprise SaaS
● Fast-paced, early-stage startup culture with direct ownership
● Learn by doing—no layers, no red tape
● Hybrid work setup and merit-based growth


Join CD Edverse, an innovative EdTech app, as an AI Specialist! Develop a deep research tool to generate comprehensive courses and enhance AI mentors. Strong Python, NLP, and API integration skills are a must. Be part of transforming education! Apply now.

Loyalty Juggernaut Inc. (LJI) is a Silicon Valley-based product company, founded by industry veterans with decades of expertise in CRM, Loyalty, and Mobile AdTech. With a global footprint spanning the USA, Europe, UAE, India, and Latin America, we are trusted partners for customer-centric enterprises across diverse industries including Airlines, Airports, Retail, Hospitality, Banking, F&B, Telecom, Insurance and Ecosystem.
As pioneers in next-generation loyalty and customer engagement solutions, we are not just transforming loyalty—we are redefining it. With a passion for innovation and a commitment to excellence, LJI is reshaping the loyalty landscape, enabling enterprises to create meaningful, long-lasting relationships with their customers. We are THE JUGGERNAUTS, driving innovation and impact in the loyalty ecosystem.
At the core of our innovation is GRAVTY®, a revolutionary Digital Transformation SaaS Product that empowers multinational enterprises to build deeper customer connections. Designed for scalability and personalization, GRAVTY® delivers cutting-edge loyalty solutions that transform customer engagement across diverse markets.
Our Impact:
- 400+ million members connected through our platform.
- Trusted by 100+ global brands/partners, driving loyalty and brand devotion worldwide.
Proud to be a Three-Time Champion for Best Technology Innovation in Loyalty!!
Explore more about us at www.lji.io.
We are seeking a highly skilled Machine Learning Engineer to join our dynamic team. The ideal candidate will have a proven track record of developing and deploying machine learning models in production that drive significant business impact. With a passion for data science and a competitive edge demonstrated through Kaggle rankings, you will play a crucial role in advancing our Machine Learning initiatives.
What you will OWN:
- Design, develop, and deploy machine learning models to enhance customer loyalty and engagement.
- Collaborate with cross-functional teams to understand business needs and translate them into technical solutions.
- Conduct data exploration and feature engineering to improve model performance.
- Stay abreast of the latest machine learning techniques and technologies, applying them to solve challenging problems.
- Evaluate and monitor model performance, making iterative improvements to achieve business objectives.
You would make a GREAT FIT if you have :
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- Minimum of 5 years of experience in building and deploying machine learning models.
- Strong proficiency in programming languages such as Python or R.
- Experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, pandas).
- Knowledge of cloud computing services (e.g., AWS, Google Cloud, Azure) and their machine learning offerings.
- Proven track record on Kaggle (or similar) with a high ranking, demonstrating expertise in data science competitions.
- Experience with deep learning/Neural networks (nice to have).
- Ability to work in a fast-paced, agile development environment.
- Passion for innovation and a desire to stay at the cutting edge of machine learning and data science.
- Experience with building and deploying ML models in production.
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration skills.
- Knowledge of reinforcement learning and recommendation systems – Nice to have.
Why should you consider us?
- Opportunity to Innovate with and Learn from a World-class technology team
- Dabble into the future of technology - Enterprise Cloud Computing, Blockchain, Machine Learning and AI, Mobile, Digital Wallets, and much more...
- Grow with a fast growing company with global presence and recognition.

Responsibilities
• Model Development and Optimization: Design, build, and deploy NLP models, including transformer models (e.g., BERT, GPT, T5) and other SOTA architectures, as well as traditional machine learning algorithms (e.g., SVMs, Logistic Regression) for specific applications.
• Data Processing and Feature Engineering: Develop robust pipelines for text preprocessing, feature extraction, and data augmentation for structured and unstructured data.
• Model Fine-Tuning and Transfer Learning:
Fine-tune large language models for specific applications, leveraging transfer learning techniques, domain adaptation, and a mix of deep learning and traditional ML models.
• Performance Optimization: Optimize model performance for scalability and latency, applying techniques such as quantization and ONNX export (an illustrative sketch follows this list).
• Research and Innovation: Stay updated with the latest research in NLP, Deep Learning, and Generative AI, applying innovative solutions and techniques (e.g., RAG applications, Prompt engineering, Self-supervised learning).
• Stakeholder Communication: Collaborate with stakeholders to gather requirements, conduct due diligence, and communicate project updates effectively, ensuring alignment between technical solutions and business goals.
• Evaluation and Testing: Establish metrics, benchmarks, and methodologies for model evaluation, including cross-validation, and error analysis, ensuring models meet accuracy, fairness, and reliability standards.
• Deployment and Monitoring: Oversee the deployment of NLP models in production, ensuring seamless integration, model monitoring, and retraining processes.
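As a toy illustration of the latency-oriented optimizations mentioned above (quantization and ONNX export), the sketch below applies both to a small stand-in PyTorch model rather than a production transformer; the file name and layer sizes are arbitrary.
```python
# Toy sketch: post-training dynamic quantization and ONNX export in PyTorch.
# Assumes torch is installed (plus onnx if you want to inspect the exported file).
import torch
import torch.nn as nn

# Stand-in for a trained text-classifier head.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2)).eval()

# 1) Dynamic quantization: Linear weights stored as int8, typically shrinking
#    the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# 2) ONNX export of the float model for runtimes such as ONNX Runtime.
dummy_input = torch.randn(1, 768)
torch.onnx.export(
    model, dummy_input, "classifier.onnx",
    input_names=["features"], output_names=["logits"],
)

with torch.no_grad():
    print("fp32 logits:", model(dummy_input))
    print("int8 logits:", quantized(dummy_input))
```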


AI Architect
Location and Work Requirements
- Position is based in KSA or UAE
- Must be eligible to work abroad without restrictions
- Regular travel within the region required
Key Responsibilities
- Minimum 7+ years of experience in Data & Analytics domain and minimum 2 years as AI Architect
- Drive technical solution design engagements and implementations
- Support customer implementations across various deployment modes (Public SaaS, Single-Tenant SaaS, and Self-Managed Kubernetes)
- Provide advanced technical support, including deployment troubleshooting and coordinating with customer AI Architect and product development teams when needed
- Guide customers in implementing generative AI solutions, including LLM integration, vector database management, and prompt engineering
- Coordinate and oversee platform installations and configuration work
- Assist customers with platform integration, including API implementation and custom model deployment
- Establish and promote best practices for AI governance and MLOps
- Proactively identify and address potential technical challenges before they impact customer success
Required Technical Skills
- Strong programming skills in Python with experience in data processing libraries (Pandas, NumPy)
- Proficiency in SQL and experience with various database technologies including MongoDB
- Container technologies: Docker (build, modify, deploy) and Kubernetes (kubectl, helm)
- Version control systems (Git) and CI/CD practices
- Strong networking fundamentals (TCP/IP, SSH, SSL/TLS)
- Shell scripting (Linux/Unix environments)
- Experience working in on-prem, air-gapped environments
- Experience with cloud platforms (AWS, Azure, GCP)
Required AI/ML Skills
- Deep expertise in both predictive machine learning and generative AI technologies
- Proven experience implementing and operationalizing large language models (LLMs)
- Strong knowledge of vector databases, embedding technologies, and similarity search concepts
- Advanced understanding of prompt engineering, LLM evaluation, and AI governance methods
- Practical experience with machine learning deployment and production operations
- Understanding of AI safety considerations and risk mitigation strategies
Required Qualities
- Excellent English communication skills with ability to explain complex technical concepts. Arabic language is advantageous.
- Strong consultative approach to understanding and solving business problems
- Proven ability to build trust through proactive customer engagement
- Strong problem-solving abilities and attention to detail
- Ability to work independently and as part of a distributed team
- Willingness to travel within the Middle East & Africa region as needed


Role - AI Architect
Location - Noida/Remote
Mode - Hybrid - 2 days WFO
As an AI Architect at CLOUDSUFI, you will play a key role in driving our AI strategy for customers in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, and Fintech sectors. You will be responsible for delivering large-scale AI transformation programs for multinational organizations, preferably Fortune 500 companies. You will also lead a team of 10-25 Data Scientists to ensure successful project execution.
Required Experience
● Minimum 12+ years of experience in Data & Analytics domain and minimum 3 years as AI Architect
● Master’s or Ph.D. in a discipline such as Computer Science, Statistics or Applied Mathematics with an emphasis or thesis work on one or more of the following: deep learning, machine learning, Generative AI and optimization.
● Must have experience articulating and presenting the business transformation journey using AI/GenAI technology to C-level executives
● Proven experience in delivering large-scale AI and GenAI transformation programs for multinational organizations
● Strong understanding of AI and GenAI algorithms and techniques
● Must have hands-on experience in open-source software development and cloud native technologies especially on GCP tech stack
● Proficiency in Python and prominent ML packages
● Proficiency in Neural Networks is desirable, though not essential
● Experience leading and managing teams of Data Scientists, Data Engineers and Data Analysts
● Ability to work independently and as part of a team
Additional Skills (Preferred):
● Experience in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, or Fintech sectors
● Knowledge of cloud platforms (AWS, Azure, GCP)
● GCP Professional Cloud Architect and GCP Professional Machine Learning Engineer Certification
● Experience with AI frameworks and tools (TensorFlow, PyTorch, Keras)


🚀 We're Hiring: Python Developer! (1-3 Years Experience) 🐍
Are you a passionate Python Developer looking for your next challenge? Join our team and work on cutting-edge applications that make a real impact! 💡
Location: Ahmedabad (On-site)
🔹 Key Responsibilities:
✅ Design, develop, and maintain Python-based applications & services.
✅ Optimize applications for speed, scalability, and security.
✅ Work with Django, Flask, or FastAPI to build robust backend systems.
✅ Integrate databases (PostgreSQL, MySQL, MongoDB) and RESTful APIs.
✅ Collaborate with cross-functional teams to ship new features.
✅ Stay updated with industry trends & best practices.
🔹 Requirements:
✔️ 1 to 3 years of experience in Python development.
✔️ Strong expertise in Django, Flask, or FastAPI.
✔️ Experience with RESTful APIs, Git, and cloud platforms (AWS, Azure, GCP).
✔️ Familiarity with Docker, Kubernetes, and CI/CD pipelines is a plus.
✔️ Hands-on experience with LLM integration & RAG is a bonus!
💼 Interested? Apply now!
📩 Drop your resume in DM for more details!


Dear Candidate,
Greetings from Vinga Software Solutions!
Product Details:
Bautomate is an Intelligent Business Process Automation Software: a comprehensive hyperautomation platform housed within a single software system, offering an AI-powered process automation solution.
Vinga Software Solutions is the parent company of the Bautomate product (https://www.vinga.biz/about/), and you will be on the rolls of Vinga Software Solutions.
About the Product:
Bautomate offers cognitive automation solutions designed to automate repetitive tasks, eliminate bottlenecks, and enable seamless workflows.
The product combines artificial intelligence, business process management, robotic process automation (RPA), and optical character recognition (OCR) to streamline and optimize various business processes. It provides a transformative solution to empower businesses of all sizes across industries to achieve unprecedented levels of productivity and success.
Unique features of Bautomate's business process automation solutions include:
Workflow Automation: Bautomate's intuitive drag-and-drop interface enables users to easily automate complex workflows, leveraging pre-built components for effective intelligent automation.
Data Integration: Seamless integration with all existing systems and applications ensures smooth data transfer and real-time information exchange, enhancing collaboration and decision-making.
Intelligent Analytics: By harnessing advanced analytics capabilities, businesses can gain valuable insights into their processes, identify areas for improvement, and make data-driven decisions. It allows organizations to optimize their operations and drive growth based on comprehensive data analysis.
Cognitive Automation: Our comprehensive solution encompasses Intelligent Document Capture utilizing OCR & NLP, Predictive Analytics for Forecasting, Computer Vision and Image Processing, Anomaly Detection, and an Intelligent Decision Engine.
Scalability and Flexibility: Bautomate platform is highly scalable, accommodating the evolving needs of businesses as they grow. It offers flexible deployment options, including on-premises and cloud-based solutions.
About Us: We are a leading provider of business process automation, aiding firms to streamline operations, boost efficiency, and spur growth. Our suite includes AP automation, purchase order automation, P2P automation, invoice automation, IVR testing automation, forms automation, etc.
AI/ML Developer – Lead (LLM & Gen AI)
Experience Required: 5 to 9 years
Job Location: Madurai
Role Overview:
We are looking for a Senior AI/ML Developer with expertise in Large Language Models (LLM) & Generative AI. The ideal candidate should have experience in developing and deploying AI-driven solutions.
Key Responsibilities:
- Design and develop AI/ML models focusing on LLMs & Generative AI.
- Collaborate with data scientists to optimize model performance.
- Deploy AI solutions on cloud platforms (AWS, GCP, Azure).
- Lead AI projects and mentor junior developers.
Required Skills:
- Expertise in LLMs, Gen AI, NLP, and Deep Learning.
- Strong coding skills in Python, TensorFlow, PyTorch.
- Experience in ML model deployment using Docker/Kubernetes.
- Knowledge of cloud-based AI/ML services.
Kindly fill in the following details:
Total work experience :
Experience in AI/ML :
Exp in LLM & Generative AI :
Exp in NLP, deep learning, python / pytorch / tensorflow :
Exp in
Current CTC ;
Expected CTC :
Last working day :
Notice Period:
Current Location :
Native Place :
Reason for Job Change :
Marital Status :
Do you have any offer in hand :

- Model Development and Deployment:
- Design, develop, and implement machine learning models for various applications (e.g., classification, regression, natural language processing, computer vision).
- Deploy and maintain machine learning models in production environments.
- Optimize model performance and efficiency through feature engineering, hyperparameter tuning, and model selection (an illustrative pipeline sketch follows this section).
- Build and maintain scalable machine learning pipelines.
- Data Engineering and Management:
- Work with large datasets, including data cleaning, preprocessing, and feature extraction.
- Develop and maintain data pipelines for data ingestion, transformation, and storage.
- Ensure data quality and consistency.
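As a minimal illustration of the preprocessing, model selection, and hyperparameter-tuning steps listed above (toy dataset and a generic estimator, not a production pipeline), see the scikit-learn sketch below.
```python
# Minimal sketch: a scikit-learn pipeline with scaling, a classifier, and
# grid-search hyperparameter tuning. Toy dataset; estimator choice is arbitrary.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                      # preprocessing
    ("clf", RandomForestClassifier(random_state=0)),  # model
])

search = GridSearchCV(
    pipeline,
    param_grid={"clf__n_estimators": [100, 300], "clf__max_depth": [None, 5]},
    cv=5,
    scoring="accuracy",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("holdout accuracy:", search.score(X_test, y_test))
```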
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization, DevOps (Docker, Kubernetes)
- Tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Excellent communication skills are required, as the company also works with US clients.
• 3+ years in AI development, with experience in multi-agent systems, logistics, or related fields.
• Proven experience in conducting A/B testing and beta testing for AI systems.
• Hands-on experience with CrewAI and LangChain tools.
• Should have hands-on experience working with end-to-end chatbot development, specifically with Agentic and RAG-based chatbots. It is essential that the candidate has been involved in the entire lifecycle of chatbot creation, from design to deployment.
• Should have practical experience with LLM application deployment.
• Proficiency in Python and machine learning frameworks (e.g., TensorFlow, PyTorch).
• Experience in setting up monitoring dashboards with tools like Grafana, Tableau, or similar (an illustrative instrumentation sketch follows this list).
• Proficiency with cloud platforms (AWS, Azure, GCP)
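As a rough illustration of the monitoring item above (not this employer's stack), the sketch below exposes request and latency metrics from a chatbot service via prometheus_client, which a Grafana dashboard could then chart; the message handler is a stand-in for the real agentic/RAG pipeline.
```python
# Illustrative sketch: expose chatbot request/latency metrics for Prometheus,
# to be visualized in Grafana. Assumes prometheus_client is installed; the
# handle_message function is a placeholder for the real chatbot pipeline.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("chatbot_requests_total", "Chatbot requests", ["outcome"])
LATENCY = Histogram("chatbot_latency_seconds", "End-to-end response latency")


def handle_message(text: str) -> str:
    """Placeholder for the real agentic / RAG chatbot call."""
    time.sleep(random.uniform(0.05, 0.2))
    return f"echo: {text}"


def serve_message(text: str) -> str:
    with LATENCY.time():
        try:
            reply = handle_message(text)
            REQUESTS.labels(outcome="ok").inc()
            return reply
        except Exception:
            REQUESTS.labels(outcome="error").inc()
            raise


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        serve_message("hello")
```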



Dear Professionals!
We are hiring a GenAI/ML Developer!
Key Skills & Qualifications
- Strong proficiency in Python, with a focus on GenAI best practices and frameworks.
- Expertise in machine learning algorithms, data modeling, and model evaluation.
- Experience with NLP techniques, computer vision, or generative AI.
- Deep knowledge of LLMs, prompt engineering, and GenAI technologies.
- Proficiency in data analysis tools like Pandas and NumPy.
- Hands-on experience with vector databases such as Weaviate or Pinecone.
- Familiarity with cloud platforms (AWS, Azure, GCP) for AI deployment.
- Strong problem-solving skills and critical-thinking abilities.
- Experience with AI model fairness, bias detection, and adversarial testing (a toy fairness check follows this list).
- Excellent communication skills to translate business needs into technical solutions.
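Since model fairness and bias detection are listed above, here is a toy first-pass check (synthetic data, hypothetical column names, illustrative threshold): the demographic parity gap between groups in model decisions.
```python
# Toy fairness check: demographic parity gap across groups in model decisions.
# Assumes pandas is installed; data, columns, and the threshold are illustrative.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = predictions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative threshold, not a standard
    print("gap exceeds threshold -- investigate features and training data")
```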
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, AI, or a related field.
- Experience with MLOps practices for model deployment and maintenance.
- Strong understanding of data pipelines, APIs, and cloud infrastructure.
- Advanced degree in Computer Science, Machine Learning, or a related field (preferred).