50+ Machine Learning (ML) Jobs in India



Must-Have
1. Experience working with various ML libraries and packages like scikit-learn, NumPy, Pandas, TensorFlow, Matplotlib, Caffe, etc.
2. Deep learning frameworks: PyTorch, spaCy, Keras
3. Deep learning architectures: LSTM, CNN, self-attention and Transformers
4. Experience with image processing and computer vision is a must
5. Designing data science applications: Large Language Models (LLMs), Generative Pre-trained Transformers (GPT), generative AI techniques, Natural Language Processing (NLP), machine learning techniques, Python, Jupyter Notebook, common data science packages (TensorFlow, scikit-learn, Keras, etc.), LangChain, Flask, FastAPI, prompt engineering
6. Programming experience in Python
7. Strong written and verbal communication
8. Excellent interpersonal and collaboration skills
Good-to-Have
1. Experience working with vector databases and graph representations of documents
2. Experience building or maintaining MLOps pipelines
3. Experience with cloud computing infrastructure like AWS SageMaker or Azure ML for implementing ML solutions is preferred
4. Exposure to Docker and Kubernetes
Role descriptions / Expectations from the Role
1. Design and implement scalable and efficient data architectures to support generative AI workflows.
2. Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models.
3. Apply prompt engineering techniques as required by the use case.
4. Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks.
5. Lead junior data engineers on tasks such as designing data pipelines, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.
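As a quick, illustrative sketch of the LLM evaluation and benchmarking work mentioned above, here is a minimal exact-match harness. `fake_model` is a hypothetical stand-in for a real model call (e.g. an API client), and the benchmark data is invented.

```python
# Minimal exact-match evaluation harness for an LLM (illustrative sketch).

def fake_model(prompt: str) -> str:
    # Hypothetical deterministic stub so the harness can run offline;
    # replace with a real model/API call in practice.
    canned = {"Capital of France?": "Paris", "2 + 2?": "4"}
    return canned.get(prompt, "unknown")

def exact_match_accuracy(model, dataset):
    """dataset: list of (prompt, expected_answer) pairs."""
    hits = sum(1 for prompt, expected in dataset
               if model(prompt).strip().lower() == expected.strip().lower())
    return hits / len(dataset)

benchmark = [
    ("Capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("Largest ocean?", "Pacific"),
]
score = exact_match_accuracy(fake_model, benchmark)
print(f"exact-match accuracy: {score:.2f}")
```

Exact match is the simplest possible metric; real benchmarking would add task-appropriate scoring (F1, semantic similarity, human or LLM-as-judge evaluation).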

Job Title: Generative AI Engineer
Experience: 6–9 years
Job description:
We are seeking a Generative AI Engineer with 6–9 years of experience who can independently
explore, prototype, and present the art of the possible using LLMs, agentic frameworks, and
emerging Gen AI techniques. This role combines deep technical hands-on development with
non-technical influence and presentation skills.
You will contribute to key Gen AI innovation initiatives, help define new protocols (like MCP
and A2A) and deliver fully functional prototypes that push the boundaries of enterprise AI — not
just in Jupyter notebooks, but as real applications ready for production exploration.
Key Responsibilities:
· LLM Applications & Agentic Frameworks
· Design and implement end-to-end LLM applications using OpenAI, Claude, Mistral, Gemini, or LLaMA on AWS, Databricks, Azure, or GCP.
· Build intelligent, autonomous agents using LangGraph, AutoGen, LlamaIndex, Crew.ai, or custom frameworks.
· Develop multi-model, multi-agent Retrieval-Augmented Generation (RAG) applications with secure context embedding and tracing with reports.
· Rapidly explore and showcase the art of the possible through functional, demonstrable POCs
· Advanced AI Experimentation
· Fine-tune LLMs and Small Language Models (SLMs) for domain-specific use.
· Create and leverage synthetic datasets to simulate edge cases and scale training.
· Evaluate agents using custom agent evaluation frameworks (success rates, latency, reliability).
· Evaluate emerging agent communication standards: A2A (Agent-to-Agent) and MCP (Model Context Protocol).
· Business Alignment & Cross-Team Collaboration
· Translate ambiguous requirements into structured, AI-enabled solutions.
· Clearly communicate and present ideas, outcomes, and system behaviors to technical and non-technical stakeholders
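To make the RAG responsibility above concrete, here is a hand-rolled sketch of the retrieval step: documents and a query are embedded as bag-of-words vectors, ranked by cosine similarity, and the best match is placed in a grounded prompt. Real systems would use learned embeddings and a vector store; the documents and names here are invented.

```python
# Toy retrieval step of a RAG pipeline using bag-of-words cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Crude stand-in for a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Invoices must be approved within 5 business days.",
    "Production tracking updates are synced every hour.",
]
context = retrieve("how often are production updates synced", docs)[0]
prompt = f"Answer using only this context:\n{context}\nQuestion: ..."
```

The assembled `prompt` would then be sent to the LLM, which is what keeps the generation grounded in retrieved context rather than the model's parametric memory.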
Good-To-Have:
· Microsoft Copilot Studio
· DevRev
· Codium
· Cursor
· Atlassian AI
· Databricks Mosaic AI
Qualifications:
· 6–9 years of experience in software development or AI/ML engineering
· At least 3 years working with LLMs, GenAI applications, or agentic frameworks.
· Proficient in AI/ML, MLOps concepts, Python, embeddings, prompt engineering, and model orchestration.
· Proven track record of developing functional AI prototypes beyond notebooks.
· Strong presentation and storytelling skills to clearly convey GenAI concepts and value.

Role Overview:
Zolvit is looking for a highly skilled and self-driven Lead Machine Learning Engineer / Lead Data Scientist to lead the design and development of scalable, production-grade ML systems. This role is ideal for someone who thrives on solving complex problems using data, is deeply passionate about machine learning, and has a strong understanding of both classical techniques and modern AI systems like Large Language Models (LLMs).
You will work closely with engineering, product, and business teams to identify impactful ML use cases, build data pipelines, design training workflows, and ensure the deployment of robust, high-performance models at scale.
Key Responsibilities:
● Design and implement scalable ML systems, from experimentation to deployment.
● Build and maintain end-to-end data pipelines for data ingestion, preprocessing, feature engineering, and monitoring.
● Lead the development and deployment of ML models across a variety of use cases — including classical ML and LLM-based applications like summarization, classification, document understanding, and more.
● Define model training and evaluation pipelines, ensuring reproducibility and performance tracking.
● Apply statistical methods to interpret data, validate assumptions, and inform modeling decisions.
● Collaborate cross-functionally with engineers, data analysts, and product managers to solve high-impact business problems using ML.
● Ensure proper MLOps practices are in place for model versioning, monitoring, retraining, and performance management.
● Keep up-to-date with the latest advancements in AI/ML, and actively evaluate and incorporate LLM capabilities and frameworks into solutions.
● Mentor junior ML engineers and data scientists, and help scale the ML function across the organization.
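A minimal sketch of the reproducibility requirement in the training and evaluation pipelines above: make the random seed part of the experiment configuration, so the same data and seed always produce the same split. Function and variable names are illustrative.

```python
# Reproducible train/eval split: the seed is an explicit experiment parameter.
import random

def train_eval_split(rows, eval_fraction=0.2, seed=42):
    rng = random.Random(seed)   # local RNG, so no hidden global state
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(10))
train_a, eval_a = train_eval_split(data)
train_b, eval_b = train_eval_split(data)
assert train_a == train_b and eval_a == eval_b  # same seed, same split
```

The same idea extends to model training: log the seed, data version, and hyperparameters together so any reported metric can be regenerated.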
Required Qualifications:
● 7+ years of hands-on experience in ML/AI, building real-world ML systems at scale.
● Proven experience with classical ML algorithms (e.g., regression, classification,
clustering, ensemble models).
● Deep expertise in modern LLM frameworks (e.g., OpenAI, HuggingFace, LangChain)
and their integration into production workflows.
● Strong experience with Python, and frameworks such as Scikit-learn, TensorFlow,
PyTorch, or equivalent.
● Solid background in statistics and the ability to apply statistical thinking to real-world
problems.
● Experience with data engineering tools and platforms (e.g., Spark, Airflow, SQL,
Pandas, AWS Glue, etc.).
● Familiarity with cloud services (AWS preferred) and containerization tools (Docker,
Kubernetes) is a plus.
● Strong communication and leadership skills, with experience mentoring and guiding
junior team members.
● Self-starter attitude with a bias for action and ability to thrive in fast-paced environments.
● Master’s degree in Machine Learning, Artificial Intelligence, Statistics, or a related
field is preferred.
Preferred Qualifications:
● Experience deploying ML systems in microservices or event-driven architectures.
● Hands-on experience with vector databases, embeddings, and retrieval-augmented
generation (RAG) systems.
● Understanding of Responsible AI principles and practices.
Why Join Us?
● Lead the ML charter in a mission-driven company solving real-world challenges.
● Work on cutting-edge LLM use cases and platformize ML capabilities for scale.
● Collaborate with a passionate and technically strong team in a high-impact environment.
● Competitive compensation, flexible working model, and ample growth opportunities.


Job Description : Quantitative R&D Engineer
As a Quantitative R&D Engineer, you’ll explore data and design logic that becomes live trading strategies. You’ll bridge the gap between raw research and deployed, autonomous capital systems.
What You’ll Work On
- Analyze on-chain and market data to identify inefficiencies and behavioral patterns.
- Develop and prototype systematic trading strategies using statistical and ML-based techniques.
- Contribute to signal research, backtesting infrastructure, and strategy evaluation frameworks.
- Monitor and interpret DeFi protocol mechanics (AMMs, perps, lending markets) for alpha generation.
- Collaborate with engineers to turn research into production-grade, automated trading systems.
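As a toy example of the strategy prototyping and backtesting described above, the sketch below runs a moving-average crossover over a synthetic price series. A real backtesting framework would model fees, slippage, and position sizing; the prices and parameters here are illustrative.

```python
# Toy backtest of a fast/slow moving-average crossover strategy.

def sma(series, window):
    # Simple moving average; None until enough history exists.
    return [sum(series[i - window + 1:i + 1]) / window if i >= window - 1 else None
            for i in range(len(series))]

def backtest(prices, fast=2, slow=3):
    f, s = sma(prices, fast), sma(prices, slow)
    equity = 1.0
    for i in range(1, len(prices)):
        # Trade on the PRIOR bar's signal to avoid look-ahead bias.
        in_market = (f[i - 1] is not None and s[i - 1] is not None
                     and f[i - 1] > s[i - 1])
        if in_market:
            equity *= prices[i] / prices[i - 1]
    return equity

prices = [100, 101, 103, 106, 104, 107]
print(backtest(prices))
```

Note the one-bar signal lag: evaluating the signal on bar i-1 before taking the bar-i return is the minimal guard against look-ahead bias, one of the classic backtesting pitfalls.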
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Understanding of probability, statistics, or ML concepts.
- Self-driven and comfortable with ambiguity, iteration, and fast learning cycles.
- Strong interest in markets, trading, or algorithmic systems.
Bonus Points For
- Experience with backtesting or feature engineering.
- Exposure to crypto primitives (AMMs, perps, mempools, etc.)
- Projects involving alpha signals, strategy testing, or DeFi bots.
- Participation in quant contests, hackathons, or open-source work.
What You’ll Gain:
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.


A Concise Glimpse into the Role
We’re on the hunt for young, energetic, and hustling talent ready to bring fresh ideas and unstoppable drive to the table.
This isn’t just another role—it’s a launchpad for change-makers. If you’re driven to disrupt, innovate, and challenge the norm, we want you to make your mark with us.
Are you ready to redefine the future?
Apply now and step into a career where your ideas power the impossible!
Your Time Will Be Invested In
· AI/ML Model Innovation and Research
· Are you ready to lead transformative projects at the cutting edge of AI and machine learning? We're looking for a visionary mind with a passion for building groundbreaking solutions that redefine the possible.
What You'll Own
Pioneering AI/ML Model Innovation
· Take ownership of designing, developing, and deploying sophisticated AI and ML models that push boundaries.
· Spearhead the creation of generative AI applications that revolutionize real-world experiences.
· Drive end-to-end implementation of AI-driven products with a focus on measurable impact.
Data Engineering and Advanced Development
· Architect robust pipelines for data collection, pre-processing, and analysis, ensuring precision at every stage.
· Deliver clean, scalable, and high-performance Python code that empowers our AI systems to excel.
Trailblazing Research and Strategic Collaboration
· Dive into the latest research to stay ahead of AI/ML trends, identifying opportunities to integrate state-of-the-art techniques.
· Foster innovation by brainstorming with a dynamic team to conceptualize novel AI solutions.
· Elevate the team's expertise by preparing insightful technical documentation and presenting actionable findings.
What We Want You to Have
· 1-2 years of experience on live AI projects, from conceptualization to real-world deployment.
· Foundational knowledge in AI, ML, and generative AI applications.
· Proficient in Python and familiar with libraries like TensorFlow, PyTorch, Scikit-learn.
· Experience working with structured & unstructured data, as well as predictive analytics.
· Basic understanding of Deep Learning Techniques.
· Knowledge of AutoGen for building scalable multi-agent AI systems & familiarity with LangChain or similar frameworks for building AI Agents.
· Knowledge of using AI coding tools like Copilot in VS Code.
· Proficient in working with vector databases for managing and retrieving data.
· Understanding of AI/ML deployment tools such as Docker, Kubernetes.
· Understanding of JavaScript and TypeScript with React and Tailwind.
· Proficiency in Prompt Engineering for various use cases, including content generation and data extraction.
· Ability to work independently and as part of a collaborative team.
· Excellent communication skills and a strong willingness to learn.
Nice to Have
· Prior project or coursework experience in AI/ML.
· Background in Big Data technologies (Spark, Hadoop, Databricks).
· Experience with containerization and deployment tools.
· Proficiency in SQL & NoSQL databases.
· Familiarity with Data Visualization tools (e.g., Matplotlib, Seaborn).
Soft Skills
· Strong problem-solving and analytical capabilities.
· Excellent teamwork and interpersonal communication.
· Ability to thrive in a fast-paced and innovation-driven environment.


Lead Data Scientist role
Work Location- Remote
Exp-7+ Years Relevant
Notice Period- Immediate
Job Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with expertise in Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI) and Deep Learning (DL).
Mandatory Skills:
• 5+ years of work experience in writing code in Python
• Experience in using various Python libraries like Pandas, NumPy
• Experience writing good-quality Python code and with code refactoring techniques (e.g., IDEs: PyCharm, Visual Studio Code; libraries: Pylint, pycodestyle, pydocstyle, Black)
• Strong experience with AI-assisted coding, including AI assistants in existing IDEs like VS Code
• Hands-on experimentation and research across multiple AI-assisted coding tools
• Deep understanding of data structures, algorithms, and excellent problem-solving skills
• Experience in Python, Exploratory Data Analysis (EDA), Feature Engineering, Data Visualisation
• Machine Learning libraries like Scikit-learn, XGBoost
• Experience in CV, NLP or Time Series.
• Experience in building models for ML tasks (Regression, Classification)
• Experience with LLMs: LLM fine-tuning, chatbots, RAG-pipeline chatbots, LLM solutions, multi-modal LLM solutions, GPT, prompts, prompt engineering, tokens, context windows, attention mechanisms, and embeddings
• Experience with model training and serving on any of the cloud environments (AWS, GCP, Azure)
• Experience in distributed training of models on NVIDIA GPUs
• Familiarity with Dockerizing models and creating model endpoints (REST or gRPC)
• Strong working knowledge of source control tools such as Git and Bitbucket
• Prior experience designing, developing, and maintaining a Machine Learning solution through its life cycle is highly advantageous
• Strong drive to learn and master new technologies and techniques
• Strong communication and collaboration skills
• Good attitude and self-motivated
Mandatory Skills- *Strong Python coding, Machine Learning, Software Engineering, Deep Learning, Generative AI, LLM, AI Assisted coding tools.*
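One of the mandatory topics above, tokens and context windows, can be illustrated with a small sketch: keep the newest chat messages that fit within a fixed token budget. Whitespace splitting stands in for a real tokenizer (e.g. BPE), and the budget is arbitrary.

```python
# Toy context-window management: keep the newest messages within a token budget.

def count_tokens(text: str) -> int:
    # Crude proxy: real systems would use the model's own tokenizer.
    return len(text.split())

def fit_to_context(messages, budget):
    """Keep the newest messages whose total token count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # newest first
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = ["hello there", "summarise this contract", "what is the notice period"]
window = fit_to_context(history, budget=9)
```

Dropping the oldest turns is the simplest policy; production chatbots often summarise evicted history instead of discarding it outright.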


We are looking for a dynamic and skilled Business Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in Business Analysis, Power BI, Tableau, and Machine Learning.


About Us
DAITA is a German AI startup revolutionizing the global textile supply chain by digitizing factory-to-brand workflows. We are building cutting-edge AI-powered SaaS and Agentic Systems that automate order management, production tracking, and compliance — making the supply chain smarter, faster, and more transparent.
Fresh off a $500K pre-seed raise, our passionate team is on the ground in India, collaborating directly with factories and brands to build our MVP and create real-world impact. If you’re excited by the intersection of AI, SaaS, and supply chain innovation, join us to help reshape how textiles move from factory floors to global brands.
Role Overview
We’re seeking a versatile Full-Stack Engineer to join our growing engineering team. You’ll be instrumental in designing and building scalable, secure, and high-performance applications that power our AI-driven platform. Working closely with Founders, ML Engineers, and Pilot Customers, you’ll transform complex AI workflows into intuitive, production-ready features.
What You’ll Do
• Design, develop, and deploy backend services, APIs, and microservices powering our platform.
• Build responsive, user-friendly frontend applications tailored for factory and brand users.
• Integrate AI/ML models and agentic workflows into seamless production environments.
• Develop features supporting order parsing, supply chain tracking, compliance, and reporting.
• Collaborate cross-functionally to iterate rapidly, test with users, and deliver impactful releases.
• Optimize applications for performance, scalability, and cost-efficiency on cloud platforms.
• Establish and improve CI/CD pipelines, deployment processes, and engineering best practices.
• Write clear documentation and maintain clean, maintainable code.
Required Skills
• 3–5 years of professional Full-Stack development experience
• Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar
• Frontend experience with React, Vue.js, Next.js, or similar modern frameworks
• Solid knowledge and experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis, Neon)
• Strong API design skills (REST mandatory; GraphQL a plus)
• Containerization expertise with Docker
• Container orchestration and management with Kubernetes (including experience with Helm charts, operators, or custom resource definitions)
• Cloud deployment and infrastructure experience on AWS, GCP or Azure
• Hands-on experience deploying AI/ML models in cloud-native environments (AWS, GCP or Azure) with scalable infrastructure and monitoring.
• Experience with managed AI/ML services like AWS SageMaker, GCP Vertex AI, Azure ML, Together.ai, or similar
• Experience with CI/CD pipelines and DevOps tools such as Jenkins, GitHub Actions, Terraform, Ansible, or ArgoCD
• Familiarity with monitoring, logging, and observability tools like Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), or Helicone
Nice-to-have
• Experience with TypeScript for full-stack AI SaaS development
• Use of modern UI frameworks and tooling like Tailwind CSS
• Familiarity with modern AI-first SaaS concepts, such as vector databases for fast ML data retrieval, prompt engineering for LLM integration, and integration with OpenRouter or similar LLM orchestration frameworks
• Knowledge of MLOps tools like Kubeflow, MLflow, or Seldon for model lifecycle management.
• Background in building data pipelines, real-time analytics, and predictive modeling.
• Knowledge of AI-driven security tools and best practices for SaaS compliance.
• Proficiency in cloud automation, cost optimization, and DevOps for AI workflows.
• Ability to design and implement hyper-personalized, adaptive user experiences.
What We Value
• Ownership: You take full responsibility for your work and ship high-quality solutions quickly.
• Bias for Action: You’re pragmatic, proactive, and focused on delivering results.
• Clear Communication: You articulate ideas, challenges, and solutions effectively across teams.
• Collaborative Spirit: You thrive in a cross-functional, distributed team environment.
• Customer Focus: You build with empathy for end users and real-world usability.
• Curiosity & Adaptability: You embrace learning, experimentation, and pivoting when needed.
• Quality Mindset: You write clean, maintainable, and well-tested code.
Why Join DAITA?
• Be part of a mission-driven startup transforming a $1+ Trillion global industry.
• Work closely with founders and AI experts on cutting-edge technology.
• Directly impact real-world supply chains and sustainability.
• Grow your skills in AI, SaaS, and supply chain tech in a fast-paced environment.


We are seeking a passionate and knowledgeable Data Science and Data Analyst Trainer to deliver engaging and industry-relevant training programs. The trainer will be responsible for teaching core concepts in data analytics, machine learning, data visualization, and related tools and technologies. The ideal candidate will have 2-5 years of hands-on experience in the data domain and a flair for teaching and mentoring students or working professionals.


We are looking for a dynamic and skilled Data Science and Data Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in data analytics, data science, and business intelligence tools.


We are seeking a dynamic and experienced Data Analytics and Data Science Trainer to deliver high-quality training sessions, mentor learners, and design engaging course content. The ideal candidate will have a strong foundation in statistics, programming, and data visualization tools, and should be passionate about teaching and guiding aspiring professionals.


Job Title: Data Science Intern
Location: 6th Sector HSR Layout, Bangalore - Work from Office 5.5 Days
Duration: 3 Months | Stipend: Up to ₹12,000 per month
Post-Internship Offer (PPO): Available based on performance
🧑‍💻 About the Role
We are looking for a passionate and proactive Data Science Assistant Intern who is equally excited about mentoring learners and gaining hands-on experience with real-world data operations.
This is a 50% technical + 50% mentorship role that blends classroom support with practical data work. Ideal for those looking to build a career in EdTech and Applied Data Science.
🚀 What You'll Do
🚀 Technical Responsibilities (50%)
- Create and manage dashboards using Python or BI tools like Power BI/Tableau
- Write and optimize SQL queries to extract and analyze backend data
- Support in data gathering, cleaning, and basic analysis
- Contribute to building data pipelines to assist internal decision-making and analytics
🚀 Mentorship & Support (50%)
- Assist instructors during live Data Science sessions
- Solve doubts related to Python, Machine Learning, and Statistics
- Create and review quizzes, assignments, and other content
- Provide one-on-one academic support and mentoring
- Foster a positive and interactive learning environment
✅ Requirements
- Bachelor’s degree in Data Science, Computer Science, Statistics, or a related field
- Strong knowledge of:
- Python (Data Structures, Functions, OOP, Debugging)
- Pandas, NumPy, Matplotlib
- Machine Learning algorithms (scikit-learn)
- SQL and basic data wrangling
- APIs, Web Scraping, and Time-Series basics
- Advanced Excel: Lookup & reference (VLOOKUP, INDEX+MATCH, XLOOKUP, SUMIF), Logical functions (IF, AND, OR), Statistical & Aggregate Functions: (COUNTIFS, STDEV, PERCENTILE), Text cleanup (TRIM, SUBSTITUTE), Time functions (DATEDIF, NETWORKDAYS), Pivot Tables, Power Query, Conditional Formatting, Data Validation, What-If Analysis, and dynamic dashboards using charts & slicers.
- Excellent communication and interpersonal skills
- Prior mentoring, teaching, or tutoring experience is a big plus
- Passion for helping others learn and grow
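Bridging two of the requirements above, Python and Excel lookups, here is a pure-Python analogue of VLOOKUP / INDEX+MATCH over a list of records. The table and column names are made up for illustration.

```python
# Pure-Python equivalent of Excel's VLOOKUP / INDEX+MATCH over a record list.

def vlookup(key, table, key_col, value_col, default=None):
    """table: list of dicts; returns the first matching row's value_col."""
    for row in table:
        if row.get(key_col) == key:
            return row.get(value_col)
    return default

students = [
    {"id": "S01", "name": "Asha", "score": 82},
    {"id": "S02", "name": "Ravi", "score": 74},
]
name = vlookup("S02", students, "id", "name")                  # exact match
missing = vlookup("S99", students, "id", "name", "not found")  # default branch
```

In pandas the same join is a one-liner (`df_a.merge(df_b, on="id")`), which is the usual next step once data outgrows spreadsheets.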



3+ years of experience in cybersecurity, with a focus on application and cloud security.
· Proficiency in security tools such as Burp Suite, Metasploit, Nessus, OWASP ZAP, and SonarQube.
· Familiarity with data privacy regulations (GDPR, CCPA) and best practices.
· Basic knowledge of AI/ML security frameworks and tools.




About Us:
We are a UK-based conveyancing firm dedicated to transforming property transactions through cutting-edge artificial intelligence. We are seeking a talented Machine Learning Engineer with 1–2 years of experience to join our growing AI team. This role offers a unique opportunity to work on scalable ML systems and Generative AI applications in a dynamic and impactful environment.
Responsibilities:
Design, Build, and Deploy Scalable ML Models
You will be responsible for end-to-end development of machine learning and deep learning models that can be scaled to handle real-world data and use cases. This includes training, testing, validating, and deploying models efficiently in production environments.
Develop NLP-Based Automation Solutions
You'll create natural language processing pipelines that automate tasks such as document understanding, text classification, and summarisation, enabling intelligent handling of property-related documents.
Prototype and Implement Generative AI Tools
Work closely with AI researchers and developers to experiment with and implement Generative AI techniques for tasks like content generation, intelligent suggestions, and workflow automation.
Integrate ML Models with APIs and Tools
Integrate machine learning models with external APIs and internal systems to support business operations and enhance customer service workflows.
Maintain CI/CD for ML Features
Collaborate with DevOps teams to manage CI/CD pipelines that automate testing, validation, and deployment of ML features and updates.
Review, Debug, and Optimise Models
Participate in thorough code reviews and model debugging sessions. Continuously monitor and fine-tune deployed models to improve their performance and reliability.
Cross-Team Communication
Communicate technical concepts effectively across teams, translating complex ML ideas into actionable business value.
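As a from-scratch illustration of the document-classification work described above, the sketch below trains a tiny multinomial Naive Bayes model with Laplace smoothing on invented, property-flavoured examples. Production systems would use TF-IDF features or transformer embeddings instead.

```python
# Minimal multinomial Naive Bayes text classifier (Laplace smoothing).
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns per-class word counts and class counts."""
    word_counts, class_counts = defaultdict(Counter), Counter()
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def predict_nb(text, word_counts, class_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)          # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [
    ("mortgage deed signed by the lender", "deed"),
    ("title deed transfer of ownership", "deed"),
    ("monthly invoice for legal fees", "invoice"),
    ("invoice payment due in 30 days", "invoice"),
]
wc, cc = train_nb(docs)
label = predict_nb("invoice for transfer fees", wc, cc)
```

The same train/predict split maps directly onto a deployed service: the counts are the model artifact that gets versioned, and `predict_nb` is what sits behind an API endpoint.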
Essentials From Day 1:
Security and Compliance:
• Ensure ML systems are built with GDPR compliance in mind.
• Adhere to RBAC policies and maintain secure handling of personal and property data.
Sandboxing and Risk Management:
• Use sandboxed environments for testing and developing new ML features.
• Conduct basic risk analysis for model performance and data bias.
• Evaluate and mitigate potential risks in model behavior and data pipelines.
Qualifications:
· 1–2 years of professional experience in Machine Learning and Deep Learning projects.
· Proficient in Python, Object-Oriented Programming (OOPs), and Data Structures & Algorithms (DSA).
· Strong understanding of NLP and its real-world applications.
· Exposure to building scalable ML systems and deploying models into production.
· Basic working knowledge of Generative AI techniques and frameworks.
· Familiarity with CI/CD tools and experience with API-based integration.
· Excellent analytical thinking and debugging capabilities.
· Strong interpersonal and communication skills for effective team collaboration.

Hands-on knowledge in machine learning, deep learning, TensorFlow, Python, NLP 2. Stay up to date on the latest AI emergences relevant to the business domain. 3. Conduct research and development processes for AI strategies. 4. Experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs. 5. Experience with transformer models such as BERT, GPT, RoBERTa, etc, and a solid understanding of their underlying principles is a plus
Good-to-Have 1. Knowledge of software development methodologies, such as Agile or Scrum. 2. Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience. 3. Experience with natural language processing (NLP) techniques and tools, such as spaCy, NLTK, or Hugging Face. 4. Ensure the quality of code and applications through testing, peer review, and code analysis. 5. Root-cause analysis and bug correction. 6. Familiarity with version control systems, preferably Git. 7. Experience with building or maintaining cloud-native applications. 8. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
SN Role descriptions / Expectations from the Role
1 Design, develop, and configure GenAI applications to meet business requirements.
2 Optimizing existing generative AI models for improved performance, scalability, and efficiency
3 Developing and maintaining AI pipelines, including data preprocessing, feature extraction, model training, and evaluation

Job Overview:
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI, Machine Learning, or related field.
- 5+ years of hands-on experience in AI/ML solution development.
- Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, PEFT.
- Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
- Proficient in key AI libraries and frameworks:
- LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers
- NLP: SpaCy, NLTK.
- Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2
- MLOps: MLflow, FastAPI, Docker, Git
- Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
- Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
- Strong communication skills and ability to convert business problems into technical solutions.
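To illustrate the retrieval-augmented generation (RAG) pattern named in the skills above, here is a minimal sketch using a toy bag-of-words embedding and cosine similarity; the documents, vocabulary, and function names are hypothetical stand-ins for a real encoder and vector store (e.g., FAISS or Pinecone):

```python
import numpy as np

def embed(text, vocab):
    # Toy bag-of-words embedding; a production system would use a trained encoder.
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, docs, vocab, k=1):
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query, vocab)
    scores = [(float(q @ embed(d, vocab)), d) for d in docs]
    scores.sort(reverse=True)
    return [d for _, d in scores[:k]]

docs = [
    "invoices are generated at the end of each billing cycle",
    "the model registry stores versioned ml artifacts",
    "refunds are processed within five business days",
]
vocab = sorted({w for d in docs for w in d.split()})
context = retrieve("when are invoices generated", docs, vocab)
prompt = f"Answer using only this context:\n{context[0]}\n\nQ: when are invoices generated?"
```

The retrieved passage is then injected into the prompt, grounding the model's answer in known context rather than its parametric memory.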
Preferred Qualifications:
- Experience building multimodal systems (text + image, etc.)
- Practical experience with agent frameworks for autonomous or goal-directed AI.
- Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Key Responsibilities:
- Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
- Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
- Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
- Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
- Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
- Optimize models for performance, cost-efficiency, and low latency in production.
- Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
- Mentor junior AI engineers and contribute to internal AI best practices and documentation.



Data Scientist
Job Id: QX003
About Us:
QX impact was launched with a mission to make AI accessible and affordable, and to deliver AI products and solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers and may even lose them; that without insights, businesses won't be able to deliver differentiated products and services; and that without insights, businesses can't achieve the "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding markets, and digitalizing.
Position Overview:
We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.
Key Responsibilities:
- Collaborate with stakeholders to define and translate business challenges into data science solutions.
- Conduct in-depth data analysis on structured and unstructured datasets.
- Build, validate, and deploy machine learning models to solve real-world problems.
- Develop clear visualizations and presentations to communicate insights.
- Drive end-to-end project delivery, from exploration to production.
- Contribute to team knowledge sharing and mentorship activities.
Must-Have Skills:
- 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
- Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost.
- Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
- Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
- Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
- Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
- Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
- Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
- Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
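As a concrete example of the customer segmentation (RFM analysis) listed above, here is a minimal pandas sketch; the transactions table and column names are hypothetical, and a real analysis would score against a full customer base:

```python
import pandas as pd

# Hypothetical transactions table; column names are assumptions for illustration.
tx = pd.DataFrame({
    "customer": ["a", "a", "b", "c", "c", "c"],
    "days_ago": [5, 40, 90, 2, 10, 20],
    "amount":   [50, 20, 15, 200, 120, 80],
})

# RFM: recency = days since last purchase, frequency = purchase count, monetary = total spend.
rfm = tx.groupby("customer").agg(
    recency=("days_ago", "min"),
    frequency=("days_ago", "count"),
    monetary=("amount", "sum"),
)

# Score each dimension into terciles (1 = worst, 3 = best); recency is inverted
# because a smaller number of days since last purchase is better.
rfm["r"] = pd.qcut(-rfm["recency"], 3, labels=[1, 2, 3]).astype(int)
rfm["f"] = pd.qcut(rfm["frequency"], 3, labels=[1, 2, 3]).astype(int)
rfm["m"] = pd.qcut(rfm["monetary"], 3, labels=[1, 2, 3]).astype(int)
rfm["score"] = rfm[["r", "f", "m"]].sum(axis=1)
```

High-scoring customers ("champions") and low-scoring ones (churn risks) then become actionable segments for marketing and retention work.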
Good-to-Have Skills:
- Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
- Familiarity with big data processing tools like Apache Spark or Hadoop.
- Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
- Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
- Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
- Experience working within Agile project delivery methodologies.
Competencies:
· Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
· Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
· Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
· Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
· Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.



Role : AIML Engineer
Location : Madurai
Experience : 5 to 10 Yrs
Mandatory Skills : AIML, Python, SQL, ML Models, PyTorch, Pandas, Docker, AWS
Language: Python
DBs : SQL
Core Libraries:
Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
SOTA ML: boosting and ensemble models, etc.
Explainability: SHAP / LIME
Required skills:
- Deep Learning: PyTorch, PyTorch Forecasting,
- Data Processing: Pandas, NumPy, Polars (optional), PySpark
- Hyperparameter Tuning: Optuna, Amazon SageMaker Automatic Model Tuning
- Deployment & MLOps: Batch & Realtime with API endpoints, MLFlow
- Serving: TorchServe, Sagemaker endpoints / batch
- Containerization: Docker
- Orchestration & Pipelines: AWS Step Functions, AWS SageMaker Pipelines
AWS Services:
- SageMaker (Training, Inference, Tuning)
- S3 (Data Storage)
- CloudWatch (Monitoring)
- Lambda (Trigger-based Inference)
- ECR, ECS or Fargate (Container Hosting)
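Before reaching for the forecasting libraries listed above (pmdarima, Prophet, GluonTS), a seasonal-naive baseline is the usual benchmark; here is a minimal NumPy sketch with a MAPE evaluation, using made-up data:

```python
import numpy as np

def seasonal_naive(history, season, horizon):
    # Forecast by repeating the last full season; a standard baseline
    # that ARIMA/Prophet models must beat to justify their complexity.
    last = np.asarray(history[-season:], dtype=float)
    return np.tile(last, horizon // season + 1)[:horizon]

def mape(actual, forecast):
    # Mean absolute percentage error, a common forecast accuracy metric.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

history = [10, 20, 30, 40, 12, 22, 32, 42]   # toy series with season length 4
fcst = seasonal_naive(history, season=4, horizon=4)
```

A model's MAPE is then compared against this baseline's MAPE on a held-out window.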


We are building an advanced, AI-driven multi-agent software system designed to revolutionize task automation and code generation. This is a futuristic AI platform capable of:
✅ Real-time self-coding based on tasks
✅ Autonomous multi-agent collaboration
✅ AI-powered decision-making
✅ Cross-platform compatibility (Desktop, Web, Mobile)
We are hiring a highly skilled **AI Engineer & Full-Stack Developer** based in India, with a strong background in AI/ML, multi-agent architecture, and scalable, production-grade software development.
### Responsibilities:
- Build and maintain a multi-agent AI system (AutoGPT, BabyAGI, MetaGPT concepts)
- Integrate large language models (GPT-4o, Claude, open-source LLMs)
- Develop full-stack components (Backend: Python, FastAPI/Flask, Frontend: React/Next.js)
- Work on real-time task execution pipelines
- Build cross-platform apps using Electron or Flutter
- Implement Redis, Vector databases, scalable APIs
- Guide the architecture of autonomous, self-coding AI systems
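The multi-agent pattern behind systems like AutoGPT and MetaGPT can be sketched as a planner that decomposes a goal and workers that execute subtasks; everything below (agent names, task format) is illustrative, and a real system would replace the stub functions with LLM calls:

```python
# Minimal planner/worker orchestration sketch; all names are hypothetical.
def planner(goal):
    # Decompose a goal into ordered subtasks (a real planner would prompt an LLM).
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

AGENTS = {
    "research": lambda t: f"notes for {t}",
    "draft":    lambda t: f"draft of {t}",
    "review":   lambda t: f"approved {t}",
}

def run(goal):
    # Dispatch each subtask to the agent registered for its role.
    results = []
    for task in planner(goal):
        role, _, payload = task.partition(": ")
        results.append(AGENTS[role](payload))
    return results

outputs = run("landing page copy")
```

Production systems add shared memory, tool use, and error recovery on top of this loop, but the plan-dispatch-collect cycle is the core.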
### Must-Have Skills:
- Python (advanced, AI applications)
- AI/ML experience, including multi-agent orchestration
- LLM integration knowledge
- Full-stack development: React or Next.js
- Redis, Vector Databases (e.g., Pinecone, FAISS)
- Real-time applications (websockets, event-driven)
- Cloud deployment (AWS, GCP)
### Good to Have:
- Experience with code-generation AI models (Codex, GPT-4o coding abilities)
- Microservices and secure system design
- Knowledge of AI for workflow automation and productivity tools
Join us to work on cutting-edge AI technology that builds the future of autonomous software.


AccioJob is conducting a Walk-In Hiring Drive with Atomic Loops for the position of AI/ML Developer Intern.
To apply, register, and select your slot here: https://go.acciojob.com/E8wPb8
Required Skills: Python, AI, Prompting, ML understanding
Eligibility: ALL
Degree: ALL
Branch: ALL
Graduation Year: 2019, 2020, 2021, 2022, 2023, 2024, 2025, 2026
Work Details:
- Work Location: Pune (Onsite)
- CTC: 4 LPA to 5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Profile & Background Screening Round, Company Side Process
2 rounds will be for the intern role, and 3 rounds will be for the full-time role (Virtual or Face-to-Face)
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/E8wPb8


About Us
Alfred Capital - Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.
As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.
What We Build
- Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
- DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
- ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
- High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains, tuned for high-frequency trading (HFT) and real-time response.
- Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.
Evaluation Process
- HR Discussion – A brief conversation to understand your motivation and alignment with the role.
- Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
- Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
- Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
- Final Interview – A concluding round to explore your background, interests, and team fit in depth.
- Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.
Job Description : Blockchain Data & ML Engineer
As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.
What You’ll Work On
- Build and maintain ETL pipelines for ingesting and processing blockchain data.
- Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
- Evaluate model performance, tune hyperparameters, and document experimental results.
- Develop monitoring tools to track model accuracy, data drift, and system health.
- Collaborate with infrastructure and execution teams to integrate ML components into production systems.
- Design and maintain databases and storage systems to efficiently manage large-scale datasets.
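A simple starting point for the anomaly detection mentioned above is a z-score flag over a metric stream; the gas-fee numbers below are invented, and production systems would use robust or model-based detectors instead:

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    # Flag points whose distance from the mean exceeds `threshold` standard
    # deviations; a crude but common baseline for on-chain metric monitoring.
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sd > threshold]

gas_fees = [20, 21, 19, 22, 20, 21, 300, 20, 19]   # toy data with one spike
anomalies = zscore_anomalies(gas_fees)
```

Note that extreme outliers inflate the standard deviation itself, which is why robust variants (median absolute deviation, rolling windows) are preferred in practice.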
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
- Curiosity about how blockchain systems and crypto markets work under the hood.
- Self-motivated, eager to experiment and learn in a dynamic environment.
Bonus Points For
- Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
- Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
- Participation in hackathons or open-source contributions.
What You’ll Gain
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.


Brudite is an IT Training and Services company shaping the future of technology with Fortune 500 clients. We specialize in empowering young engineers to achieve their dreams through cutting-edge training, innovative products, and comprehensive services.
Proudly registered with iStart Rajasthan and Startup India, we are supported by industry leaders like NVIDIA and AWS.
Roles and Responsibilities -
- A can-do attitude to new challenges.
- Strong understanding of computer science fundamentals, including operating systems, Databases, and Networking.
- Knowledge of Python or any other programming language.
- Basic knowledge of Cloud Computing(AWS/Azure/GCP) will be a Plus.
- Basic Knowledge of Any Front-end Framework will be a Plus.
- We operate in a fast-paced, startup-like environment, so the ability to work in a dynamic, agile environment is essential.
- Strong written and verbal communication skills are essential for this role. You'll need to communicate with clients, team members, and stakeholders.
- Ability to learn and adapt to new technology trends and a curiosity to learn are essential

Technical Skills – Must have
Lead the design and development of AI-driven test automation frameworks and solutions.
Collaborate with stakeholders (e.g., product managers, developers, data scientists) to understand testing requirements and identify areas where AI automation can be effectively implemented.
Develop and implement test automation strategies for AI-based systems, encompassing various aspects like data generation, model testing, and performance evaluation.
Evaluate and select appropriate tools and technologies for AI test automation, including AI frameworks, testing tools, and automation platforms.
Define and implement best practices for AI test automation, covering areas like code standards, test case design, test data management, and ethical considerations.
Lead and mentor a team of test automation engineers in designing, developing, and executing AI test automation solutions.
Collaborate with development teams to ensure the testability of AI models and systems, providing guidance and feedback throughout the development lifecycle.
Analyze test results and identify areas for improvement in the AI automation process, continuously optimizing testing effectiveness and efficiency.
Stay up-to-date with the latest advancements and trends in AI and automation technologies, actively adapting and implementing new knowledge to enhance testing capabilities.
Knowledge in Generative AI and Conversational AI for implementation in test automation strategies is highly desirable.
Proficiency in programming languages commonly used in AI, such as Python, Java, or R.
Knowledge on AI frameworks and libraries, such as TensorFlow, PyTorch, or scikit-learn.
Familiarity with testing methodologies and practices, including Agile and DevOps.
Working experience with Python/Java and Selenium, as well as knowledge of prompt engineering.

Senior Data Engineer Job Description
Overview
The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.
Key Responsibilities
Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.
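The automated validation step described above can be as simple as a schema check that rejects bad rows before they reach downstream consumers; this sketch uses an invented schema and row format, where a real pipeline might use a framework such as Great Expectations:

```python
def validate_batch(rows, schema):
    # Collect (row index, field, problem) tuples for rows that are missing
    # required fields or carry values of the wrong type.
    errors = []
    for i, row in enumerate(rows):
        for field, ftype in schema.items():
            if field not in row or row[field] is None:
                errors.append((i, field, "missing"))
            elif not isinstance(row[field], ftype):
                errors.append((i, field, "bad type"))
    return errors

schema = {"id": int, "amount": float}
rows = [{"id": 1, "amount": 9.5}, {"id": "2", "amount": 3.0}, {"id": 3}]
problems = validate_batch(rows, schema)
```

Failing rows can then be quarantined to a dead-letter table and surfaced in monitoring rather than silently corrupting downstream datasets.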
Required Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
Experience: 5+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.
Technical Skills:
Proficiency in programming languages such as Python, Java, or Scala.
Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.
Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.
Preferred Qualifications
Experience with real-time data processing and streaming architectures.
Familiarity with machine learning pipelines and MLOps practices.
Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.
Work Environment
Location: Hybrid/Remote/On-site (depending on company policy).
Team: Collaborative, cross-functional team environment with data scientists, analysts, and business stakeholders.
Hours: Full-time, with occasional on-call responsibilities for critical data systems.


Job description
Brief Description
One of our clients is looking for a Lead Engineer in Bhopal with 5–10 years of experience. Candidates must have strong expertise in Python. Additional experience in AI/ML, MERN Stack, or Full Stack Development is a plus.
Job Description
We are seeking a highly skilled and experienced Lead Engineer – Python AI to join our dynamic team. The ideal candidate will have a strong background in AI technologies, MERN stack, and Python full stack development, with a passion for building scalable and intelligent systems. This role involves leading development efforts, mentoring junior engineers, and collaborating with cross-functional teams to deliver cutting-edge AI-driven solutions.
Key Responsibilities:
- Lead the design, development, and deployment of AI-powered applications using Python and MERN stack.
- Architect scalable and maintainable full-stack solutions integrating AI models and data pipelines.
- Collaborate with data scientists and product teams to integrate machine learning models into production systems.
- Ensure code quality, performance, and security across all layers of the application.
- Mentor and guide junior developers, fostering a culture of technical excellence.
- Stay updated with emerging technologies in AI, data engineering, and full-stack development.
- Participate in code reviews, sprint planning, and technical discussions.
Required Skills:
- 5+ years of experience in software development with a strong focus on Python full stack and MERN stack.
- Hands-on experience with AI technologies, machine learning frameworks (e.g., TensorFlow, PyTorch), and data processing tools.
- Proficiency in MongoDB, Express.js, React.js, Node.js.
- Strong understanding of RESTful APIs, microservices architecture, and cloud platforms (AWS, Azure, GCP).
- Experience with CI/CD pipelines, containerization (Docker), and version control (Git).
- Excellent problem-solving skills and ability to work in a fast-paced environment.
Education Qualification:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Certifications in AI/ML or Full Stack Development are a plus.



Position – Python Developer
Location – Navi Mumbai
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Design and implement efficient, scalable backend services using Python.
- Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions.
- Build APIs, services, and scripts to support data processing pipelines and front-end applications.
- Automate recurring tasks and ensure robust integration with cloud services.
- Maintain high standards of software quality and performance using clean coding principles and testing practices.
- Collaborate within the team to upskill and unblock each other for faster and better outcomes.
Primary Skills – Python Development
- Proficient in Python 3 and its ecosystem
- Frameworks: Flask / Django / FastAPI
- RESTful API development
- Understanding of OOPs and SOLID design principles
- Asynchronous programming (asyncio, aiohttp)
- Experience with task queues (Celery, RQ)
- Rust programming experience for systems-level or performance-critical components
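Since asynchronous programming with asyncio is listed above, a minimal sketch of the core pattern may help: running independent I/O-bound calls concurrently rather than sequentially (the function names and delays are illustrative):

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an I/O-bound call (database query, HTTP request).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # gather() schedules both coroutines concurrently, so total wall time
    # is roughly the longest delay, not the sum of delays.
    return await asyncio.gather(fetch("reads", 0.01), fetch("writes", 0.02))

results = asyncio.run(main())
```

In a Flask/Django/FastAPI service, the same pattern lets one request fan out to several backends without blocking the event loop.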
Testing & Automation
- Unit Testing: PyTest / unittest
- Automation tools: Ansible / Terraform (good to have)
- CI/CD pipelines
DevOps & Cloud
- Docker, Kubernetes (basic knowledge expected)
- Cloud platforms: AWS / Azure / GCP
- GIT and GitOps workflows
- Familiarity with containerized deployment & serverless architecture
Bonus Skills
- Data handling libraries: Pandas / NumPy
- Experience with scripting: Bash / PowerShell
- Functional programming concepts
- Familiarity with front-end integration (REST API usage, JSON handling)
Other Skills
- Innovation and thought leadership
- Interest in learning new tools, languages, workflows
- Strong communication and collaboration skills
- Basic understanding of UI/UX principles
To know more about us – https://haystackanalytics.in

- 3+ years owning ML / LLM services in production on Azure (AKS, Azure OpenAI/Azure ML) or another major cloud.
- Strong Python plus hands-on work with a modern deep-learning stack (PyTorch / TensorFlow / HF Transformers).
- Built features with LLM toolchains: prompt engineering, function calling / tools, vector stores (FAISS, Pinecone, etc.).
- Familiar with agentic AI patterns (LangChain / LangGraph, eval harnesses, guardrails) and strategies to tame LLM non-determinism.
- Comfortable with containerization & CI/CD (Docker, Kubernetes, Git-based workflows); can monitor, scale and troubleshoot live services.
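One common strategy for taming LLM non-determinism, as mentioned above, is a validate-and-retry guardrail around structured output; the `flaky_model` stub below is a hypothetical stand-in for a real LLM call:

```python
import json
import random

def flaky_model(prompt, rng):
    # Stub for an LLM call; occasionally returns malformed JSON, as real models do.
    if rng.random() < 0.5:
        return "Sure! Here is the data: {broken"
    return json.dumps({"status": "ok", "answer": 42})

def call_with_guardrail(prompt, retries=5, seed=0):
    # Parse and validate the model's output; retry on malformed or
    # incomplete responses instead of propagating them downstream.
    rng = random.Random(seed)
    for _ in range(retries):
        raw = flaky_model(prompt, rng)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue
        if "answer" in out:
            return out
    raise RuntimeError("model never produced valid output")

result = call_with_guardrail("extract the answer as JSON")
```

Libraries like LangChain formalize this as output parsers with retry, but the underlying loop is the same.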
Nice-to-Haves
- Experience in billing, collections, fintech, or professional-services SaaS.
- Knowledge of email deliverability, templating engines, or CRM systems.
- Exposure to compliance frameworks (SOC 2, ISO 27001) or secure handling of financial data.


We are seeking a highly skilled and motivated MLOps Engineer with 3-5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.
Required Skills:
• Hands-on experience with MLOps platforms such as MLflow and Kubeflow.
• Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
• Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
• Solid understanding of microservices architecture, service discovery, and load balancing.
• Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
• Proficient in Docker and container-based application deployments.
• Experience with CI/CD tools such as Jenkins or GitLab CI.
• Basic working knowledge of Kubernetes for container orchestration.
• Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
Good-to-Have Skills:
• Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
• Experience with scripting languages like Bash or PowerShell for automation tasks.
• Exposure to database scripting and data integration pipelines.
Experience & Qualifications:
• 3-5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
• At least 2+ years of hands-on experience working on ML/Al systems in production settings.

Job Title: Node.js / AI Engineer
Department: Technology
Location: Remote
Company: Mercer Talent Enterprise
Company Overview: Mercer Talent Enterprise is a leading provider of talent management solutions, dedicated to helping organizations optimize their workforce. We foster a collaborative and innovative work environment where our team members can thrive and contribute to our mission of enhancing talent strategies for our clients.
Position Overview: We are looking for a skilled Node.js / AI Engineer to join our Lighthouse Tech Team. This role is focused on application development, where you will be responsible for designing, developing, and deploying intelligent, AI-powered applications. You will leverage your expertise in Node.js and modern AI technologies to build sophisticated systems that feature Large Language Models (LLMs), AI Agents, and Retrieval-Augmented Generation (RAG) pipelines.
Key Responsibilities:
- Develop and maintain robust and scalable backend services and APIs using Node.js.
- Design, build, and integrate AI-powered features into our core applications.
- Implement and optimize Retrieval-Augmented Generation (RAG) systems to ensure accurate and context-aware responses.
- Develop and orchestrate autonomous AI agents to automate complex tasks and workflows.
- Work with third-party LLM APIs (like OpenAI, Anthropic, etc.) and open-source models, fine-tuning and adapting them for specific use cases.
- Collaborate with product managers and developers to define application requirements and deliver high-quality, AI-driven solutions.
- Ensure the performance, quality, and responsiveness of AI-powered applications.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of professional experience in backend application development with a strong focus on Node.js.
- 2+ years of hands-on experience in AI-related development, including building applications that integrate with Large Language Models (LLMs).
- Demonstrable experience developing AI agents and implementing RAG patterns.
- Familiarity with AI/ML frameworks and libraries relevant to application development (e.g., LangChain, LlamaIndex).
- Experience with vector databases (e.g., Pinecone, Chroma, Weaviate) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
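The RAG pattern named in the responsibilities reduces to retrieve-then-prompt: rank documents against the query, then feed the best matches to the model as context. A toy, language-agnostic sketch (shown in Python for brevity; a production Node.js system would use a vector database and model-generated embeddings rather than this bag-of-words stand-in, and the knowledge-base snippets here are invented for illustration):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real RAG uses model-generated dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query; the top-k become prompt context.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Hypothetical knowledge base for illustration.
docs = [
    "Employees accrue 20 days of annual leave.",
    "The office is closed on public holidays.",
    "Expense reports are due by the 5th of each month.",
]
context = retrieve("how many leave days do employees get", docs, k=1)
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: how many leave days do employees get"
```

The same retrieve-then-prompt flow is what frameworks like LangChain orchestrate, with the LLM call appended at the end.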
Benefits:
- Competitive salary and performance-based bonuses.
- Professional development opportunities.

Key Responsibilities:
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
Required Skills and Qualifications:
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 2+ years of professional experience as a Python Developer.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
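Quantization, one of the optimization techniques listed above, trades precision for size and speed by storing weights as 8-bit integers. A minimal symmetric per-tensor sketch in NumPy, illustrating the idea rather than the PyTorch/TensorFlow quantization APIs:

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization: map float weights onto [-127, 127].
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction; per-element error is bounded by scale / 2.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Real frameworks add per-channel scales, zero points for asymmetric ranges, and calibration over activations, but the storage saving (4 bytes down to 1 per weight) comes from exactly this mapping.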

We are seeking a highly skilled Senior/Lead Data Scientist with deep expertise in AI/ML/Gen AI, including Deep Learning, Computer Vision, and NLP. The ideal candidate will bring strong hands-on experience, particularly in building, fine-tuning, and deploying models, and will work directly with customers with minimal supervision.
This role requires someone who can not only lead and execute technical projects but also actively contribute to business development through customer interaction, proposal building, and RFP responses. You will be expected to take ownership of AI project execution and team leadership, while helping Tekdi expand its AI footprint.
Key Responsibilities:
- Contribute to AI business growth by working on RFPs, proposals, and solutioning activities.
- Lead the team in delivering customer requirements, ensuring quality and timely execution.
- Develop and fine-tune advanced AI/ML models using deep learning and generative AI techniques.
- Fine-tune and optimize Large Language Models (LLMs) such as GPT, BERT, T5, and LLaMA.
- Interact directly with customers to understand their business needs and provide AI-driven solutions.
- Work with Deep Learning architectures including CNNs, RNNs, and Transformer-based models.
- Leverage NLP techniques such as summarization, NER, sentiment analysis, and embeddings.
- Implement MLOps pipelines and deploy scalable AI solutions in cloud environments (AWS, GCP, Azure).
- Collaborate with cross-functional teams to integrate AI into business applications.
- Stay updated with AI/ML research and integrate new techniques into projects.
Required Skills & Qualifications:
- Minimum 6 years of experience in AI/ML/Gen AI, with at least 3+ years in Deep Learning/Computer Vision.
- Strong proficiency in Python and popular AI/ML frameworks (TensorFlow, PyTorch, Hugging Face, Scikit-learn).
- Hands-on experience with LLMs and generative models (e.g., GPT, Stable Diffusion).
- Experience with data preprocessing, feature engineering, and performance evaluation.
- Exposure to containerization and cloud deployment using Docker, Kubernetes.
- Experience with vector databases and RAG-based architectures.
- Ability to lead teams and projects, and work independently with minimal guidance.
- Experience with customer-facing roles, proposals, and solutioning.
Educational Requirements:
- Bachelor’s, Master’s, or PhD in Computer Science, Artificial Intelligence, Information Technology, or related field.
Preferred Skills (Good to Have):
- Knowledge of Reinforcement Learning (e.g., RLHF), multi-modal AI, or time-series forecasting.
- Familiarity with Graph Neural Networks (GNNs).
- Exposure to Responsible AI (RAI), AI Ethics, or AutoML platforms.
- Contributions to open-source AI projects or publications in peer-reviewed journals.
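Among the NLP techniques listed above (summarization, NER, sentiment analysis, embeddings), extractive summarization is the easiest to sketch. A toy frequency-based baseline in plain Python; production work in this role would use transformer models such as T5, and the example text is invented:

```python
import re
from collections import Counter

def summarize(text, k=1):
    # Frequency-based extractive summary: keep the k sentences whose words
    # are most common across the whole text, in their original order.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(w for s in sentences for w in re.findall(r"[a-z']+", s.lower()))

    def score(s):
        words = re.findall(r"[a-z']+", s.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

text = "The model performs well. The model overfits on small data. Budgets were discussed."
summary = summarize(text, k=1)
```

This baseline is useful as a sanity check against which abstractive LLM summaries can be compared.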

Job Description: AI/ML Specialist
We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
Required Skills and Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as an AI/ML Specialist.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
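The transformer architectures named above (BERT, GPT) are built on scaled dot-product self-attention. A NumPy sketch of the core operation, softmax(QK^T / sqrt(d_k)) V, with randomly generated matrices standing in for learned projections:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each query position takes a weighted
    # average of the value vectors, weighted by query-key similarity.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query positions, d_k = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
```

Multi-head attention runs several such maps in parallel on learned projections of the same inputs and concatenates the results.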

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure—spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning.
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
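The monitoring responsibilities above often reduce to comparing live feature distributions against the training data. A minimal Population Stability Index (PSI) drift check in NumPy; the 0.1 / 0.25 thresholds are a common rule of thumb rather than a standard, and the synthetic data is for illustration only:

```python
import numpy as np

def psi(expected, actual, bins=10):
    # Population Stability Index between training data and live data.
    # Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 alarm.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]  # interior edges
    e = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected)
    a = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
train = rng.normal(0, 1, 5000)           # feature seen at training time
live_same = rng.normal(0, 1, 5000)       # live data from the same distribution
live_shifted = rng.normal(1.0, 1, 5000)  # live data whose mean has drifted
```

In an Azure ML setup, a check like this would run on a schedule over scored batches and raise an alert (and possibly a retraining pipeline) when the index crosses the drift threshold.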
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


- Strong AI/ML OR Software Developer Profile
- Mandatory (Experience 1) - Must have 3+ YOE in Core Software Development (SDLC)
- Mandatory (Experience 2) - Must have 2+ years of experience in AI/ML, preferably in the conversational AI domain (speech to text, text to speech, speech emotion recognition) or agentic AI systems.
- Mandatory (Experience 3) - Must have hands-on experience in fine-tuning LLMs/SLMs, model optimization (quantization, distillation), and RAG
- Mandatory (Experience 4) - Must have hands-on programming experience in Python, TensorFlow, PyTorch, and model APIs (Hugging Face, LangChain, OpenAI, etc.)


We are seeking a visionary and hands-on AI/ML and Chatbot Lead to spearhead the design, development, and deployment of enterprise-wide Conversational and Generative AI solutions. This role will be instrumental in establishing and scaling our AI Lab function, defining chatbot and multimodal AI strategies, and delivering intelligent automation solutions that enhance user engagement and operational efficiency.
Key Responsibilities
- Strategy & Leadership
- Define and lead the enterprise-wide strategy for Conversational AI, Multimodal AI, and Large Language Models (LLMs).
- Establish and scale an AI/Chatbot Lab, with a clear roadmap for innovation across in-app, generative, and conversational AI use cases.
- Lead, mentor, and scale a high-performing team of AI/ML engineers and chatbot developers.
- Architecture & Development
- Architect scalable AI/ML systems encompassing presentation, orchestration, AI, and data layers.
- Build multi-turn, memory-aware conversations using frameworks like LangChain or Semantic Kernel.
- Integrate chatbots with enterprise platforms such as Salesforce, NetSuite, Slack, and custom applications via APIs/webhooks.
- Solution Delivery
- Collaborate with business stakeholders to assess needs, conduct ROI analyses, and deliver high-impact AI solutions.
- Identify and implement agentic AI capabilities and SaaS optimization opportunities.
- Deliver POCs, pilots, and MVPs, owning the full design, development, and deployment lifecycle.
- Monitoring & Governance
- Implement and monitor chatbot KPIs using tools like Kibana, Grafana, and custom dashboards.
- Champion ethical AI practices, ensuring compliance with governance, data privacy, and security standards.
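The multi-turn, memory-aware conversations mentioned above hinge on deciding what history to resend with each model call. A minimal sliding-window buffer in plain Python; frameworks like LangChain and Semantic Kernel ship more sophisticated variants (summary memory, entity memory), and the support-bot messages here are invented:

```python
class ConversationMemory:
    # Sliding-window memory: keep the system prompt plus the last `window` turns.
    def __init__(self, system_prompt, window=6):
        self.system = {"role": "system", "content": system_prompt}
        self.window = window
        self.turns = []

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})

    def messages(self):
        # Payload for a chat-completion API call: system prompt + recent turns.
        return [self.system] + self.turns[-self.window:]

memory = ConversationMemory("You are a support assistant.", window=4)
memory.add("user", "My invoice is wrong.")
memory.add("assistant", "Which invoice number?")
memory.add("user", "INV-1042.")
```

Windowing bounds token cost per call; the trade-off is that context older than the window is forgotten unless it is summarised into the system prompt.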
Must-Have Skills
- Experience & Leadership
- 10+ years of experience in AI/ML with demonstrable success in chatbot, conversational AI, and generative AI implementations.
- Proven experience in building and operationalizing AI/Chatbot architecture frameworks across enterprises.
- Technical Expertise
- Programming: Python
- AI/ML Frameworks & Libraries: LangChain, ElasticSearch, spaCy, NLTK, Hugging Face
- LLMs & NLP: GPT, BERT, RAG, prompt engineering, PEFT
- Chatbot Platforms: Azure OpenAI, Microsoft Bot Framework, CLU, CQA
- AI Deployment & Monitoring at Scale
- Conversational AI Integration: APIs, webhooks
- Infrastructure & Platforms
- Cloud: AWS, Azure, GCP
- Containerization: Docker, Kubernetes
- Vector Databases: Pinecone, Weaviate, Qdrant
- Technologies: Semantic search, knowledge graphs, intelligent document processing
- Soft Skills
- Strong leadership and team management
- Excellent communication and documentation
- Deep understanding of AI governance, compliance, and ethical AI practices
Good-to-Have Skills
- Familiarity with tools like Glean, Perplexity.ai, Rasa, XGBoost
- Experience integrating with Salesforce, NetSuite, and understanding of Customer Success domain
- Knowledge of RPA tools like UiPath and its AI Center


Skill Sets:
- Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow)
- Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models
- Strong experience in NLP, fine-tuning transformer models, and dataset preparation
- Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (Sagemaker, Vertex AI)
- Experience in containerization (Docker, Kubernetes) and CI/CD pipelines
- Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning)
- Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection
Roles and Responsibilities:
- Design and implement end-to-end ML pipelines from data ingestion to production
- Develop, fine-tune, and optimize ML models, ensuring high performance and scalability
- Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.)
- Automate model retraining, monitoring, and drift detection
- Collaborate with engineering teams for seamless ML integration
- Mentor junior team members and enforce best practices
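Of the evaluation metrics listed above (F1-score, AUC-ROC, BLEU), F1 is the simplest to compute by hand. A stdlib-only sketch that, for binary labels, matches what scikit-learn's `f1_score` would report on the same inputs:

```python
def f1_score(y_true, y_pred):
    # F1 = harmonic mean of precision and recall for binary labels (1 = positive).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]  # tp=2, fp=1, fn=1 -> precision = recall = 2/3
```

F1 is preferred over plain accuracy when positives are rare, which is the usual situation in drift alerts and fraud-style classification.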


Job Title : Senior Machine Learning Engineer
Experience : 8+ Years
Location : Chennai
Notice Period : Immediate Joiners Only
Work Mode : Hybrid
Job Summary :
We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.
The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.
Mandatory Skills :
- Programming Languages : Python
- Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
- ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering
- Operating Systems : RHEL or any Unix-based OS
- Databases : Oracle or any relational database
- Version Control : Git
- Development Methodologies : Agile
Desired Skills :
- Experience with issue tracking tools such as Azure DevOps or JIRA.
- Understanding of data science concepts.
- Familiarity with Big Data algorithms, models, and libraries.
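K-Means, one of the clustering techniques listed above, is short enough to implement directly. A plain Lloyd's-algorithm sketch in NumPy on synthetic two-cluster data; in practice one would reach for `sklearn.cluster.KMeans`:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Lloyd's algorithm: assign each point to its nearest centroid,
    # then move each centroid to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),   # tight cluster near (0, 0)
               rng.normal(5, 0.3, (50, 2))])  # tight cluster near (5, 5)
labels, centroids = kmeans(X, k=2)
```

DBSCAN, also named in the posting, takes the opposite approach: density-based, no fixed k, and it can mark outliers as noise.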


· Develop and maintain scalable back-end applications using Python frameworks such as Flask/Django/FastAPI.
· Design, build, and optimize data pipelines for ETL processes using tools like PySpark, Airflow, and other similar technologies.
· Work with relational and NoSQL databases to manage and process large datasets efficiently.
· Collaborate with data scientists to clean, transform, and prepare data for analytics and machine learning models.
· Work in a dynamic environment, at the intersection of software development and data engineering.
Key Responsibilities
- Develop and maintain backend services and APIs using Java (Spring Boot preferred).
- Integrate Large Language Models (LLMs) and Generative AI models (e.g., OpenAI, Hugging Face, LangChain) into applications.
- Collaborate with data scientists to build data pipelines and enable intelligent application features.
- Design scalable systems to support AI model inference and deployment.
- Work with cloud platforms (AWS, GCP, or Azure) for deploying AI-driven services.
- Write clean, maintainable, and well-tested code.
- Participate in code reviews and technical discussions.
Required Skills
- 3–5 years of experience in Java development (preferably with Spring Boot).
- Experience working with RESTful APIs, microservices, and cloud-based deployments.
- Exposure to LLMs, NLP, or GenAI tools (OpenAI, Cohere, Hugging Face, LangChain, etc.).
- Familiarity with Python for data science/ML integration is a plus.
- Good understanding of software engineering best practices (CI/CD, testing, etc.).
- Ability to work collaboratively in cross-functional teams.

We are seeking a passionate and experienced Data Analyst Trainer to design, develop, and deliver training content for aspiring or existing data professionals. The trainer will be responsible for teaching core data analytics skills, tools, and industry practices to ensure trainees are job-ready or upskilled.


- Design and implement cloud solutions, build MLOps on Azure
- Build CI/CD pipelines orchestration by GitLab CI, GitHub Actions, Circle CI, Airflow or similar tools
- Review data science models; refactor, optimize, containerize, deploy, version, and monitor their quality
- Test and validate data science models, and automate those tests
- Deploy code and pipelines across environments
- Track model performance metrics
- Track service performance metrics
- Communicate with a team of data scientists, data engineers, and architects; document the processes


About FileSpin.io
FileSpin’s mission is to bring excellence and joy to the enterprise. We are a fully remote team spread across the UK, Europe and India. We bootstrapped in a garage (true story) and have been profitable from day one.
We value innovation and uncompromising professional excellence. Work at FileSpin is challenging, fun and highly rewarding. Come and be part of a unique company that is doing big things without the bloat.
About the Job
Location: Remote
We’re looking for a Junior and Senior Platform Engineer to join us and be on our ambitious growth journey. In this role, you’ll help build FileSpin into the most innovative AI-Enabled Digital Asset Management platform in the world. You'll have ample opportunities to work in areas solving awesome technical challenges and learning along the way.
Our roadmap focuses on creating an amazing API and UI, scaling our cloud infrastructure to deal with an order of magnitude higher media processing volume, implementing ML-pipelines and tuning the stack for high-performance.
Qualifications & Responsibilities
- Proficient in Troubleshooting and Infrastructure management
- Strong skills in Software Development and Programming
- Experience with Databases
- Excellent analytical and problem-solving skills
- Ability to work independently and remotely
- Bachelor's degree in Computer Science, Information Technology, or related field preferred
Essential skills
- Excellent Python Programming skills
- Good Experience with SQL
- Excellent experience with at least one web framework such as Tornado, Flask, FastAPI
- Experience with Video encoding using ffmpeg, Image processing (GraphicsMagick, PIL)
- Good Experience with Git, CI/CD, DevOps tools
- Experience with React, TypeScript, HTML5/CSS3
Nice to have skills
- Experience in ML model training and deployments is a plus
- Web/Proxy servers (nginx/Apache/Traefik)
- SaaS stacks such as task queues, search engines, cache servers
The intangibles
- Culture that values your contribution and gives you autonomy
- Startup ethos, no useless meetings
- Continuous Learning Budget
- An entrepreneurial workplace, we value creativity and innovation
Interview Process
Qualifying test, introductory chat, technical round, HR discussion and job offer.


Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Experience in working with various ML libraries and packages like Scikit learn, Numpy, Pandas, Tensorflow, Matplotlib, Caffe, etc.
- Deep Learning Frameworks: PyTorch, spaCy, Keras
- Deep Learning Architectures: LSTM, CNN, Self-Attention and Transformers
- Experience in working with image processing and computer vision is a must
- Designing data science applications: Large Language Models (LLMs), Generative Pre-trained Transformers (GPT), generative AI techniques, Natural Language Processing (NLP), machine learning techniques, Python, Jupyter Notebook, common data science packages (TensorFlow, scikit-learn, Keras, etc.), LangChain, Flask, FastAPI, prompt engineering.
- Programming experience in Python
- Strong written and verbal communications
- Excellent interpersonal and collaboration skills.
Role descriptions / Expectations from the Role
Design and implement scalable and efficient data architectures to support generative AI workflows.
Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models.
Apply prompt engineering techniques as required by the use case.
Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks.
Lead junior data engineers on tasks such as designing data pipelines, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.
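Prompt engineering, as listed in the must-have skills, often amounts to assembling few-shot templates programmatically. An illustrative sketch in plain Python; the ticket-classification task, labels, and examples are invented for the illustration:

```python
FEW_SHOT_TEMPLATE = """You are a strict classifier. Label the ticket as one of: {labels}.

Examples:
{examples}

Ticket: {ticket}
Label:"""

def build_prompt(ticket, examples, labels):
    # Few-shot prompt assembly: labelled examples steer the model's output format.
    shots = "\n".join(f"Ticket: {t}\nLabel: {l}" for t, l in examples)
    return FEW_SHOT_TEMPLATE.format(labels=", ".join(labels), examples=shots, ticket=ticket)

prompt = build_prompt(
    "Payment page times out",
    examples=[("Cannot log in", "auth"), ("Refund not received", "billing")],
    labels=["auth", "billing", "bug"],
)
```

Ending the template at "Label:" nudges the model to complete with a single label, which keeps the response easy to parse.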


Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Hands-on knowledge in machine learning, deep learning, TensorFlow, Python, NLP
- Stay up to date on the latest AI developments relevant to the business domain.
- Conduct research and development processes for AI strategies.
- Experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAEs, and GANs.
- Experience with transformer models such as BERT, GPT, RoBERTa, etc., and a solid understanding of their underlying principles is a plus.
Good-to-Have
- Have knowledge of software development methodologies, such as Agile or Scrum
- Have strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face
- Ensure the quality of code and applications through testing, peer review, and code analysis.
- Perform root cause analysis and bug fixes
- Familiarity with version control systems, preferably Git.
- Experience with building or maintaining cloud-native applications.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is a plus


1. Design and implement scalable and efficient data architectures to support generative AI workflows.
2. Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models.
3. Apply prompt engineering techniques as required by the use case.
4. Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks.
5. Lead junior data engineers on tasks such as designing data pipelines, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.



1. 3+ years’ experience as a Python Developer / Designer with Machine Learning
2. Understanding of performance improvement; able to write effective, scalable code
3. Security and data protection solutions
4. Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
5. Knowledge of object-relational mapping (ORM)
6. Familiarity with front-end technologies (like JavaScript and HTML5)
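Object-relational mapping, named in the requirements above, is the idea of moving table rows in and out of objects; Django's ORM and SQLAlchemy automate it. A hand-rolled stdlib sketch with sqlite3 showing what those frameworks generate for you (the `User` table and its fields are hypothetical):

```python
import sqlite3

class User:
    # Minimal hand-written mapping between a class and a table,
    # illustrating what an ORM automates.
    def __init__(self, name, email):
        self.name, self.email = name, email

    @staticmethod
    def create_table(conn):
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
        )

    def save(self, conn):
        # Parameterized query: the ORM equivalent would be user.save().
        conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (self.name, self.email))

    @staticmethod
    def find_by_name(conn, name):
        row = conn.execute("SELECT name, email FROM users WHERE name = ?", (name,)).fetchone()
        return User(*row) if row else None

conn = sqlite3.connect(":memory:")
User.create_table(conn)
User("Asha", "asha@example.com").save(conn)
user = User.find_by_name(conn, "Asha")
```

An ORM adds migrations, relationships, and query composition on top of this row-to-object translation.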




Job description
Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist
Location: Remote (Global)
Compensation: $50 USD per month
Company: Meta2 Labs
www.meta2labs.com
About Meta2 Labs:
Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.
As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.
The Opportunity:
We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.
Responsibilities:
- Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
- Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
- Contribute to go-to-market strategies, business development, and partnership opportunities.
- Help shape company culture, structure, and team expansion.
- Be a thought partner and problem-solver in all key strategic discussions.
- Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
- Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.
Ideal Profile:
- Passion for emerging technologies (AI, Web3, XR, etc.).
- Comfortable operating in ambiguity and working lean.
- Strong strategic thinking, communication, and collaboration skills.
- Open to wearing multiple hats and learning as you build.
- Driven by purpose and eager to gain experience in a cutting-edge tech environment.
Commitment:
- Flexible, part-time involvement.
- Remote-first and async-friendly culture.
Why Join Meta2 Labs:
- Join a purpose-led studio at the frontier of tech innovation.
- Help build impactful ventures with real-world value and long-term potential.
- Shape your own role, focus, and future within a decentralized, founder-friendly structure.
- Be part of a collaborative, intellectually curious, and builder-centric culture.
Job Types: Full-time, Part-time, Internship
Contract length: 3 months
Pay: Up to ₹5,000.00 per month
Benefits:
- Flexible schedule
- Health insurance
- Work from home
Work Location: Remote


Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Leadership Opportunities
Lead and mentor junior developers in the team
Drive projects independently while collaborating with the broader team
Act as a technical liaison between the team and stakeholders to deliver effective solutions
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2–5 years of relevant experience as a Software Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively