50+ Machine Learning (ML) Jobs in India




Job Overview
- Level 1: Minimum 5 years of previous working experience as a Data Scientist
- Level 2: 3 to 5 years of previous working experience as a Data Scientist
- In-depth knowledge of Agile processes and principles
- Outstanding communication, presentation, and leadership skills
- Excellent organizational and time management skills
- Sharp analytical and problem-solving skills
- Creative thinker with a vision
- Flexibility and capacity for adaptation
- Presentation skills (project reviews with customers and top management)
- Interest in industrial & automotive topics
- Fluent in English
- Ability to work in international teams
- Engineering degree with a strong background in mathematics and computer science; a PhD in a quantitative field and/or a minimum of 3 years of experience in machine learning is a plus
- Excellent understanding of traditional machine learning techniques and algorithms such as k-NN, SVM, and Random Forests (see the short scikit-learn sketch after this list)
- Understanding of deep learning techniques
- Understanding of, and ideally experience with, Reinforcement Learning methods
- Experience using ML/DL frameworks (scikit-learn, XGBoost, TensorFlow, Keras, MXNet, etc.)
- Proficiency in at least one programming language (preferably Python)
- Experience with SQL and NoSQL databases
- Excellent verbal and written English skills are mandatory
Appreciated extra skills:
- Experience in signal and image processing
- Experience in forecasting and time series modeling
- Experience with computer vision libraries like OpenCV
- Experience using cloud platforms
- Experience with version control systems (Git)
- Interest in IoT and hardware adapted to ML tasks
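To make the classical-ML requirement above concrete, here is a minimal sketch of training and evaluating a Random Forest with scikit-learn. The dataset and hyperparameters are illustrative assumptions, not details from the job description.

```python
# Minimal illustration of the classical-ML stack named above (scikit-learn, Random Forests).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```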



Job Overview
- Min. 5 years of experience in Computer Vision, Machine Learning, Deep Learning, and the associated implementation of algorithms
- Knowledge and experience in:
  - Data Science/Data Analysis techniques
  - Hands-on programming experience in Python, R, and MATLAB or Octave
  - Python frameworks for AI such as TensorFlow, PySpark, and Theano, and libraries like PyTorch, Pandas, NumPy, etc.
  - Algorithms such as Regression, SVM, Decision Trees, KNN, and Neural Networks
Skills & Attributes:
- Fast learner with strong problem-solving skills
- Innovative thinking
- Excellent communication skills
- Integrity, accountability, and transparency
- International working mindset

- 3+ years owning ML/LLM services in production on Azure (AKS, Azure OpenAI/Azure ML) or another major cloud.
- Strong Python plus hands-on work with a modern deep-learning stack (PyTorch / TensorFlow / HF Transformers).
- Built features with LLM toolchains: prompt engineering, function calling / tools, vector stores (FAISS, Pinecone, etc.); a minimal vector-store sketch follows this posting.
- Familiar with agentic AI patterns (LangChain / LangGraph, eval harnesses, guardrails) and strategies to tame LLM non-determinism.
- Comfortable with containerization & CI/CD (Docker, Kubernetes, Git-based workflows); can monitor, scale, and troubleshoot live services.
Nice-to-Haves
- Experience in billing, collections, fintech, or professional-services SaaS.
- Knowledge of email deliverability, templating engines, or CRM systems.
- Exposure to compliance frameworks (SOC 2, ISO 27001) or secure handling of financial data.
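As a companion to the vector-store requirement above, here is a minimal FAISS retrieval sketch. The random vectors stand in for real embeddings (which would normally come from an embedding model such as an Azure OpenAI endpoint); dimensions and counts are illustrative assumptions.

```python
import numpy as np
import faiss

# Toy embeddings standing in for document vectors produced by an embedding model.
dim = 128
doc_vectors = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)   # exact L2 search; IVF/HNSW indexes scale better
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # ids of the 5 nearest documents
print(ids[0], distances[0])
```

In a RAG pipeline, the retrieved documents would then be passed as context to the LLM prompt.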



We are seeking a highly skilled and motivated MLOps Engineer with 3-5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.
Required Skills:
• Hands-on experience with MLOps platforms such as MLflow and Kubeflow (a minimal MLflow example follows this posting).
• Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
• Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
• Solid understanding of microservices architecture, service discovery, and load balancing.
• Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
• Proficient in Docker and container-based application deployments.
• Experience with CI/CD tools such as Jenkins or GitLab CI.
• Basic working knowledge of Kubernetes for container orchestration.
• Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
Good-to-Have Skills:
• Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
• Experience with scripting languages like Bash or PowerShell for automation tasks.
• Exposure to database scripting and data integration pipelines.
Experience & Qualifications:
• 3-5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
• At least 2+ years of hands-on experience working on ML/Al systems in production settings.
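As a minimal illustration of the MLflow requirement above, the sketch below logs a run's parameters, metric, and model artifact. The experiment name, model, and metric are illustrative assumptions.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # model artifact stored with the run
```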

Job Title: Node.js / AI Engineer
Department: Technology
Location: Remote
Company: Mercer Talent Enterprise
Company Overview: Mercer Talent Enterprise is a leading provider of talent management solutions, dedicated to helping organizations optimize their workforce. We foster a collaborative and innovative work environment where our team members can thrive and contribute to our mission of enhancing talent strategies for our clients.
Position Overview: We are looking for a skilled Node.js / AI Engineer to join our Lighthouse Tech Team. This role is focused on application development, where you will be responsible for designing, developing, and deploying intelligent, AI-powered applications. You will leverage your expertise in Node.js and modern AI technologies to build sophisticated systems that feature Large Language Models (LLMs), AI Agents, and Retrieval-Augmented Generation (RAG) pipelines.
Key Responsibilities:
- Develop and maintain robust and scalable backend services and APIs using Node.js.
- Design, build, and integrate AI-powered features into our core applications.
- Implement and optimize Retrieval-Augmented Generation (RAG) systems to ensure accurate and context-aware responses.
- Develop and orchestrate autonomous AI agents to automate complex tasks and workflows.
- Work with third-party LLM APIs (like OpenAI, Anthropic, etc.) and open-source models, fine-tuning and adapting them for specific use cases.
- Collaborate with product managers and developers to define application requirements and deliver high-quality, AI-driven solutions.
- Ensure the performance, quality, and responsiveness of AI-powered applications.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of professional experience in backend application development with a strong focus on Node.js.
- 2+ years of hands-on experience in AI-related development, including building applications that integrate with Large Language Models (LLMs).
- Demonstrable experience developing AI agents and implementing RAG patterns.
- Familiarity with AI/ML frameworks and libraries relevant to application development (e.g., LangChain, LlamaIndex).
- Experience with vector databases (e.g., Pinecone, Chroma, Weaviate) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
Benefits:
- Competitive salary and performance-based bonuses.
- Professional development opportunities.

Key Responsibilities:
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems (a minimal Flask-style sketch follows this posting).
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
Required Skills and Qualifications:
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 2+ years of professional experience as a Python Developer.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
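To illustrate the API-integration responsibility above, here is a hedged sketch of serving a pre-trained model behind a small Flask endpoint. The model file name and request format are assumptions for demonstration only.

```python
import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)  # expects {"features": [[...], ...]}
    features = np.array(payload["features"], dtype=float)
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

The same handler could equally be exposed through Django Rest Framework or FastAPI; Flask is used here only to keep the sketch short.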

We are seeking a highly skilled Senior/Lead Data Scientist with deep expertise in AI/ML/Gen AI, including Deep Learning, Computer Vision, and NLP. The ideal candidate will bring strong hands-on experience, particularly in building, fine-tuning, and deploying models, and will work directly with customers with minimal supervision.
This role requires someone who can not only lead and execute technical projects but also actively contribute to business development through customer interaction, proposal building, and RFP responses. You will be expected to take ownership of AI project execution and team leadership, while helping Tekdi expand its AI footprint.
Key Responsibilities:
- Contribute to AI business growth by working on RFPs, proposals, and solutioning activities.
- Lead the team in delivering customer requirements, ensuring quality and timely execution.
- Develop and fine-tune advanced AI/ML models using deep learning and generative AI techniques.
- Fine-tune and optimize Large Language Models (LLMs) such as GPT, BERT, T5, and LLaMA.
- Interact directly with customers to understand their business needs and provide AI-driven solutions.
- Work with Deep Learning architectures including CNNs, RNNs, and Transformer-based models.
- Leverage NLP techniques such as summarization, NER, sentiment analysis, and embeddings (a brief transformers sketch follows this posting).
- Implement MLOps pipelines and deploy scalable AI solutions in cloud environments (AWS, GCP, Azure).
- Collaborate with cross-functional teams to integrate AI into business applications.
- Stay updated with AI/ML research and integrate new techniques into projects.
Required Skills & Qualifications:
- Minimum 6 years of experience in AI/ML/Gen AI, with at least 3+ years in Deep Learning/Computer Vision.
- Strong proficiency in Python and popular AI/ML frameworks (TensorFlow, PyTorch, Hugging Face, Scikit-learn).
- Hands-on experience with LLMs and generative models (e.g., GPT, Stable Diffusion).
- Experience with data preprocessing, feature engineering, and performance evaluation.
- Exposure to containerization and cloud deployment using Docker, Kubernetes.
- Experience with vector databases and RAG-based architectures.
- Ability to lead teams and projects, and work independently with minimal guidance.
- Experience with customer-facing roles, proposals, and solutioning.
Educational Requirements:
- Bachelor’s, Master’s, or PhD in Computer Science, Artificial Intelligence, Information Technology, or related field.
Preferred Skills (Good to Have):
- Knowledge of Reinforcement Learning (e.g., RLHF), multi-modal AI, or time-series forecasting.
- Familiarity with Graph Neural Networks (GNNs).
- Exposure to Responsible AI (RAI), AI Ethics, or AutoML platforms.
- Contributions to open-source AI projects or publications in peer-reviewed journals.
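As a brief, hedged illustration of the NLP techniques referenced above, the sketch below uses Hugging Face pipelines for summarization and sentiment analysis. The default models are downloaded on first use, and the example text is an illustrative assumption.

```python
from transformers import pipeline

summarizer = pipeline("summarization")
sentiment = pipeline("sentiment-analysis")

text = (
    "Large language models such as GPT, BERT, T5, and LLaMA are fine-tuned on "
    "domain data to power summarization, NER, sentiment analysis, and embeddings."
)

print(summarizer(text, max_length=30, min_length=10, do_sample=False)[0]["summary_text"])
print(sentiment("The customer loved the AI-driven solution.")[0])
```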

Job Description: AI/ML Specialist
We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization (a short quantization sketch follows this posting).
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
Required Skills and Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as an AI/ML Specialist.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
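To ground the quantization technique mentioned above, here is a hedged sketch of post-training dynamic quantization in PyTorch. The toy network is an illustrative assumption; real workloads would quantize a trained model.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Replace Linear layers with int8 dynamically quantized equivalents for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface as the original model, smaller weights
```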

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure—spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning.
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


- Strong AI/ML OR Software Developer Profile
- Mandatory (Experience 1) - Must have 3+ years of experience in core software development (SDLC)
- Mandatory (Experience 2) - Must have 2+ years of experience in AI/ML, preferably in the conversational AI domain (speech to text, text to speech, speech emotion recognition) or agentic AI systems.
- Mandatory (Experience 3) - Must have hands-on experience in fine-tuning LLMs/SLMs, model optimization (quantization, distillation), and RAG.
- Mandatory (Experience 4) - Hands-on programming experience in Python, TensorFlow, PyTorch, and model APIs (Hugging Face, LangChain, OpenAI, etc.)


Role Overview
We are seeking a passionate and skilled Machine Learning Engineer to join our team. The ideal candidate will have a strong background in machine learning, data science, and software engineering. As a Machine Learning Engineer, you will work closely with our clients and internal teams to develop, implement, and maintain machine learning models that solve real-world problems.
Must Have Skills
• 2+ years of experience in Computer Vision and NLP projects.
• 2+ years of experience in machine learning and Gen AI, data science, or a related field.
• Strong experience in Python programming
• Understanding of data structures, data modeling, and software architecture
• Deep knowledge of math, probability, statistics, and algorithms
• Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
• Excellent communication skills
• Ability to work in a team
• Outstanding analytical and problem-solving skills
• BSc in Computer Science, Mathematics or similar field; Master’s degree is a plus
Role and Responsibilities
• Study and transform data science prototypes
• Design machine learning systems
• Research and implement appropriate ML algorithms and tools
• Develop machine learning applications according to requirements
• Select appropriate datasets and data representation methods
• Run machine learning tests and experiments
• Perform statistical analysis and fine-tuning using test results (a short hyperparameter-tuning sketch follows this list)
• Train and retrain systems when necessary
• Extend existing ML libraries and frameworks
• Keep abreast of developments in the field
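The short sketch below illustrates running experiments and tuning against test results, as in the responsibilities above, using scikit-learn grid search. The dataset, model, and parameter grid are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
    cv=5,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```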


We are seeking a visionary and hands-on AI/ML and Chatbot Lead to spearhead the design, development, and deployment of enterprise-wide Conversational and Generative AI solutions. This role will be instrumental in establishing and scaling our AI Lab function, defining chatbot and multimodal AI strategies, and delivering intelligent automation solutions that enhance user engagement and operational efficiency.
Key Responsibilities
- Strategy & Leadership
- Define and lead the enterprise-wide strategy for Conversational AI, Multimodal AI, and Large Language Models (LLMs).
- Establish and scale an AI/Chatbot Lab, with a clear roadmap for innovation across in-app, generative, and conversational AI use cases.
- Lead, mentor, and scale a high-performing team of AI/ML engineers and chatbot developers.
- Architecture & Development
- Architect scalable AI/ML systems encompassing presentation, orchestration, AI, and data layers.
- Build multi-turn, memory-aware conversations using frameworks like LangChain or Semantic Kernel (a minimal multi-turn sketch follows this posting).
- Integrate chatbots with enterprise platforms such as Salesforce, NetSuite, Slack, and custom applications via APIs/webhooks.
- Solution Delivery
- Collaborate with business stakeholders to assess needs, conduct ROI analyses, and deliver high-impact AI solutions.
- Identify and implement agentic AI capabilities and SaaS optimization opportunities.
- Deliver POCs, pilots, and MVPs, owning the full design, development, and deployment lifecycle.
- Monitoring & Governance
- Implement and monitor chatbot KPIs using tools like Kibana, Grafana, and custom dashboards.
- Champion ethical AI practices, ensuring compliance with governance, data privacy, and security standards.
Must-Have Skills
- Experience & Leadership
- 10+ years of experience in AI/ML with demonstrable success in chatbot, conversational AI, and generative AI implementations.
- Proven experience in building and operationalizing AI/Chatbot architecture frameworks across enterprises.
- Technical Expertise
- Programming: Python
- AI/ML Frameworks & Libraries: LangChain, ElasticSearch, spaCy, NLTK, Hugging Face
- LLMs & NLP: GPT, BERT, RAG, prompt engineering, PEFT
- Chatbot Platforms: Azure OpenAI, Microsoft Bot Framework, CLU, CQA
- AI Deployment & Monitoring at Scale
- Conversational AI Integration: APIs, webhooks
- Infrastructure & Platforms
- Cloud: AWS, Azure, GCP
- Containerization: Docker, Kubernetes
- Vector Databases: Pinecone, Weaviate, Qdrant
- Technologies: Semantic search, knowledge graphs, intelligent document processing
- Soft Skills
- Strong leadership and team management
- Excellent communication and documentation
- Deep understanding of AI governance, compliance, and ethical AI practices
Good-to-Have Skills
- Familiarity with tools like Glean, Perplexity.ai, Rasa, XGBoost
- Experience integrating with Salesforce, NetSuite, and understanding of Customer Success domain
- Knowledge of RPA tools like UiPath and its AI Center
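As a minimal sketch of the multi-turn, memory-aware conversation responsibility above: the loop below keeps the running message history and replays it on every call. It uses the OpenAI Python client directly rather than LangChain or Semantic Kernel, and the model name and prompts are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "system", "content": "You are a helpful enterprise assistant."}]

def ask(user_message: str) -> str:
    # Append the user turn, send the full history, and remember the assistant's reply.
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("What is our refund policy?"))
print(ask("Summarize what you just told me in one sentence."))  # relies on the stored history
```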


Skill Sets:
- Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow)
- Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models
- Strong experience in NLP, fine-tuning transformer models, and dataset preparation
- Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (Sagemaker, Vertex AI)
- Experience in containerization (Docker, Kubernetes) and CI/CD pipelines
- Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning)
- Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection
Roles and Responsibilities:
- Design and implement end-to-end ML pipelines from data ingestion to production
- Develop, fine-tune, and optimize ML models, ensuring high performance and scalability
- Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.); a short metrics sketch follows this list
- Automate model retraining, monitoring, and drift detection
- Collaborate with engineering teams for seamless ML integration
- Mentor junior team members and enforce best practices
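A minimal sketch of comparing models on the metrics named above (F1-score, AUC-ROC) with scikit-learn; the synthetic data and candidate models are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    print(
        type(model).__name__,
        "F1:", round(f1_score(y_test, model.predict(X_test)), 3),
        "AUC-ROC:", round(roc_auc_score(y_test, proba), 3),
    )
```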


Job Title : Senior Machine Learning Engineer
Experience : 8+ Years
Location : Chennai
Notice Period : Immediate Joiners Only
Work Mode : Hybrid
Job Summary :
We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.
The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.
Mandatory Skills :
- Programming Languages : Python
- Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
- ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering (a brief clustering sketch follows this posting)
- Operating Systems : RHEL or any Unix-based OS
- Databases : Oracle or any relational database
- Version Control : Git
- Development Methodologies : Agile
Desired Skills :
- Experience with issue tracking tools such as Azure DevOps or JIRA.
- Understanding of data science concepts.
- Familiarity with Big Data algorithms, models, and libraries.
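A brief, hedged sketch of the clustering techniques named above (K-Means, DBSCAN) using scikit-learn; the synthetic blob data and parameters are illustrative assumptions.

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=42)

kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(X)

print("K-Means clusters found:", len(set(kmeans_labels)))
print("DBSCAN clusters found (excluding noise):", len(set(dbscan_labels) - {-1}))
```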


Job Description:
We are seeking a highly skilled Python Developer with expertise in Artificial Intelligence and Machine Learning to join our innovative AI team. The ideal candidate will design, build, and deploy machine learning solutions while writing clean, scalable Python code to power real-world applications.
🔧 Responsibilities:
- Develop and maintain robust Python applications focused on AI and ML use cases.
- Design, train, and evaluate ML models (e.g., regression, classification, NLP, or computer vision).
- Work with data scientists and ML engineers to productionize models using frameworks like Flask, FastAPI, or Docker.
- Optimize algorithms for performance, scalability, and accuracy.
- Build APIs and pipelines to integrate ML models into applications.
- Implement and maintain unit/integration tests and participate in code reviews.
- Use cloud platforms (AWS, Azure, GCP) for deploying AI/ML services.
💡 Required Skills:
- Strong proficiency in Python and experience with ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
- Experience with data processing tools (Pandas, NumPy, Spark).
- Understanding of ML lifecycle, including data collection, cleaning, feature engineering, model training, and evaluation.
- Experience building and consuming RESTful APIs.
- Familiarity with SQL/NoSQL databases (e.g., PostgreSQL, MongoDB).
- Version control using Git.


- Develop and maintain scalable back-end applications using Python frameworks such as Flask, Django, or FastAPI.
- Design, build, and optimize data pipelines for ETL processes using tools like PySpark, Airflow, and similar technologies (a minimal Airflow sketch follows this list).
- Work with relational and NoSQL databases to manage and process large datasets efficiently.
- Collaborate with data scientists to clean, transform, and prepare data for analytics and machine learning models.
- Work in a dynamic environment at the intersection of software development and data engineering.
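As referenced in the pipeline bullet above, here is a minimal Airflow DAG sketch for a daily ETL step. The DAG id, schedule, and task logic are illustrative assumptions (the `schedule` argument assumes Airflow 2.4+); a real pipeline would extract, transform, and load actual sources.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_transform_load():
    # Placeholder for real ETL logic: read from a source, transform with pandas/PySpark, load to a warehouse.
    print("running ETL step")

with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="etl", python_callable=extract_transform_load)
```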
Key Responsibilities
- Develop and maintain backend services and APIs using Java (Spring Boot preferred).
- Integrate Large Language Models (LLMs) and Generative AI models (e.g., OpenAI, Hugging Face, LangChain) into applications.
- Collaborate with data scientists to build data pipelines and enable intelligent application features.
- Design scalable systems to support AI model inference and deployment.
- Work with cloud platforms (AWS, GCP, or Azure) for deploying AI-driven services.
- Write clean, maintainable, and well-tested code.
- Participate in code reviews and technical discussions.
Required Skills
- 3–5 years of experience in Java development (preferably with Spring Boot).
- Experience working with RESTful APIs, microservices, and cloud-based deployments.
- Exposure to LLMs, NLP, or GenAI tools (OpenAI, Cohere, Hugging Face, LangChain, etc.).
- Familiarity with Python for data science/ML integration is a plus.
- Good understanding of software engineering best practices (CI/CD, testing, etc.).
- Ability to work collaboratively in cross-functional teams.

We are seeking a passionate and experienced Data Analyst Trainer to design, develop, and deliver training content for aspiring or existing data professionals. The trainer will be responsible for teaching core data analytics skills, tools, and industry practices to ensure trainees are job-ready or upskilled.




Job Overview:
We are looking for a skilled professional with:
- 7+ years of overall experience, including minimum 5 years in Computer Vision, Machine Learning, Deep Learning, and algorithm development.
- Proficiency in Data Science and Data Analysis techniques.
- Hands-on programming experience with Python, R, MATLAB or Octave.
- Experience with AI frameworks like TensorFlow, PySpark, Theano, and libraries such as PyTorch, Pandas, NumPy, etc.
- Strong understanding of algorithms like Regression, SVM, Decision Trees, KNN, and Neural Networks.
Key Skills & Attributes:
- Fast learner with strong problem-solving abilities
- Innovative thinking and approach
- Excellent communication skills
- High standards of integrity, accountability, and transparency
- Exposure to or experience with international work environments
Notice Period : Immediate to 30 days


- Design and implement cloud solutions, build MLOps on Azure
- Build CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools
- Review data science models; handle code refactoring and optimization, containerization, deployment, versioning, and quality monitoring
- Test and validate data science models and automate those tests
- Deploy code and pipelines across environments
- Track model performance metrics
- Track service performance metrics
- Communicate with a team of data scientists, data engineers, and architects; document the processes


Tech Lead(Fullstack) – Nexa (Conversational Voice AI Platform)
Location: Bangalore
Type: Full-time
Experience: 4+ years (preferably in early-stage startups)
Tech Stack: Python (core), Node.js, React.js
About Nexa
Nexa is a new venture by the founders of HeyCoach—Pratik Kapasi and Aditya Kamat—on a mission to build the most intuitive voice-first AI platform. We’re rethinking how humans interact with machines using natural, intelligent, and fast conversational interfaces.
We're looking for a Tech Lead to join us at the ground level. This is a high-ownership, high-speed role for builders who want to move fast and go deep.
What You’ll Do
● Design, build, and scale backend and full-stack systems for our voice AI engine
● Work primarily with Python (core logic, pipelines, model integration), and support full-stack features using Node.js and React.js
● Lead projects end-to-end—from whiteboard to production deployment
● Optimize systems for performance, scale, and real-time processing
● Collaborate with founders, ML engineers, and designers to rapidly prototype and ship features
● Set engineering best practices, own code quality, and mentor junior team members as we grow
✅ Must-Have Skills
● 4+ years of experience in Python, building scalable production systems
● Has led projects independently, from design through deployment
● Excellent at executing fast without compromising quality
● Strong foundation in system design, data structures and algorithms
● Hands-on experience with Node.js and React.js in a production setup
● Deep understanding of backend architecture—APIs, microservices, data flows
● Proven success working in early-stage startups, especially during 0→1 scaling phases
● Ability to debug and optimize across the full stack
● High autonomy—can break down big problems, prioritize, and deliver without hand-holding
🚀 What We Value
● Speed > Perfection: We move fast, ship early, and iterate
● Ownership mindset: You act like a founder, even if you're not one
● Technical depth: You’ve built things from scratch and understand what’s under the hood
● Product intuition: You don’t just write code—you ask if it solves the user’s problem
● Startup muscle: You’re scrappy, resourceful, and don’t need layers of process
● Bias for action: You unblock yourself and others. You push code and push thinking
● Humility and curiosity: You challenge ideas, accept better ones, and never stop learning
💡 Nice-to-Have
● Experience with NLP, speech interfaces, or audio processing
● Familiarity with cloud platforms (GCP/AWS), CI/CD, Docker, Kubernetes
● Contributions to open-source or technical blogs
● Prior experience integrating ML models into production systems
Why Join Nexa?
● Work directly with founders on a product that pushes boundaries in voice AI
● Be part of the core team shaping product and tech from day one
● High-trust environment focused on output and impact, not hours
● Flexible work style and a flat, fast culture


About FileSpin.io
FileSpin’s mission is to bring excellence and joy to the enterprise. We are a fully remote team spread across the UK, Europe and India. We bootstrapped in a garage (true story) and have been profitable from day one.
We value innovation and uncompromising professional excellence. Work at FileSpin is challenging, fun and highly rewarding. Come and be part of a unique company that is doing big things without the bloat.
About the Job
Location: Remote
We’re looking for a Junior and Senior Platform Engineer to join us and be on our ambitious growth journey. In this role, you’ll help build FileSpin into the most innovative AI-Enabled Digital Asset Management platform in the world. You'll have ample opportunities to work in areas solving awesome technical challenges and learning along the way.
Our roadmap focuses on creating an amazing API and UI, scaling our cloud infrastructure to deal with an order of magnitude higher media processing volume, implementing ML-pipelines and tuning the stack for high-performance.
Qualifications & Responsibilities
- Proficient in Troubleshooting and Infrastructure management
- Strong skills in Software Development and Programming
- Experience with Databases
- Excellent analytical and problem-solving skills
- Ability to work independently and remotely
- Bachelor's degree in Computer Science, Information Technology, or related field preferred
Essential skills
- Excellent Python Programming skills
- Good Experience with SQL
- Excellent experience with at least one web framework such as Tornado, Flask, or FastAPI
- Experience with Video encoding using ffmpeg, Image processing (GraphicsMagick, PIL)
- Good Experience with Git, CI/CD, DevOps tools
- Experience with React, TypeScript, HTML5/CSS3
Nice to have skills
- Experience in ML model training and deployments is a plus
- Web/Proxy servers (nginx/Apache/Traefik)
- SaaS stacks such as task queues, search engines, cache servers
The intangibles
- A culture that values your contribution and gives you autonomy
- Startup ethos, no useless meetings
- Continuous Learning Budget
- An entrepreneurial workplace, we value creativity and innovation
Interview Process
Qualifying test, introductory chat, technical round, HR discussion and job offer.


Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Experience in working with various ML libraries and packages like scikit-learn, NumPy, Pandas, TensorFlow, Matplotlib, Caffe, etc.
- Deep Learning Frameworks: PyTorch, spaCy, Keras
- Deep Learning Architectures: LSTM, CNN, Self-Attention and Transformers
- Experience working with image processing and computer vision is a must
- Designing data science applications: Large Language Models (LLMs), Generative Pre-trained Transformers (GPT), generative AI techniques, Natural Language Processing (NLP), machine learning techniques, Python, Jupyter Notebook, common data science packages (TensorFlow, scikit-learn, Keras, etc.), LangChain, Flask, FastAPI, prompt engineering.
- Programming experience in Python
- Strong written and verbal communications
- Excellent interpersonal and collaboration skills.
Role descriptions / Expectations from the Role
Design and implement scalable and efficient data architectures to support generative AI workflows.
Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models
Apply prompt engineering techniques as required by the use case
Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks
Lead junior data engineers on tasks such as data pipeline design, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.


Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Hands-on knowledge in machine learning, deep learning, TensorFlow, Python, NLP
- Stay up to date on the latest AI emergences relevant to the business domain.
- Conduct research and development processes for AI strategies.
- Experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs.
- Experience with transformer models such as BERT, GPT, RoBERTa, etc., and a solid understanding of their underlying principles is a plus
Good-to-Have
- Have knowledge of software development methodologies, such as Agile or Scrum
- Have strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face
- Ensure the quality of code and applications through testing, peer review, and code analysis.
- Root cause analysis and bugs correction
- Familiarity with version control systems, preferably Git.
- Experience with building or maintaining cloud-native applications.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is Plus


1. Design and implement scalable and efficient data architectures to support generative AI workflows.
2. Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models.
3. Apply prompt engineering techniques as required by the use case.
4. Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks.
5. Lead junior data engineers on tasks such as data pipeline design, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.



1. 3+ years' experience as a Python Developer/Designer with Machine Learning
2. Understanding of performance improvement; able to write effective, scalable code
3. Security and data protection solutions
4. Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
5. Knowledge of object-relational mapping (ORM)
6. Familiarity with front-end technologies (like JavaScript and HTML5)




Job description
Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist
Location: Remote (Global)
Compensation: $50 USD per month
Company: Meta2 Labs
www.meta2labs.com
About Meta2 Labs:
Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.
As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.
The Opportunity:
We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.
Responsibilities:
- Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
- Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
- Contribute to go-to-market strategies, business development, and partnership opportunities.
- Help shape company culture, structure, and team expansion.
- Be a thought partner and problem-solver in all key strategic discussions.
- Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
- Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.
Ideal Profile:
- Passion for emerging technologies (AI, Web3, XR, etc.).
- Comfortable operating in ambiguity and working lean.
- Strong strategic thinking, communication, and collaboration skills.
- Open to wearing multiple hats and learning as you build.
- Driven by purpose and eager to gain experience in a cutting-edge tech environment.
Commitment:
- Flexible, part-time involvement.
- Remote-first and async-friendly culture.
Why Join Meta2 Labs:
- Join a purpose-led studio at the frontier of tech innovation.
- Help build impactful ventures with real-world value and long-term potential.
- Shape your own role, focus, and future within a decentralized, founder-friendly structure.
- Be part of a collaborative, intellectually curious, and builder-centric culture.
Job Types: Part-time, Internship
Pay: $50 USD per month
Work Location: Remote
Job Types: Full-time, Part-time, Internship
Contract length: 3 months
Pay: Up to ₹5,000.00 per month
Benefits:
- Flexible schedule
- Health insurance
- Work from home
Work Location: Remote


Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Leadership Opportunities
Lead and mentor junior developers in the team
Drive projects independently while collaborating with the broader team
Act as a technical liaison between the team and stakeholders to deliver effective solutions
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2–5 years of relevant experience as a Software Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively


- Design, develop, and maintain data pipelines and ETL workflows on AWS platform
- Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics (an Athena automation sketch follows this list)
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards
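The sketch below shows one way to automate an Athena query with boto3, touching the S3/Athena services listed above. The region, database, query, and S3 output location are illustrative assumptions, not details from the job description.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

run = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)

# Poll until the query finishes, then fetch the first page of results.
query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows)} rows (including the header row)")
```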


Apply only if:
- You are an AI agent.
- OR you know how to build an AI agent that can do this job.
What You’ll Do: At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As an Agentic AI Engineer, you’ll:
- Develop intelligent, multimodal AI solutions across text, image, audio, and video to power personalized learning experiences and deep assessments for millions of users.
- Drive the future of live learning by building real-time interaction systems with capabilities like instant feedback, assistance, and personalized tutoring.
- Conduct proactive research and integrate the latest advancements in AI & agents into scalable, production-ready solutions that set industry benchmarks.
- Build and maintain robust, efficient data pipelines that leverage insights from millions of user interactions to create high-impact, generalizable solutions.
- Collaborate with a close-knit team of engineers, agents, founders, and key stakeholders to align AI strategies with LearnTube's mission.
About Us: At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders: LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes.
We’re proud to be recognized by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us? At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.


Are you passionate about the power of data and excited to leverage cutting-edge AI/ML to drive business impact? At Poshmark, we tackle complex challenges in personalization, trust & safety, marketing optimization, product experience, and more.
Why Poshmark?
As a leader in Social Commerce, Poshmark offers an unparalleled opportunity to work with extensive multi-platform social and commerce data. With over 130 million users generating billions of daily events and petabytes of rapidly growing data, you’ll be at the forefront of data science innovation. If building impactful, data-driven AI solutions for millions excites you, this is your place.
What You’ll Do
- Drive end-to-end data science initiatives, from ideation to deployment, delivering measurable business impact through projects such as feed personalization, product recommendation systems, and attribute extraction using computer vision.
- Collaborate with cross-functional teams, including ML engineers, product managers, and business stakeholders, to design and deploy high-impact models.
- Develop scalable solutions for key areas like product, marketing, operations, and community functions.
- Own the entire ML Development lifecycle: data exploration, model development, deployment, and performance optimization.
- Apply best practices for managing and maintaining machine learning models in production environments.
- Explore and experiment with emerging AI trends, technologies, and methodologies to keep Poshmark at the cutting edge.
Your Experience & Skills
- Ideal Experience: 6-9 years of building scalable data science solutions in a big data environment. Experience with personalization algorithms, recommendation systems, or user behavior modeling is a big plus.
- Machine Learning Knowledge: Hands-on experience with key ML algorithms, including CNNs, Transformers, and Vision Transformers. Familiarity with Large Language Models (LLMs) and techniques like RAG or PEFT is a bonus.
- Technical Expertise: Proficiency in Python, SQL, and Spark (Scala or PySpark), with hands-on experience in deep learning frameworks like PyTorch or TensorFlow. Familiarity with ML engineering tools like Flask, Docker, and MLOps practices.
- Mathematical Foundations: Solid grasp of linear algebra, statistics, probability, calculus, and A/B testing concepts (a small A/B test sketch follows this posting).
- Collaboration & Communication: Strong problem-solving skills and ability to communicate complex technical ideas to diverse audiences, including executives and engineers.
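To illustrate the A/B testing concept listed under Mathematical Foundations, here is a small, hedged sketch of a two-proportion z-test readout; the conversion counts are made-up numbers for demonstration only.

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [530, 584]       # conversions in control vs. variant
exposures = [10_000, 10_000]   # users exposed to each experience

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("no significant difference detected")
```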


AccioJob is conducting an offline hiring drive with Gaian Solutions India for the position of AI /ML Intern.
Required Skills - Python, SQL, ML libraries (scikit-learn, pandas, TensorFlow, etc.)
Apply Here - https://go.acciojob.com/tUxTdV
Eligibility -
- Degree: B.Tech/BE/BCA/MCA/M.Tech
- Graduation Year: 2023, 2024, and 2025
- Branch: All Branches
- Work Location: Hyderabad
Compensation -
- Internship stipend: 20-25k
- Internship duration: 3 months
- CTC: 4.5-6 LPA
Evaluation Process -
- Assessment at the AccioJob Skill Centre in Pune
- 2 Technical Interviews
Apply Here - https://go.acciojob.com/tUxTdV
Important: Please bring your laptop & earphones for the test.


🚀 Job Title : Python AI/ML Engineer
💼 Experience : 3+ Years
📍 Location : Gurgaon (Work from Office, 5 Days/Week)
📅 Notice Period : Immediate
Summary :
We are looking for a Python AI/ML Engineer with strong experience in developing and deploying machine learning models on Microsoft Azure.
🔧 Responsibilities :
- Build and deploy ML models using Azure ML.
- Develop scalable Python applications with cloud-first design.
- Create data pipelines using Azure Data Factory, Blob Storage & Databricks.
- Optimize performance, fix bugs, and ensure system reliability.
- Collaborate with cross-functional teams to deliver intelligent features.
✅ Requirements :
- 3+ Years of software development experience.
- Strong Python skills; experience with scikit-learn, pandas, NumPy.
- Solid knowledge of SQL and relational databases.
- Hands-on with Azure ML, Data Factory, Blob Storage.
- Familiarity with Git, REST APIs, Docker.


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.); a minimal A/B-testing sketch appears at the end of this posting
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
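Since the qualifications above call out A/B testing alongside core ML practice, here is a minimal, hedged sketch of a two-proportion z-test readout; the conversion counts are made-up numbers and statsmodels is an assumed dependency, not something the posting specifies.

```python
# Minimal sketch: two-proportion z-test for an A/B readout (numbers are invented).
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 540]        # control, variant
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # small p -> difference unlikely to be chance
```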

Data Architect/Engineer
Job Summary:
We are seeking an experienced Data Engineer/Architect to join our data and analytics team. The ideal candidate will have a strong background in data engineering, ETL pipeline development, and experience working with one or more data visualization tools (e.g., Power BI, Tableau, Looker). This role will involve designing, building, and maintaining scalable data solutions that empower business decision-making.
Experience: 8 to 12 yrs
Work location: JP Nagar 3rd phase, Bangalore.
Work type: work from office
Key Responsibilities:
- Define and maintain the overall data architecture strategy in line with business goals.
- Design and implement scalable, reliable, and secure data models, data lakes, and data warehouses.
- Design, develop, and maintain robust data pipelines and ETL workflows.
- Work with stakeholders to understand data requirements and translate them into technical solutions.
- Build and manage data models, data marts, and data lakes.
- Collaborate with BI and analytics teams to support dashboards and data visualizations.
- Ensure data quality, performance, and reliability across systems.
- Optimize data processing using modern cloud-based data platforms and tools.
- Support data governance and security best practices.
- Support the development of enterprise dashboards and reporting frameworks using tools like Power BI, Tableau, or Looker.
- Ensure compliance with data security and privacy regulations.
Required Skills & Qualifications:
- 8–12 years of experience in data engineering or related roles.
- Deep understanding of data modelling, database design, and data warehousing concepts.
- Technology evaluation & selection: execute proofs of concept and proofs of value for various technology solutions and frameworks.
- Strong knowledge of SQL, Python, and/or Scala.
- Experience with ETL tools (e.g., Apache Airflow, Talend, Informatica, dbt); a minimal Airflow sketch follows this list.
- Hands-on experience with cloud platforms (AWS, Azure, or GCP) and data services (e.g., Redshift, BigQuery, Snowflake).
- Exposure to one or more data visualization tools like Power BI, Tableau, Looker, or QlikView.
- Familiarity with data modeling, data warehousing, and real-time data streaming.
- Strong problem-solving and communication skills.
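To make the Airflow item above concrete, here is a minimal, hedged sketch of a daily ETL DAG; the DAG id, task bodies, and schedule are illustrative assumptions, and a recent Airflow 2.x install is assumed.

```python
# Minimal sketch (assumes Airflow 2.x): a three-step daily ETL DAG with placeholder tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw rows from the source system")   # placeholder logic

def transform():
    print("clean and model the rows")

def load():
    print("write the modelled rows to the warehouse")

with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```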
Preferred Qualifications:
- Experience working in Agile environments.
- Knowledge of CI/CD for data pipelines.
- Exposure to ML/AI data preparation is a plus.


Knowledge of the Gen AI technology ecosystem, including top-tier LLMs, prompt engineering, development frameworks such as LlamaIndex and LangChain, LLM fine-tuning, and experience architecting RAG and other LLM-based solutions for enterprise use cases (a minimal retrieval sketch follows this list).
1. Strong proficiency in programming languages like Python and SQL.
2. 3+ years of experience in predictive/prescriptive analytics, including machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks such as regression, classification, ensemble models, RNN, LSTM, GRU.
3. 2+ years of experience in NLP, text analytics, Document AI, OCR, sentiment analysis, entity recognition, and topic modeling.
4. Proficiency in LangChain and open LLM frameworks to perform summarization, classification, named entity recognition, and question answering.
5. Proficiency in generative techniques: prompt engineering, vector DBs, and LLMs such as OpenAI, LlamaIndex, Azure OpenAI; open-source LLMs will be important.
6. Hands-on experience in GenAI technology areas including RAG architecture, fine-tuning techniques, inferencing frameworks, etc.
7. Familiarity with big data technologies/frameworks.
8. Sound knowledge of Microsoft Azure.
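As a rough, framework-agnostic illustration of the retrieval step behind the RAG work described above, the sketch below embeds a handful of documents with sentence-transformers and searches them with FAISS; the documents, encoder name, and prompt format are invented stand-ins, and in practice LangChain or LlamaIndex would typically wrap this step.

```python
# Minimal sketch of RAG-style retrieval: embed documents, index them, retrieve for a query.
# Documents, encoder choice, and prompt wording are illustrative stand-ins.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are generated on the first business day of each month.",
    "Refunds are processed within 7 working days.",
    "Support tickets are triaged by severity and SLA.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")        # any sentence encoder would do
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])             # inner product on unit vectors = cosine
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "How long do refunds take?"
q_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(q_vec, dtype="float32"), k=2)

context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)   # this prompt would then be sent to an LLM (OpenAI, Azure OpenAI, an open model, ...)
```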

Job Summary:
We’re seeking an innovative and business-savvy AI Strategist to lead the integration of artificial intelligence across all departments within our organization. This role is ideal for someone who has a deep understanding of AI capabilities, trends, and tools — but is more focused on strategic implementation, process improvement, and cross-functional collaboration than on technical development.
As our AI Strategist, you'll identify high-impact opportunities where AI can streamline operations, improve efficiency, and unlock new value. You’ll work closely with stakeholders in operations, marketing, HR, customer service, finance, and more to evaluate needs, recommend solutions, and support the adoption of AI-powered tools
and workflows.
Key Responsibilities:
• Partner with department heads to assess workflows and identify opportunities for AI integration
• Develop and maintain a company-wide AI roadmap aligned with business goals
• Evaluate and recommend AI solutions and platforms (e.g., automation tools, chatbots, predictive analytics, NLP applications)
• Serve as a liaison between internal teams and external AI vendors or technical consultants
• Educate teams on AI use cases, capabilities, and best practices
• Oversee pilot programs and track the effectiveness of AI initiatives
• Ensure ethical and compliant AI use, including data privacy and bias mitigation
• Stay current on emerging AI trends and make recommendations to maintain a competitive edge
Qualifications:
• Bachelor’s or Master’s degree in Business, Data Science, Information Systems, or related fields
• Strong understanding of AI concepts, tools, and use cases across business functions
• 3+ years of experience in strategy, operations, digital transformation, or a related role
• Proven track record of implementing new technologies or process improvements
• Excellent communication and change management skills
• Ability to translate complex AI concepts into business value
• Strategic thinker with a data-driven mindset
• Bonus: Experience working with AI vendors, SaaS platforms, or enterprise AI tools

🚀 We’re Hiring! | AI/ML Engineer – Computer Vision
📍 Location: Noida | 🕘 Full-Time
🔍 What We’re Looking For:
• 4+ years in AI/ML (Computer Vision)
• Python, OpenCV, TensorFlow, PyTorch, etc.
• Hands-on with object detection, face recognition, classification
• Git, Docker, Linux experience
• Curious, driven, and ready to build impactful products
💡 Be part of a fast-growing team, build products used by brands like Biba, Zivame, Costa Coffee & more!

TL;DR
Founding Software Engineer (Next.js / React / TypeScript) — ₹17,000–₹24,000 net per month — 100% remote (India) — ~40 h/wk — green-field stack, total autonomy, ship every week. If you can own the full lifecycle and prove impact every Friday, apply.
🏢 Mega Style Apartments
We rent beautifully furnished 1- to 4-bedroom flats that feel like home but run like a hotel—so travellers can land, unlock the door, and live like locals from hour one. Tech is now the growth engine, and you’ll be employee #1 in engineering, laying the cornerstone for a tech platform that will redefine the premium furnished apartment experience.
✨ Why This Role Rocks
💡 Green-field Everything
Choose the stack, CI, even the linter.
🎯 Visible Impact & Ambition
Every deploy reaches real guests this week. Lay rails for ML that can boost revenue 20%.
⏱️ Radical Autonomy
Plan sprints, own deploys; no committees.
- Direct line to decision-makers → zero red tape
- Modern DX: Next.js & React (latest stable), Tailwind, Prisma/Drizzle, Vercel, optional AI copilots – building mostly server-rendered, edge-ready flows.
- Async-first, with structured weekly 1-on-1s to ensure you’re supported, not micromanaged.
- Unmatched Career Acceleration: Build an entire tech foundation from zero, making decisions that will define your trajectory and our company's success.
🗓️ Your Daily Rhythm
- Morning: Check metrics, pick highest-impact task
- Day: Build → ship → measure
- Evening: 10-line WhatsApp update (done, next, blockers)
- Friday: Live demo of working software (no mock-ups)
📈 Success Milestones
- Week 1: First feature in production
- Month 1: Automation that saves ≥10 h/week for ops
- Month 3: Core platform stable; conversion up, load times down (aiming for <1s LCP); ready for future ML pricing (stretch goal: +20% revenue within 12 months).
🔑 What You’ll Own
- Ship guest-facing features with Next.js (App Router / RSC / Server Actions).
- Automate ops—dashboards & LLM helpers that delete busy-work.
- Full lifecycle: idea → spec → code → deploy → measure → iterate.
- Set up CI/CD & observability on Vercel; a dedicated half-day refactor slot each sprint keeps tech-debt low.
- Optimise for outcomes—conversion, CWV, security, reliability; laying the groundwork for future capabilities in dynamic pricing and guest personalization.
Prototype > promise. Results > hours-in-chair.
💻 Must-Have Skills
Frontend Focus:
- Next.js (App Router/RSC/Server Actions)
- React (latest stable), TypeScript
- Tailwind CSS + shadcn/ui
- State mgmt (TanStack Query / Zustand / Jotai)
Backend & DevOps Focus:
- Node.js APIs, Prisma/Drizzle ORM
- Solid SQL schema design (e.g., PostgreSQL)
- Auth.js / Better-Auth, web security best practices
- GitHub Flow, automated tests, CI, Vercel deploys
- Excellent English; explain trade-offs to non-tech peers
- Self-starter—comfortable working as the only engineer (for now)
🌱 Nice-to-Haves (Learn Here or Teach Us)
A/B testing & CRO, Python/basic ML, ETL pipelines, Advanced SEO & CWV, Payment APIs (Stripe, Merchant Warrior), n8n automation
🎁 Perks & Benefits
- 100% remote anywhere in 🇮🇳
- Flexible hours (~40 h/wk)
- 12 paid days off (holiday + sick)
- ₹1,700/mo health insurance reimbursement (post-probation)
- Performance bonuses for measurable wins
- 6-month paid probation → permanent role & full benefits (this is a full-time employment role)
- Blank-canvas stack—your decisions live on
- Equity is not offered at this time; we compensate via performance bonuses and a clear path for growth, with future leadership opportunities as the company and engineering team scales.
⏩ Hiring Process (7–10 Days, Fast & Fair)
All stages are async & remote.
- Apply: 5-min form + short quiz (approx. 15 min total)
- Test 1: TypeScript & logic (1 h)
- Test 2: Next.js / React / Node / SQL deep-dive (1 h)
- Final: AI Video interview (1 h)
🚫 Who Shouldn’t Apply
- Need daily hand-holding
- Prefer consensus to decisions
- Chase perfect code over shipped value
- “Move fast & learn” culture feels scary
🚀 Ready to Own the Stack?
If you read this and thought “Finally—no bureaucracy,” and you're ready to set the technical standard for a growing company, show us something you’ve built and apply here →


We are looking for a Senior AI/ML Engineer with expertise in Generative AI (GenAI) integrations, APIs, and Machine Learning (ML) algorithms who should have strong hands-on experience in Python and statistical and predictive modeling.
Key Responsibilities:
• Develop and integrate GenAI solutions using APIs and custom models.
• Design, implement, and optimize ML algorithms for predictive modeling and data-driven insights.
• Leverage statistical techniques to improve model accuracy and performance.
• Write clean, well-documented, and testable code while adhering to coding standards and best practices.
Required Skills:
• 4+ years of experience in AI/ML, with a strong focus on GenAI integrations and APIs.
• Proficiency in Python, including libraries like TensorFlow, PyTorch, Scikit-learn, and Pandas.
• Strong expertise in statistical modeling and ML algorithms (Regression, Classification, Clustering, NLP, etc.).
• Hands-on experience with RESTful APIs and AI model deployment.
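As one possible shape of the GenAI API integration work this role describes, below is a minimal sketch using the OpenAI Python SDK (v1+); the model name and prompt are placeholders, and an OPENAI_API_KEY in the environment is assumed.

```python
# Minimal sketch: calling a hosted LLM through the OpenAI Python SDK (v1+).
# Model name and prompt are placeholders; the API key is read from the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",                       # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise analytics assistant."},
        {"role": "user", "content": "Summarise last quarter's churn drivers in two bullet points."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```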

Job Title : AI/ML Engineer – DevOps & Cloud Automation
Experience : 3+ Years
Location : Gurgaon (WFO)
Job Summary :
We’re looking for a talented AI/ML Engineer to help build an AI-driven DevOps automation platform. The ideal candidate has hands-on experience in ML, NLP, and cloud automation.
Key Responsibilities :
- Develop AI/ML models for predictive analytics, anomaly detection, and automation (an anomaly-detection sketch follows this list)
- Build NLP bots, observability tools, and real-time monitoring systems
- Analyze system logs/metrics and automate workflows
- Integrate AI with DevOps pipelines, cloud-native apps, and APIs
- Research & apply deep learning, generative AI, reinforcement learning.
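To illustrate the anomaly-detection responsibility above, here is a minimal sketch using scikit-learn's IsolationForest on synthetic service metrics; the metric columns, values, and contamination rate are invented assumptions.

```python
# Minimal sketch: flag anomalous service metrics with IsolationForest (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [cpu_percent, latency_ms] -- stand-ins for real telemetry
normal = rng.normal(loc=[40.0, 120.0], scale=[5.0, 15.0], size=(500, 2))
spikes = rng.normal(loc=[95.0, 900.0], scale=[2.0, 50.0], size=(5, 2))
metrics = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(metrics)
flags = detector.predict(metrics)                 # -1 marks suspected anomalies
print("anomalous rows:", np.where(flags == -1)[0])
```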
Requirements :
- 3+ years in AI/ML, ideally in DevOps/cloud/security environments.
- Strong in Python, TensorFlow/PyTorch, NLP, and LLMs.
- Experience with AWS/GCP/Azure, Kubernetes, MLOps, and CI/CD.
- Knowledge of cybersecurity, big data, and real-time systems.
- Bonus : AIOps, RAG, blockchain AI, federated learning.


Project Overview
Be part of developing "Fenrir Security" - a groundbreaking autonomous security testing platform. We're creating an AI-powered security testing solution that integrates with an Electron desktop application. This contract role offers the opportunity to build cutting-edge autonomous agent technology for security testing applications.
Contract Details
- Duration: Initial 4-month contract with possibility of extension
- Work Arrangement: Remote with regular online collaboration
- Compensation: Competitive rates based on experience (₹1,00,000-₹1,80,000 monthly)
- Hours: Flexible, approximately 40 hours weekly
Role & Responsibilities
- Develop the core autonomous agent architecture for security testing
- Design and implement the agent's planning and execution capabilities
- Create natural language interfaces for security test configuration
- Build knowledge representation systems for security testing methodologies
- Implement security vulnerability detection and analysis components
- Integrate autonomous capabilities with the Electron application
- Create learning mechanisms to improve testing efficacy over time
- Collaborate with security expert to encode testing approaches
- Deliver functional autonomous testing components at regular milestones
- Participate in technical planning and architecture decisions
Skills & Experience
- 3+ years of AI/ML development experience
- Strong background in autonomous agent systems or similar AI architectures
- Experience with LLM integration and prompt engineering
- Proficiency in Python and relevant AI/ML frameworks
- Knowledge of natural language processing techniques
- Understanding of machine learning approaches for security applications (preferred)
- Ability to work independently with minimal supervision
- Strong problem-solving abilities and communication skills
Why Join Us
- Work at the cutting edge of AI and cybersecurity technology
- Flexible working arrangements and competitive compensation
- Opportunity to solve novel technical challenges
- Potential for equity or profit-sharing in future funding rounds
- Build portfolio-worthy work in an innovative field
Selection Process
- Initial screening call
- Technical assessment (paid task)
- Final interview with founder
- Contract discussion and onboarding


Title: Senior Software Engineer – Python (Remote: Africa, India, Portugal)
Experience: 9 to 12 Years
INR : 40 LPA - 50 LPA
Location Requirement: Candidates must be based in Africa, India, or Portugal. Applicants outside these regions will not be considered.
Must-Have Qualifications:
- 8+ years in software development with expertise in Python
- Kubernetes experience is important
- Strong understanding of async frameworks (e.g., asyncio)
- Experience with FastAPI, Flask, or Django for microservices (a minimal async endpoint sketch follows this list)
- Proficiency with Docker and Kubernetes/AWS ECS
- Familiarity with AWS, Azure, or GCP and IaC tools (CDK, Terraform)
- Knowledge of SQL and NoSQL databases (PostgreSQL, Cassandra, DynamoDB)
- Exposure to GenAI tools and LLM APIs (e.g., LangChain)
- CI/CD and DevOps best practices
- Strong communication and mentorship skills
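To ground the asyncio and FastAPI requirements above, here is a minimal sketch of an async microservice endpoint; the route, data source, and scoring logic are invented stand-ins.

```python
# Minimal sketch: an async FastAPI endpoint; run with `uvicorn main:app --reload` (assumed setup).
import asyncio
from fastapi import FastAPI

app = FastAPI()

async def fetch_score(item_id: int) -> float:
    await asyncio.sleep(0.05)            # stand-in for an async DB or model call
    return item_id * 0.1

@app.get("/scores/{item_id}")
async def get_score(item_id: int) -> dict:
    score = await fetch_score(item_id)
    return {"item_id": item_id, "score": score}
```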


What you will be doing at Webkul?
- Python Proficiency and API Integration:
- Demonstrate strong proficiency in the Python programming language.
- Design and implement scalable, efficient, and maintainable code for machine learning applications.
- Integrate machine learning models with APIs to facilitate seamless communication between different software components.
- Machine Learning Model Deployment, Training, and Performance:
- Develop and deploy machine learning models for real-world applications.
- Conduct model training, optimization, and performance evaluation.
- Collaborate with cross-functional teams to ensure the successful integration of machine learning solutions into production systems.
- Large Language Model Understanding and Integration:
- Possess a deep understanding of large language models (LLMs) and their applications.
- Integrate LLMs into existing systems and workflows to enhance natural language processing capabilities.
- Stay abreast of the latest advancements in large language models and contribute insights to the team.
- Langchain and RAG-Based Systems (e.g., LLamaindex):
- Familiarity with Langchain and RAG-based systems, such as LLamaindex, will be a significant advantage.
- Work on the design and implementation of systems that leverage Langchain and RAG-based approaches for enhanced performance and functionality.
- LLM Integration with Vector Databases (e.g., Pinecone):
- Experience in integrating large language models with vector databases, such as Pinecone, for efficient storage and retrieval of information.
- Optimize the integration of LLMs with vector databases to ensure high-performance and low-latency interactions.
- Natural Language Processing (NLP):
- Expertise in NLP techniques such as tokenization, named entity recognition, sentiment analysis, and language translation.
- Experience with NLP libraries and frameworks like NLTK, SpaCy, Hugging Face Transformers (a short NER sketch follows this list)
- Computer Vision:
- Proficiency in computer vision tasks such as image classification, object detection, segmentation, and image generation.
- Experience with computer vision libraries like OpenCV, PIL, and frameworks like TensorFlow, PyTorch, and Keras.
- Deep Learning:
- Strong understanding of deep learning concepts and architectures, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
- Proficiency in using deep learning frameworks like TensorFlow, PyTorch, and Keras.
- Experience with model optimization, hyperparameter tuning, and transfer learning.
- Data Manipulation:
- Strong skills in data manipulation and analysis using libraries like Pandas, NumPy, and SciPy.
- Proficiency in data cleaning, preprocessing, and augmentation techniques.
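As a small illustration of the NLP work listed above, the sketch below runs named entity recognition with spaCy; it assumes the `en_core_web_sm` model has been downloaded, and the example sentence is invented.

```python
# Minimal sketch: named entity recognition with spaCy.
# Assumes: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp opened a new fulfilment centre in Delhi last March.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. "Acme Corp" ORG, "Delhi" GPE, "last March" DATE
```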


Title: Data Engineer II (Remote – India/Portugal)
Exp: 4- 8 Years
CTC: up to 30 LPA
Required Skills & Experience:
- 4+ years in data engineering or backend software development
- AI/ML experience is important
- Expert in SQL and data modeling
- Strong Python, Java, or Scala coding skills
- Experience with Snowflake, Databricks, AWS (S3, Lambda)
- Background in relational and NoSQL databases (e.g., Postgres)
- Familiar with Linux shell and systems administration
- Solid grasp of data warehouse concepts and real-time processing
- Excellent troubleshooting, documentation, and QA mindset
If interested, kindly share your updated CV to 82008 31681
Job Title: Senior AIML Engineer – Immediate Joiner (AdTech)
Location: Pune – Onsite
About Us:
We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!
Role Overview:
We are looking for a highly skilled Senior AIML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.
Key Responsibilities:
- Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis (a small illustrative model sketch follows this list).
- Collaborate with data scientists and engineers to build scalable AI-driven solutions.
- Analyze large volumes of data to extract meaningful insights and improve ad performance.
- Develop and deploy machine learning pipelines for automated decision-making.
- Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
- Optimize existing models for speed, scalability, and accuracy.
- Work closely with product managers to align AI solutions with business goals.
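To illustrate the kind of ad-optimization modelling referenced in the first responsibility, here is a minimal click-through-rate style classifier on synthetic data; the features, labels, and evaluation are invented assumptions, not details from this role.

```python
# Minimal sketch: a CTR-style classifier on synthetic data (features are stand-ins
# for signals such as bid, placement, hour, device, and frequency).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
weights = np.array([0.8, -0.3, 0.5, 0.1, -0.6])
y = (X @ weights + rng.normal(scale=0.5, size=2000) > 0).astype(int)   # synthetic click labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```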
Requirements:
- Minimum 4-6 years of experience in AIML, with a focus on AdTech (Mandatory).
- Strong programming skills in Python, R, or similar languages.
- Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Expertise in data processing and real-time analytics.
- Strong understanding of digital advertising, programmatic platforms, and ad server technology.
- Excellent problem-solving and analytical skills.
- Immediate joiners preferred.
Preferred Skills:
- Knowledge of big data technologies like Spark, Hadoop, or Kafka.
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with MLOps practices and tools.
How to Apply:
If you are a passionate AIML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.
Join us in building the future of AI-driven digital advertising!


Job description:
Design, develop, and deploy ML models.
Build scalable AI solutions for real-world problems.
Optimize model performance and infrastructure.
Collaborate with the Technical Team and execute any other tasks assigned by the company/its representatives.
Required Candidate profile:
Strong Python & ML frameworks (TensorFlow/PyTorch).
Experience with data pipelines & model deployment.
Problem-solving & teamwork skills.
Passion for AI innovation.
Perks and benefits:
Learning Environment, Guidance & Support



Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Lead Data Scientist who will be responsible for
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 9 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

What You will do:
● Create beautiful software experiences for our clients using design thinking, lean, and agile methodology.
● Work on software products designed from scratch using the latest cutting-edge technologies, platforms, and languages such as NodeJS and JavaScript.
● Work in a dynamic, collaborative, transparent, non-hierarchical culture.
● Work in collaborative, fast-paced, and value-driven teams to build innovative customer experiences for our clients.
● Help to grow the next generation of developers and have a positive impact on the industry.
Basic Qualifications :
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages such as NodeJS.
● Server-side development experience, mainly in NodeJS, is considered a significant plus.
● UI development experience in AngularJS
● Passion for software engineering and following best coding practices.
● Good to great problem-solving and communication skills.
Nice to have Qualifications:
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with DevOps, continuous integration & deployment.
● Exposure to big data technologies, Machine Learning, and NLP will be a plus.