50+ Artificial Intelligence (AI) Jobs in India
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is looking for enthusiastic Technology Interns from Tier 1 colleges who are eager to build hands-on experience across web technologies, cloud platforms, and emerging technologies such as AI/ML. This role is ideal for final-year students (2026 pass-outs) or fresh graduates seeking real-world exposure in a fast-growing, technology-driven organization.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final-semester students (2026 pass-outs) or recent graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills
- Basic understanding of AI / Machine Learning concepts
- Exposure to AWS (deployment or cloud fundamentals)
- PHP development
- WordPress development and customization
- JavaScript (ES5 / ES6+)
- jQuery
- AJAX calls and asynchronous handling
- Event handling
- HTML5 & CSS3
- Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for wholesale distribution, manufacturing, and specialty retail.
Unilog's Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Designation: AI Architect
Location: Bangalore/Mysore/Remote
Job Type: Full-time
Department: Software R&D
About the Role
We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.
As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation.
Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques.
Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows (see the sketch after this list).
Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
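To make the RAG item above concrete, here is a minimal, library-agnostic sketch of a retrieval step. The `embed()` and `generate()` helpers are hypothetical stand-ins for whatever embedding model and LLM provider the team chooses, and cosine similarity over an in-memory corpus stands in for a real vector database.

```python
import numpy as np

def embed(texts):
    """Hypothetical embedding helper; in practice this would call an
    embedding model (e.g. a sentence-transformer or a hosted API)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))   # placeholder vectors

def generate(prompt):
    """Hypothetical LLM call; stands in for whichever provider is used."""
    return f"[LLM answer grounded in:\n{prompt}]"

def retrieve(query, corpus, corpus_vecs, k=3):
    # Cosine similarity between the query vector and every document vector.
    q = embed([query])[0]
    sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-sims)[:k]
    return [corpus[i] for i in top]

def rag_answer(query, corpus, corpus_vecs):
    # Ground the prompt in the retrieved passages before calling the model.
    context = "\n".join(retrieve(query, corpus, corpus_vecs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

corpus = ["Product A supports SSO.", "Product B ships a REST API.", "Pricing is per seat."]
print(rag_answer("Does Product A support single sign-on?", corpus, embed(corpus)))
```

In a production setting the in-memory arrays would be replaced by one of the vector stores named in this posting (Pinecone, FAISS, Weaviate, PGVector), but the retrieve-then-ground control flow stays the same.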
Required Qualifications
- 8-14 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About our Benefits:
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.
About MyOperator
MyOperator is a Business AI Operator and category leader that unifies WhatsApp, Calls, and AI-powered chatbots & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement.
Role Summary
We’re hiring a Front Deployed Engineer (FDE)—a customer-facing, field-deployed engineer who owns the end-to-end delivery of AI bots/agents.
This role is “frontline”: you’ll work directly with customers (often onsite), translate business reality into bot workflows, do prompt engineering + knowledge grounding, ship deployments, and iterate until it works reliably in production.
Think: solutions engineer + implementation engineer + prompt engineer, with a strong bias for execution.
Responsibilities
Requirement Discovery & Stakeholder Interaction
- Join customer calls alongside Sales and Revenue teams.
- Ask targeted questions to understand business objectives, user journeys, automation expectations, and edge cases.
- Identify data sources (CRM, APIs, Excel, SharePoint, etc.) required for the solution.
- Act as the AI subject-matter expert during client discussions.
Use Case & Solution Documentation
- Convert discussions into clear, structured use case documents, including:
- Problem statement & goals.
- Current vs. proposed conversational flows.
- Chatbot conversation logic, integrations, and dependencies.
- Assumptions, limitations, and success criteria.
Customer Delivery Ownership
Own deployment of AI bots for customer use-cases (lead qualification, support, booking, etc.). Run workshops to capture processes, FAQs, edge cases, and success metrics. Drive the go-live process, from requirements through monitoring and improvement.
Prompt Engineering & Conversation Design
Craft prompts, tool instructions, guardrails, fallbacks, and escalation policies for stable behavior. Build structured conversational flows: intents, entities, routing, handoff, and compliant responses. Create reusable prompt patterns and "prompt packs."
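As an illustration of what a reusable "prompt pack" might look like in practice, the sketch below structures a system prompt, guardrails, fallback, and escalation policy as plain Python data. Every field name, threshold, and value is a hypothetical example, not a MyOperator platform API.

```python
# Illustrative "prompt pack" for a lead-qualification bot.
# All field names and values are hypothetical examples, not a platform API.
LEAD_QUALIFICATION_PACK = {
    "system_prompt": (
        "You are a lead-qualification assistant for {brand}. "
        "Ask for name, company size, and use case. "
        "Never quote prices; never promise delivery dates."
    ),
    "guardrails": {
        "blocked_topics": ["pricing negotiations", "legal advice"],
        "max_turns_before_handoff": 8,
    },
    "fallback": "Sorry, I didn't catch that. Could you rephrase?",
    "escalation": {
        "trigger_intents": ["speak_to_human", "complaint"],
        "route_to": "support_queue",
    },
}

def render_system_prompt(pack, brand):
    # Fill brand-specific details into the shared template.
    return pack["system_prompt"].format(brand=brand)

print(render_system_prompt(LEAD_QUALIFICATION_PACK, "Acme Clinics"))
```

Keeping the policy in data like this is one way reusable prompt patterns can be versioned, reviewed, and rolled out across customers.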
Testing, Debugging & Iteration
Analyze logs to find failure modes (misclassification, hallucination, poor handling). Create test sets ("golden conversations"), run regressions, and measure improvements. Coordinate with Product/Engineering for platform needs.
Integrations & Technical Coordination
Integrate bots with APIs/webhooks (CRM, ticketing, internal tools) to complete workflows. Troubleshoot production issues and coordinate fixes/root-cause analysis.
What Success Looks Like
- Customer bots go live quickly and show high containment + high task completion with low escalation.
- You can diagnose failures from transcripts/logs and fix them with prompt/workflow/knowledge changes.
- Customers trust you as the “AI delivery owner”—clear communication, realistic timelines, crisp execution.
Requirements (Must Have)
- 2–5 years in customer-facing delivery roles: implementation, solutions engineering, customer success engineering, or similar.
- Hands-on comfort with LLMs and prompt engineering (structured outputs, guardrails, tool use, iteration).
- Strong communication: workshops, requirement capture, crisp documentation, stakeholder management.
- Technical fluency: APIs/webhooks concepts, JSON, debugging logs, basic integration troubleshooting.
- Willingness to be front deployed (customer calls/visits as needed).
Good to Have (Nice to Have)
- Experience with chatbots/voicebots, IVR, WhatsApp automation, or conversational AI platforms, with at least a couple of delivered projects.
- Understanding of metrics like containment, resolution rate, response latency, CSAT drivers.
- Prior SaaS onboarding/delivery experience in mid-market or enterprises.
Working Style & Traits We Value
- High agency: you don’t wait for perfect specs—you create clarity and ship.
- Customer empathy + engineering discipline.
- Strong bias for iteration: deploy → learn → improve.
- Calm under ambiguity (real customer environments are chaotic by default).

Client is at the cutting edge of AI, psychology, and large-scale data. We believe that we have an opportunity (and even a responsibility) to personalize and humanize how people interact over the internet, and an opportunity to inspire far more trustworthy relationships online than has ever been possible before.
● 10+ years of experience in successfully building, deploying, and running complex, large-scale web or data products.
● Proven Management Experience: Demonstrated success managing a team of 5+ engineers for at least 2 years (managing timelines, performance, and hiring). You know how to transition a team from 'startup chaos' to 'structured agility'.
● Full-stack Authority: Deep expertise with JavaScript, Node.js, MySQL, and Python. You must have world-class expertise in at least one area but possess a solid understanding of the entire stack in a multi-tier environment.
● Architectural Track Record: Has built at least two professional-grade products as the tech owner/architect and led the delivery of complex products from conception to release.
● Experience in working with REST APIs, Machine Learning, Algorithms & AWS.
● Familiarity with visualization libraries and database technologies.
● Your reputation in the technology community within your domain.
● Your participation and success in competitive programming.
● Work on unusual/extraordinary hobby projects during school/college that were not part of the curriculum.
● The school that you come from and organizations where you have worked earlier.
Technical Trainer – Pollachi
Willing to travel within a 30 km radius of Pollachi.
Job Description: Technical Trainer
Expertise: HTML, CSS, JavaScript, Python, Artificial Intelligence (AI), Machine Learning (ML), IoT, and Robotics (optional).
Work Location: Flexible (Work from Home & Office available)
Target Audience: School students and teachers
Employment Type: Full-time
Key Responsibilities:
* Develop and deliver content in an easy-to-understand format suitable for varying audience levels.
* Prepare training materials, exercises, and assessments to evaluate participant progress and measure their learning outcomes. Adapt teaching methods to suit both in-person (office) and virtual (work-from-home) formats.
* Stay updated with the latest trends and tools in technology to ensure high-quality training delivery.
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it open to discussion which tools will solve the problem in the most robust and quickest way.
- That being said, our bright team of engineers has already constructed a formidable arsenal of tools which helps us fortify our defense and always play on the offensive. Take a look at the Tech Stack we already use.
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK); a minimal sketch follows this list.
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
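For the observability point above, here is a minimal sketch using the prometheus_client library (one possible choice among the tools listed) to expose a request counter and a latency histogram from a Python service. The metric names, labels, endpoint, and port are illustrative assumptions, not part of the actual stack.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names and labels are illustrative, not prescribed by the role.
REQUESTS = Counter("app_requests_total", "Requests handled", ["endpoint", "status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    # Time the handler and count the outcome so dashboards can track both.
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for real work
        status = "200"
    REQUESTS.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(9100)   # Prometheus scrapes metrics at :9100/metrics
    while True:
        handle_request("/convert")
```

The same pattern generalizes to traces and structured logs; the point is that reliability work starts with instrumenting the code paths that matter.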
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of 2+ use cases out of: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC; see the evaluation sketch after this list)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
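The fraud/risk item above calls out PR-AUC under severe class imbalance. As a minimal illustration (synthetic data, scikit-learn assumed available; nothing here reflects the company's actual models), one might compare PR-AUC against plain accuracy like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced "fraud" data: roughly 1-3% positives.
rng = np.random.default_rng(42)
X = rng.normal(size=(20000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=20000) > 4.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]
print("positive rate:", y_te.mean())
print("accuracy     :", accuracy_score(y_te, scores > 0.5))      # misleading when classes are imbalanced
print("PR-AUC       :", average_precision_score(y_te, scores))   # the metric this role highlights
```

On data like this, a model can look strong on accuracy while doing little for the rare positive class, which is exactly why PR-AUC is the metric named in the requirement.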
About the Company
SimplyFI Softech India Pvt. Ltd. is a product-led company working across AI, Blockchain, and Cloud. The team builds intelligent platforms for fintech, SaaS, and enterprise use cases, focused on solving real business problems with production-grade systems.
Role Overview
This role is for someone who enjoys working hands-on with data and machine learning models. You’ll support real-world AI use cases end to end, from data prep to model integration, while learning how AI systems are built and deployed in production.
Key Responsibilities
- Design, develop, and deploy machine learning models with guidance from senior engineers
- Work with structured and unstructured datasets for cleaning, preprocessing, and feature engineering
- Implement ML algorithms using Python and standard ML libraries
- Train, test, and evaluate models and track performance metrics
- Assist in integrating AI/ML models into applications and APIs
- Perform basic data analysis and visualization to extract insights
- Participate in code reviews, documentation, and team discussions
- Stay updated on ML, AI, and Generative AI trends
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, AI, Data Science, or a related field
- Strong foundation in Python
- Clear understanding of core ML concepts: supervised and unsupervised learning
- Hands-on exposure to NumPy, Pandas, and Scikit-learn
- Basic familiarity with TensorFlow or PyTorch
- Understanding of data structures, algorithms, and statistics
- Good analytical thinking and problem-solving skills
- Comfortable working in a fast-moving product environment
Good to Have
- Exposure to NLP, Computer Vision, or Generative AI
- Experience with Jupyter Notebook or Google Colab
- Basic knowledge of SQL or NoSQL databases
- Understanding of REST APIs and model deployment concepts
- Familiarity with Git/GitHub
- AI/ML internships or academic projects
Role Overview:
We are looking for a PHP & Angular Developer to build and maintain scalable full-stack web applications. The role requires strong backend expertise in PHP, solid frontend development using Angular, and exposure or interest in AI/ML-powered features.
Key Responsibilities:
- Develop and maintain applications using PHP and Angular
- Build and consume RESTful APIs
- Create reusable Angular components using TypeScript
- Work with MySQL/PostgreSQL databases
- Collaborate with Product, QA, and AI/ML teams
- Integrate AI/ML APIs where applicable
- Ensure performance, security, and scalability
- Debug and resolve production issues
Required Skills:
- 5–7 years of experience in PHP development
- Strong hands-on with Laravel / CodeIgniter
- Experience with Angular (v10+)
- HTML, CSS, JavaScript, TypeScript
- REST APIs, JSON
- MySQL / PostgreSQL
- Git, MVC architecture
Good to Have:
- Exposure to AI/ML concepts or API integrations
- Python-based ML services (basic)
- Cloud platforms (AWS / Azure / GCP)
- Docker, CI/CD
- Agile/Scrum experience
- Product/start-up background
About Role
We are looking for a hands-on Python Engineer with strong experience in backend development, AI-driven systems, and cloud infrastructure. The ideal candidate should be comfortable working across Python services, AI/ML pipelines, and cloud-native environments, and capable of building production-grade, scalable systems.
This role offers high ownership, exposure to real-world AI systems, and long-term growth, making it ideal for engineers who want to build meaningful products rather than just features.
Key Responsibilities
- Design, develop, and maintain scalable backend services using Python
- Build APIs and services using FastAPI, Flask, or Django (a minimal sketch follows this list)
- Ensure performance, reliability, and scalability of backend systems
- Integrate AI/ML models into production systems (model inference, automation)
- Build and maintain AI pipelines for data processing and inference
- Deploy and manage applications on AWS, with exposure to GCP and Azure
- Implement CI/CD pipelines, containerization, and cloud deployments
- Collaborate with product, frontend, and AI teams on end-to-end delivery
- Optimize cloud infrastructure for cost, performance, and reliability
- Follow best practices for security, monitoring, and logging
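As a small sketch of the API-plus-inference work described above (FastAPI chosen from the frameworks listed; the model, route names, and scoring logic are hypothetical placeholders):

```python
from typing import Dict, List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service")

class ScoreRequest(BaseModel):
    features: List[float]

class ScoreResponse(BaseModel):
    score: float

def predict(features: List[float]) -> float:
    # Placeholder for a real model loaded at startup (e.g. scikit-learn or an LLM call).
    return sum(features) / (len(features) or 1)

@app.get("/health")
def health() -> Dict[str, str]:
    # Liveness endpoint for load balancers and monitoring.
    return {"status": "ok"}

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    return ScoreResponse(score=predict(req.features))

# Run locally with:  uvicorn main:app --reload
```

In practice the placeholder `predict` would be swapped for real model inference, and the service would be containerized and deployed through the CI/CD and cloud workflows listed in the responsibilities.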
Required Qualifications
- 2–4 years of professional experience in Python development
- Strong understanding of backend frameworks: FastAPI, Flask, Django
- Hands-on experience integrating AI/ML systems into applications
- Solid experience with AWS (EC2, S3, Lambda, RDS, IAM)
- Exposure to Google Cloud Platform (GCP) and Microsoft Azure
- Experience with Docker and CI/CD workflows
- Understanding of scalable system design principles
- Strong problem-solving and debugging skills
- Ability to work collaboratively in a product-driven environment
Perks and Benefits
- Work in a Nikhil Kamath-funded startup
- ₹3 – ₹4.6 LPA with ESOPs linked to performance and tenure
- Opportunity to build long-term wealth through ESOP participation
- Work on production-scale AI systems used in real-world applications
- Hands-on experience with AWS, GCP, and Azure architectures
- Work with a team that values clean engineering, experimentation, and execution
- Exposure to modern backend frameworks, AI pipelines, and DevOps practices
- High autonomy, fast decision-making, and real ownership of features and systems
Job Title: AI/ML Engineer – Voice (2–3 Years)
Location: Bengaluru (On-site)
Employment Type: Full-time
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI Engineer – Voice with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents (see the sketch after this list).
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization).
- Implement voice understanding modules, speech intent extraction, and context tracking.
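As a rough, framework-agnostic sketch of the real-time pipeline described above: the `transcribe_stream`, `llm_reply`, and `synthesize` helpers are hypothetical stand-ins for whatever STT, LLM, and TTS components are chosen, and the loop shows only the turn-taking control flow, not real audio handling.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    user_text: str
    reply_text: str

# Hypothetical component stubs; a real system would wrap an STT engine,
# an LLM provider, and a TTS engine behind these small interfaces.
def transcribe_stream(audio_chunks) -> str:
    return " ".join(audio_chunks)            # pretend each chunk is already text

def llm_reply(history: List[Turn], user_text: str) -> str:
    return f"(reply to: {user_text})"        # stand-in for a grounded LLM response

def synthesize(text: str) -> bytes:
    return text.encode("utf-8")              # stand-in for synthesized audio bytes

def voice_turn(audio_chunks, history: List[Turn]) -> bytes:
    # 1) STT: collapse the incoming audio stream into a transcript.
    user_text = transcribe_stream(audio_chunks)
    # 2) LLM: generate a reply, using the conversation history for context.
    reply_text = llm_reply(history, user_text)
    history.append(Turn(user_text, reply_text))
    # 3) TTS: return audio to stream back to the caller.
    return synthesize(reply_text)

conversation: List[Turn] = []
print(voice_turn(["what is", "my balance"], conversation))
```

A production pipeline would additionally handle VAD, barge-in, and bi-directional streaming, but the STT → LLM → TTS turn structure above is the core loop the role describes.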
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text + voice workflows
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation.
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Awareness of financial compliance standards (plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Realtime STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
Job Title: Software Development Engineer – III (SDE-III)
Location: Sector 55, Gurugram (Onsite)
Work Timings: Regular day shift, 5 days working
About Master-O
Master-O is a next-generation sales enablement and microskill learning platform designed to empower frontline sales teams through gamification, AI-driven coaching, and just-in-time learning. We work closely with large enterprises to improve sales readiness, productivity, and on-ground performance at scale.
As we continue to build intelligent, scalable, and enterprise-ready products, we are looking for a seasoned SDE-III who can take ownership of complex modules, mentor engineers, and contribute to architectural decisions.
Role Overview
As an SDE-III at Master-O, you will play a critical role in designing, building, and scaling core product features used by large enterprises with high user volumes. You will work closely with Product, Design, and Customer Success teams to deliver robust, high-performance solutions while ensuring best engineering practices.
This is a hands-on role requiring strong technical depth, system thinking, and the ability to work in a fast-paced B2B SaaS environment.
Required Skills & Experience
- 4–5 years of full-time professional experience in software development
- Strong hands-on experience with:
- React.js
- Node.js & Express.js
- JavaScript
- MySQL
- AWS
- Prior experience working in B2B SaaS companies (preferred)
- Experience handling enterprise-level applications with high concurrent users
- Solid understanding of REST APIs, authentication, authorization, and backend architecture
- Strong problem-solving skills and ability to write clean, maintainable, and testable code
- Comfortable working in an onsite, collaborative team environment
Good to Have
- Experience working with or integrating LLMs, AI assistants, or Agentic AI systems
- Experience with cloud platforms and deployment workflows
- Prior experience in EdTech, Sales Enablement, or Enterprise Productivity tools
Why Join Master-O?
- Opportunity to build AI-first, enterprise-grade products from the ground up
- High ownership role with real impact on product direction and architecture
- Work on meaningful problems at the intersection of sales, learning, and AI
- Collaborative culture with fast decision-making and minimal bureaucracy
- Be part of a growing product company shaping the future of sales readiness
Review Criteria
- Strong AI/ML Test Engineer
- 5+ years of overall experience in Testing/QA
- 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
- Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
- Must have strong experience to design AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
- Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
- Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
- Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
- Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
- Candidate must be based within Delhi NCR (100 km radius)
Preferred
- Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.
Job Specific Criteria
- CV Attachment is mandatory
- Have you worked with large datasets for AI/ML testing?
- Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
- Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
- Are you willing to relocate to Delhi and why (if not from Delhi)?
- Are you available for a face-to-face round?
Role & Responsibilities
- 5+ years' experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models
- Proven expertise in validating AI models for accuracy, bias, explainability, and performance to ensure decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent
- Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules
- Strong data science skills, equipped to design test strategies for AI systems, including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
- Proficient in defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter), with the ability to integrate them into CI/CD pipelines; a minimal PyTest sketch follows this list
- Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
- Education & Certifications: Bachelor's/Master's in Engineering, CS/IT, Data Science, or equivalent
- Preferred Certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials
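A minimal illustration of the kind of automated check mentioned above, written with PyTest. The thresholds, the dataset, and the `load_model` helper are hypothetical placeholders for a real "golden dataset" regression suite wired into CI/CD.

```python
# test_model_quality.py -- run with:  pytest test_model_quality.py
import numpy as np
import pytest
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

def load_model():
    """Hypothetical stand-in for loading the model under test."""
    X_train = np.zeros((10, 3))
    y_train = np.array([0] * 7 + [1] * 3)
    return DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

@pytest.fixture
def golden_set():
    # Placeholder for a curated evaluation dataset checked into the repo.
    rng = np.random.default_rng(7)
    X = rng.normal(size=(200, 3))
    y = np.array([0] * 120 + [1] * 80)
    return X, y

def test_accuracy_above_floor(golden_set):
    X, y = golden_set
    preds = load_model().predict(X)
    assert accuracy_score(y, preds) >= 0.45    # illustrative release gate

def test_no_group_bias(golden_set):
    X, y = golden_set
    preds = load_model().predict(X)
    group = X[:, 1] > 0                        # hypothetical sensitive attribute
    rate_a, rate_b = preds[group].mean(), preds[~group].mean()
    assert abs(rate_a - rate_b) <= 0.10        # simple demographic-parity style check
```

Tests like these can run in a CI/CD pipeline on every model or data change, turning accuracy and bias expectations into enforced release gates rather than manual reviews.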
Who we are
We’re Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world.
About the role
As an AI Engineer at Fluxon, you’ll take the lead in designing, building and deploying AI-powered applications for our clients.
You'll be responsible for:
- System Architecture: Design and implement end-to-end AI systems and their parts, including data ingestion, preprocessing, model inference, and output structuring
- Generative AI Development: Build and optimize RAG (Retrieval-Augmented Generation) systems and Agentic workflows using frameworks like LangChain, LangGraph, ADK (Agent Development Kit), Genkit
- Production Engineering: Deploy models to production environments (AWS/GCP/Azure) using Docker and Kubernetes, ensuring high availability and scalability
- Evaluation & Monitoring: Implement feedback loops to evaluate model performance (accuracy, hallucinations, relevance) and set up monitoring for drift in production
- Collaboration: Work closely with other engineers to integrate AI endpoints into the core product and with product managers to define AI capabilities
- Model Optimization: Fine-tune open-source models (e.g., Llama, Mistral) for specific domain tasks and optimize them for latency and cost
You'll work with technologies including:
Languages
- Python (Preferred)
- Java / C++ / Scala / R / JavaScript
AI / ML
- LangChain
- LangGraph
- Google ADK
- Genkit
- OpenAI API
- LLMs (Large Language Models)
- Vertex AI
Cloud & Infrastructure
- Platforms: Google Cloud Platform (GCP) or Amazon Web Services (AWS)
- Storage: Google Cloud Storage (GCS) or AWS S3
- Orchestration: Temporal, Kubernetes
- Data Stores
- PostgreSQL
- Firestore
- MongoDB
Monitoring & Observability
- GCP Cloud Monitoring Suite
Qualifications
- 5+ years of industry experience in software engineering roles
- Strong proficiency in Python or another preferred AI programming language such as Scala, JavaScript, or Java
- Strong understanding of Transformer architectures, embeddings, and vector similarity search
- Experience integrating with LLM provider APIs (OpenAI, Anthropic, Google Vertex AI)
- Hands-on experience with agent workflows like LangChain, LangGraph
- Experience with Vector Databases and traditional SQL / NoSQL databases
- Familiarity with cloud platforms, preferably GCP or AWS
- Understanding of patterns like RAG (Retrieval-Augmented Generation), few-shot prompting, and Fine-Tuning (a few-shot sketch follows this list)
- Solid understanding of software development practices including version control (Git) and CI/CD
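To illustrate the few-shot prompting pattern mentioned in the qualifications above, here is a small sketch that builds a prompt from labelled examples. The `call_llm` function is a hypothetical placeholder for any provider API (OpenAI, Anthropic, Vertex AI), and the example labels are invented for the demo.

```python
# Few-shot prompt construction: show the model labelled examples, then the new input.
FEW_SHOT_EXAMPLES = [
    ("Reset my password please", "account_support"),
    ("Where is my invoice for March?", "billing"),
    ("Your app keeps crashing on login", "bug_report"),
]

def build_prompt(query: str) -> str:
    lines = ["Classify each message into one label.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}\nLabel: {label}\n")
    lines.append(f"Message: {query}\nLabel:")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM provider call (OpenAI, Anthropic, Vertex AI)."""
    return "billing"

if __name__ == "__main__":
    prompt = build_prompt("I was charged twice this month")
    print(prompt)
    print("predicted label:", call_llm(prompt))
```

The same structure extends naturally to Chain-of-Thought or RAG variants: the prompt builder changes, while the surrounding application code stays the same.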
Nice to have:
- Experience with Google Cloud Platform (GCP) services, specifically Vertex AI, Firestore, and Cloud Functions
- Knowledge of prompt engineering techniques (Chain-of-Thought, ReAct, Tree of Thoughts)
- Experience building "Agentic" workflows where AI can execute tools or API calls autonomously
What we offer
- Exposure to high-profile SV startups and enterprise companies
- Competitive salary
- Fully remote work with flexible hours
- Flexible paid time off
- Profit-sharing program
- Healthcare
- Parental leave, including adoption and fostering
- Gym membership and tuition reimbursement
- Hands-on career development
About Hack Daily
Hack Daily is a fast-moving digital platform focused on building, breaking, and rethinking ideas across startups, tech, culture, and business. We value speed, originality, and clarity over safe, boring design.
Role Overview
We’re looking for a Graphic Designer who can translate ideas into visuals that stop the scroll. You’ll work closely with content, marketing, and product teams to create designs that are bold, clean, and internet-savvy.
Key Responsibilities
- Design creatives for social media (Instagram, LinkedIn, Twitter/X, Meta ads)
- Create visual assets for blogs, newsletters, landing pages, and campaigns
- Develop brand-consistent templates, thumbnails, and carousels
- Collaborate with content and growth teams to convert ideas into visuals
- Experiment with trends, formats, and styles without losing brand clarity
- Deliver designs quickly while maintaining quality
What We’re Looking For
- Strong command of Photoshop, Illustrator, Figma, or Canva
- Solid understanding of typography, color, spacing, and composition
- Ability to think conceptually, not just execute briefs
- Awareness of Gen-Z / internet design trends
- Can take feedback, iterate fast, and still defend good design choices
- Portfolio or sample work is mandatory (college projects count)
Good to Have (Not Mandatory)
- Motion graphics or basic video editing skills
- Experience with ad creatives or performance design
- Understanding of branding and visual storytelling
Why Join Hack Daily
- Real ownership and visible impact
- Fast learning curve and creative freedom
- Work with a team that values ideas over titles
- A brand that encourages experimentation, not copy-paste design
About the job
- Type: Employment
- Commitment: Full-time
- Status: Permanent
- Location: Multiple
- Mode: Hybrid
- Position: Mid-Senior
About the role
We are seeking a dynamic Innovation Analyst to help build and scale our Innovation Labs for the upcoming years. This role will partner closely with leadership to shape the innovation foundation, drive governance, and amplify our brand presence in innovation. The consultant will combine strategic foresight with hands-on execution across technology, communication, and ecosystem engagement.
How you’ll make an impact
Innovation Strategy & Insights
- Analyse market trends and emerging technologies in SAP, ServiceNow, Microsoft Power Platform, and AI relevant to the organisation's domains (Service & Asset Maintenance, SAP Cloud, Digital Excellence).
- Identify high-impact innovation bets aligned with our growth priorities.
Innovation Lab Development
- Support the design and implementation of our Innovation Hybrid Operating Model, including governance frameworks, KPIs, and dashboards.
- Collaborate with cluster heads and innovation champions to validate ideas and accelerate MVP development.
Technology & Ecosystem Engagement
- Benchmark best practices from global innovation labs and conferences.
- Co-create with universities, startups, and tech partners to expand our innovation ecosystem.
AI & Emerging Tech Enablement
- Explore and integrate AI-driven solutions for SAP FSM, ServiceNow workflows, and Microsoft platforms.
- Drive PoCs and accelerators for AI use cases in asset-intensive industries.
Governance & Project Leadership
- Lead governance for selected innovation projects, ensuring transparency and accountability.
- Manage ROI tracking, MVP seed budgets, and quarterly innovation reports.
What will make you a great fit
- Strong curiosity and enthusiasm for emerging technologies.
- Actively keep up with the latest tech trends and industry news.
- Balances innovation excitement with a practical approach to solving real business problems.
- Ability to translate cutting-edge ideas into tangible business outcomes.
What is in it for you
- Opportunity to shape organisation's innovation journey and influence strategic decisions.
- Exposure to cutting-edge technologies and global innovation ecosystems.
- Collaborative environment with senior leadership and cross-functional teams.
What you bring and build
Must-have
- Proven experience in innovation consulting, digital transformation, or technology strategy.
- Strong knowledge of SAP BTP, ServiceNow, Microsoft Power Platform, and AI trends.
- Ability to translate market insights into actionable innovation roadmaps.
Nice to have
- Excellent communication and branding skills; experience in internal comms is a plus.
- Familiarity with agile methodologies and innovation frameworks (e.g., SAP Labs, IBM Garage).
About NebuLogic:
An ISO 9001:2015 certified company that provides best-in-class digital transformation and CRM solutions to both commercial and public sector agencies worldwide. For more details, please visit our website www.NebuLogic.com
Role Summary
We are seeking Full Stack Developers with strong experience in building and enhancing CRM-based applications, combined with hands-on exposure to AI-driven tools and modern development practices.
Key Responsibilities
· Design, develop, and maintain CRM-based applications across front-end and back-end layers.
· Build scalable APIs and integrate third-party services.
· Leverage AI tools to improve productivity, code quality, and application intelligence.
· Collaborate closely with UX designers, infrastructure teams, and product stakeholders.
· Participate in code reviews, testing, and continuous improvement initiatives.
Required Skills, Experience & Education
· 3-5 years of hands-on full stack development experience.
· Proficiency in modern front-end and back-end technologies.
· Experience with AI tools, frameworks, or AI-assisted development.
· Solid understanding of REST APIs, databases, and integration patterns.
· Bachelor’s or master’s degree from a top institute.
Share your resumes directly to contact @ nebulogic .com
Role: Senior AI Engineer
Work Location: TechGenzi Coimbatore Office (ODC for Tiramai.ai)
Employment Type: Full-time
Experience: 2–5 years (Full-stack development with AI exposure)
About the Role & Work Location
The selected candidate will be employed by Tiramai.ai and will work exclusively on Tiramai.ai projects. The role will be based out of TechGenzi’s Coimbatore office, which functions as an Offshore Development Center (ODC) supporting Tiramai.ai’s product and engineering initiatives.
Primary Focus
As an AI Engineer at our enterprise SaaS and AI-native organization, you will play a pivotal role in building secure, scalable, and intelligent digital solutions. This role combines full-stack development expertise with applied AI skills to create next-generation platforms that empower enterprises to modernize and act smarter with AI. You will work on AI-driven features, APIs, and cloud-native applications that are production-ready, compliance-conscious, and aligned with our mission of delivering responsible AI innovation.
Key Responsibilities
- Design, develop, and maintain full-stack applications using Python (backend) and React/Angular (frontend).
- Build and integrate AI-driven modules, leveraging GenAI, ML models, and AI-native tools into enterprise-grade SaaS products.
- Develop scalable REST APIs and microservices with security, compliance, and performance in mind.
- Collaborate with architects, product managers, and cross-functional teams to translate requirements into production-ready features.
- Ensure adherence to secure coding standards, data privacy regulations, and human-in-the-loop AI principles.
- Participate in code reviews, system design discussions, and continuous integration/continuous deployment (CI/CD) practices.
- Contribute to reusable libraries, frameworks, and best practices to accelerate AI platform development.
Skills Required
- Strong proficiency in Python for backend development.
- Frontend expertise in React.js or Angular with 2+ years of experience.
- Hands-on experience in full SDLC development (design, build, test, deploy, maintain).
- Familiarity with AI/ML frameworks (e.g., TensorFlow, PyTorch) or GenAI tools (LangChain, vector DBs, OpenAI APIs).
- Knowledge of cloud-native development (AWS/Azure/GCP), Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of REST APIs, microservices, and enterprise-grade security standards.
- Ability to work collaboratively in fast-paced, cross-functional teams with strong problem-solving and analytical skills.
- Exposure to responsible AI principles (explainability, bias mitigation, compliance) is a plus.
Growth Path
- AI Engineer (2–4 years): focus on full-stack + AI integration, delivering production-ready features.
- Senior AI Engineer (4–6 years): lead modules, mentor juniors, and drive AI feature development at scale.
- Lead AI Engineer (6–8 years): own solution architecture for AI features, ensure security/compliance, and collaborate closely with product/tech leaders.
- AI Architect / Engineering Manager (8+ years): shape AI platform strategy, guide large-scale deployments, and influence the product/technology roadmap.
We are seeking a highly motivated and skilled AI Engineer with strong fundamentals in applied machine learning and a passion for building and deploying production-grade AI solutions for enterprise clients. You will be a key technical expert and the face of our company, directly interfacing with customers to design, build, and deliver cutting-edge AI applications. This is a customer-facing role that requires a balance of deep technical expertise and excellent communication skills.
Roles & Responsibilities
Design & Deliver AI Solutions
- Interact directly with customers.
- Understand their business requirements.
- Translate them into robust, production-ready AI solutions.
- Manage AI projects with the customer's vision in mind.
- Build long-term, trusted relationships with clients.
Build & Integrate Agents
- Architect, build, and integrate intelligent agent systems.
- Automate IT functions and solve specific client problems.
- Use expertise in frameworks like LangChain or LangGraph to build multi-step tasks.
- Integrate these custom agents directly into the RapidCanvas platform.
Implement LLM & RAG Pipelines
- Develop grounding pipelines with retrieval-augmented generation (RAG).
- Contextualize LLM behavior with client-specific knowledge.
- Build and integrate agents with infrastructure signals like logs and APIs.
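As a hedged illustration of the grounding pipeline described above, the sketch below embeds a few client-specific snippets, retrieves the closest ones with FAISS, and grounds the LLM answer in them; the snippets, model names, and prompt wording are assumptions for illustration, not RapidCanvas internals.

```python
# Minimal RAG sketch: embed client-specific snippets, retrieve the closest ones,
# and ground the LLM answer in them. The documents, model names, and prompt
# wording are illustrative assumptions, not platform specifics.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer
from openai import OpenAI

encoder = SentenceTransformer("all-MiniLM-L6-v2")
client = OpenAI()  # assumes OPENAI_API_KEY is set

# Toy client-specific knowledge base.
docs = [
    "Ticket escalations above priority P2 page the on-call SRE via PagerDuty.",
    "Database failovers are rehearsed on the first Monday of every month.",
    "VPN access requests are approved by the regional IT manager.",
]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine on unit vectors
index.add(np.asarray(doc_vecs, dtype="float32"))


def answer(question: str, k: int = 2) -> str:
    q_vec = encoder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q_vec, dtype="float32"), k)
    context = "\n".join(docs[i] for i in ids[0])
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content


print(answer("Who approves VPN access requests?"))
```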
Collaborate & Enable
- Work with customer data science teams.
- Collaborate with other internal Solutions Architects, Engineering, and Product teams.
- Ensure seamless integration of AI solutions.
- Serve as an expert on the RapidCanvas platform.
- Enable and support customers in building their own applications.
- Act as a Product Champion, providing crucial feedback to the product team to drive innovation.
Data & Model Management
- Oversee the entire AI project lifecycle.
- Start from data preprocessing and model development.
- Finish with deployment, monitoring, and optimization.
Champion Best Practices
- Write clean, maintainable Python code.
- Champion engineering best practices.
- Ensure high performance, accuracy, and scalability.
Key Skills Required
Experience
- 5+ years of hands-on experience in AI/ML engineering or backend systems.
- Recent exposure to LLMs or intelligent agents is a must.
Technical Expertise
- Proficiency in Python.
- Proven track record of building scalable backend services or APIs.
- Expertise in machine learning, deep learning, and Generative AI concepts.
- Hands-on experience with LLM platforms (e.g., GPT, Gemini).
- Deep understanding of and hands-on experience with agentic frameworks like LangChain, LangGraph, or CrewAI.
- Experience with vector databases (e.g., Pinecone, Weaviate, FAISS).
Customer & Communication Skills
- Proven ability to partner with enterprise stakeholders.
- Excellent presentation skills.
- Comfortable working independently.
- Manage multiple projects simultaneously.
Preferred Skills
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of MLOps practices.
- Experience in the AI services industry or startup environments.
Why Join Us
- High-impact opportunity: Play a pivotal role in building a new business vertical within a rapidly growing AI company.
- Strong leadership & funding: Backed by top-tier investors, our leadership team has deep experience scaling AI-driven businesses.
- Recognized as a top 5 Data Science and Machine Learning platform by independent research firm G2 for customer satisfaction.
We are looking for motivated Growth Sales Executives who will drive Snapsight's revenue growth through strategic prospecting, technical sales expertise, and customer-centric engagement.
Responsibilities:
- Conduct proactive prospecting and generate qualified leads.
- Manage the entire sales cycle from initial contact to deal closure.
- Deliver personalized, technical product demonstrations.
- Collaborate closely with product and technical teams.
- Maintain and optimize sales processes and CRM usage.
Required Qualifications:
- 2-3 years of proven sales or growth role experience in technology or SaaS.
- Technical product understanding and ability to communicate complex solutions.
- Strong proactive growth mindset and analytical skills.
- Excellent negotiation, communication, and relationship-building skills.
- Bachelor’s degree required.
Desirable Qualifications:
- Prior experience with AI-driven SaaS products.
- Familiarity with automation-driven sales processes.
Hiring Process:
- Initial screening call
- Practical sales scenario assignment (prospecting and deal closing simulation)
- Technical/product understanding interview
- Final round (culture fit and strategic discussion)
About the Role
We are looking for a passionate and creative Content Creator who can bring brand stories to life through compelling visual content. This role is ideal for someone with a strong understanding of design, luxury interiors, and lifestyle storytelling, who can translate these elements into engaging videos, reels, and digital narratives that evoke attention and emotion across platforms.
Key Responsibilities
Content Creation
- Plan and create short-form and long-form video content for Instagram, YouTube, LinkedIn, and other digital platforms.
- Script and capture behind-the-scenes footage, client walkthroughs, design process videos, and lifestyle content aligned with brand identity.
- Support the development of storyboards, scripts, and creative concepts for campaigns, shoots, and social media reels.
- Collaborate closely with videographers during installations, shoots, events, and interviews to capture live footage with professional framing, lighting, and audio.
- Create candid, real-time content featuring leadership moments, on-site interactions, and day-to-day workflows to build strong personal and professional brand presence on social media.
Strategy & Collaboration
- Work closely with Marketing and Design teams to ensure content aligns with brand tone, campaigns, and product launches.
- Contribute to content calendars, ensuring a consistent pipeline of high-quality, on-brand visual content.
- Collaborate with stylists, photographers, designers, and creative leads to maintain visual consistency across platforms.
Social Media & Trend Awareness
- Stay up to date with trends in video content, especially across Instagram Reels, YouTube Shorts, and LinkedIn.
- Proactively suggest new formats, hooks, and storytelling approaches to drive engagement and organic reach.
- Track content performance metrics and optimise videos for improved reach, retention, and engagement.
Qualifications & Skills
- Bachelor’s degree in Media, Film, Design, Communications, or a related field (preferred, not mandatory).
- 2–4 years of experience in videography, content creation, or social media production, preferably in lifestyle, design, or luxury sectors.
- Experience using AI tools for video editing, scripting, or trend analysis to improve speed and innovation in content workflows.
- Proficiency in tools such as Adobe Premiere Pro, Final Cut Pro, DaVinci Resolve, CapCut Pro, or similar.
- Hands-on experience with DSLR/mirrorless cameras, gimbals, drones, and audio equipment.
- Strong communication skills and ability to work collaboratively across teams.
- A sharp eye for luxury aesthetics, interiors, and visual storytelling.
Bonus Points
- Experience creating viral content or reels with 1M+ views.
- Familiarity with motion graphics or animation tools such as After Effects or Canva Motion.
- Interest in architecture, interior design, or luxury lifestyle branding.
What We Offer
- Opportunity to work with a leading luxury interiors and design-focused organisation.
- A collaborative, fast-paced, and creative work environment.
- Exposure to premium projects and high-end clientele.
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development, with at least 2 years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Lead AI Engineer
Location: Bengaluru, Hybrid | Type: Full-time
About Newpage Solutions
Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organisations—which include pharmaceutical, biotech, and healthcare leaders—to build transformative AI and data-driven technologies addressing real-world health challenges.
From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.
We are proud to be a ‘Great Place to Work®’ certified company for the last three consecutive years. We also hold a top Glassdoor rating and are named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.
As an organisation, we foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.
Your Mission
We’re seeking a highly experienced, technically exceptional Lead AI Engineer to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation, from model selection and orchestration design to scalable backend implementation, all while collaborating with cross-functional teams to transform AI research into production-ready solutions.
This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.
What You’ll Do
Develop AI Applications & Agentic Systems
- Architect, build, and optimise production-grade Generative AI and agentic applications using frameworks such as LangChain, LangGraph, LlamaIndex, Semantic Kernel, n8n, Pydantic AI or custom orchestration layers integrating with LLMs such as GPT, Claude, Gemini as well as self-hosted LLMs along with MCP integrations.
- Implement Retrieval-Augmented Generation (RAG) techniques leveraging vector databases (Pinecone, ChromaDB, Weaviate, pgvector, etc.) and search engines such as ElasticSearch / Solr, using both TF-IDF/BM25-based full-text search and similarity search techniques (see the sketch after this list).
- Implement guardrails and observability, and fine-tune and train models for industry- or domain-specific use cases.
- Build multi-modal workflows using text, image, voice, and video.
- Design robust prompt & context engineering frameworks to improve accuracy, repeatability, quality, cost, and latency.
- Build supporting microservices and modular backends using Python, JavaScript, or Java, aligned with domain-driven design, SOLID principles, OOP, and clean architecture, using various databases (relational, document, key-value, graph) and event-driven systems such as Kafka / MSK, SQS, etc.
- Deploy cloud-native applications in hyper-scalers such as AWS / GCP / Azure using containerisation and orchestration with Docker / Kubernetes or serverless architecture.
- Apply industry best engineering practices: TDD, well-structured and clean code with linting, domain-driven design, security-first design (secrets management, rotation, SAST, DAST), comprehensive observability (structured logging, metrics, tracing), containerisation & orchestration (Docker, Kubernetes), automated CI/CD pipelines (GitHub Actions, Jenkins).
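A minimal sketch of the hybrid full-text plus similarity retrieval referenced in the list above; it substitutes the rank-bm25 library for ElasticSearch/Solr to stay self-contained, and the corpus, embedding model, and blending weight are illustrative assumptions.

```python
# Hybrid retrieval sketch: blend BM25 lexical scores with dense cosine similarity.
# rank-bm25 stands in for ElasticSearch/Solr here; corpus and weights are illustrative.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

corpus = [
    "Patient adherence reporting for field medical teams.",
    "Adverse event intake workflow for call-center agents.",
    "Formulary lookup and prior-authorization guidance.",
]
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)


def hybrid_search(query: str, alpha: float = 0.5):
    # Lexical side: BM25 scores, min-max normalized so they are comparable.
    lex = np.array(bm25.get_scores(query.lower().split()), dtype=float)
    rng = lex.max() - lex.min()
    lex = (lex - lex.min()) / (rng if rng else 1.0)
    # Dense side: cosine similarity on normalized embeddings.
    q_vec = encoder.encode([query], normalize_embeddings=True)[0]
    dense = doc_vecs @ q_vec
    # Blend the two signals; alpha would be tuned per use case in practice.
    blended = alpha * lex + (1 - alpha) * dense
    order = np.argsort(blended)[::-1]
    return [(float(blended[i]), corpus[i]) for i in order]


for score, doc in hybrid_search("adverse event reporting"):
    print(f"{score:.3f}  {doc}")
```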
AI-Assisted Development, Context Engineering & Innovation
- Use AI-assisted development tools such as Claude Code, GitHub Copilot, Codex, Roo Code, Cursor to accelerate development while maintaining code quality and maintainability.
- Utilise coding assistant tools with native instructions, templates, guides, workflows, sub-agents, and more to create developer workflows that improve development velocity, standardisation, and reliability across AI teams.
- Ensure industry best practices to develop well-structured code that is testable, maintainable, performant, scalable, and secure.
- Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
- Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
- Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.
What You Bring
- 7–12 years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
- Experience with cloud-native deployments and services in AWS / GCP / Azure, with the ability to architect distributed systems.
- A ‘no-compromise’ attitude toward engineering best practices such as clean code, TDD, containerisation, security, CI/CD, scalability, performance, and cost optimisation.
- Active user of AI-assisted development tools (Claude Code, GitHub Copilot, Cursor) with demonstrable experience using structured workflows and sub-agents.
- A deep understanding of LLMs, context engineering approaches, and best practices, with the ability to optimise accuracy, latency, and cost.
- Python or JavaScript experience with strong grasp of OOP, SOLID principles, 12-factor application development, and scalable microservice architecture.
- Proven track record developing and deploying GenAI/LLM-based systems in production.
- Advanced understanding of context engineering, prompt construction, optimisation, and evaluation techniques.
- End-to-end implementation experience using vector databases and retrieval pipelines.
- Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
- Obsession with clean code, system scalability, and performance optimisation.
- Ability to balance rapid prototyping with long-term maintainability.
- Ability to work independently while collaborating effectively across teams.
- Commitment to staying ahead of the curve on new AI models, frameworks, and best practices.
- A founder’s mindset and enthusiasm for solving ambiguous, high-impact technical challenges.
- Bachelor’s or Master’s degree in Computer Science, Machine Learning, or a related technical discipline.
Bonus Skills / Experience
- Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
- Experience building streaming + batch data ingestion and transformation pipelines (Spark / Airflow / Beam).
- Mobile and front-end web application development experience.
What We Offer
- A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
- Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
- Balance that lasts – We respect your time and support a healthy integration of work and life.
- Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
- Meaningful rewards – Competitive compensation that recognises both contribution and potential.
We are looking for an AI Engineer (Computer Vision) to design and deploy intelligent video analytics solutions using CCTV feeds. The role focuses on analyzing real-time and recorded video to extract insights such as attention levels, engagement, movement patterns, posture, and overall group behavior. You will work closely with data scientists, backend teams, and product managers to build scalable, privacy-aware AI systems.
Key Responsibilities
- Develop and deploy computer vision models for CCTV-based video analytics
- Analyze gaze, posture, facial expressions, movement, and crowd behavior
- Build real-time and batch video processing pipelines
- Train, fine-tune, and optimize deep learning models for production
- Convert visual signals into actionable insights & dashboards
- Ensure privacy, security, and ethical AI compliance
- Improve model accuracy, latency, and scalability
- Collaborate with engineering teams for end-to-end deployment
Required Skills
- Strong experience in Computer Vision & Deep Learning
- Proficiency in Python
- Hands-on experience with OpenCV, TensorFlow, PyTorch
- Knowledge of CNNs, object detection, tracking, pose estimation
- Experience with video analytics & CCTV data
- Understanding of model optimization and deployment
Good to Have
- Experience with real-time video streaming (RTSP, CCTV feeds)
- Familiarity with edge AI or GPU optimization
- Exposure to education analytics or surveillance systems
- Knowledge of cloud deployment (AWS/GCP/Azure)
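As a hedged starting point for the responsibilities and skills listed above, the sketch below reads frames from an RTSP/CCTV feed and counts people with an off-the-shelf detector; the stream URL, sampling rate, and threshold are assumptions, and a real deployment would add tracking, pose/gaze estimation, batching, and privacy controls.

```python
# Sketch: pull frames from a CCTV/RTSP feed and run an off-the-shelf person detector.
# The stream URL, sampling rate, and confidence threshold are illustrative assumptions;
# production systems add tracking, pose/gaze models, batching, and privacy filtering.
import cv2
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

cap = cv2.VideoCapture("rtsp://camera.local/stream1")  # hypothetical feed URL
frame_idx = 0
with torch.no_grad():
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % 10:           # sample roughly every 10th frame
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        detections = model([to_tensor(rgb)])[0]
        people = sum(
            1
            for label, score in zip(detections["labels"], detections["scores"])
            if label.item() == 1 and score.item() > 0.6  # COCO class 1 == person
        )
        print(f"frame {frame_idx}: {people} people detected")
cap.release()
```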
Experienced Senior Full Stack Engineer to Build & Scale Backend for Event Ticketing Platform Using Claude Code
Overview
We have the frontend already built in Replit for both user and admin experiences. We are looking for a very experienced software engineer to design, build, and scale a production-ready backend for a high-concurrency event ticketing platform.
This role is for a true senior engineer who actively uses Claude Code as part of their daily development workflow, understands how to review and harden AI-assisted output, and can ship systems that hold up under real-world load, QA, and security testing.
This is not a prototype or demo system. This backend must be reliable, scalable, secure, and extremely well documented.
What You Will Be Building
• Backend services for an event ticketing platform
• Multi-tenant architecture supporting thousands of organizers and events
• High concurrency purchase and checkout flows
• Secure user, admin, and system-level access controls
• Production deployments with secrets management, middleware, and security layers
• Observability, logging, and metrics suitable for QA and load testing
• A thoroughly documented backend that future engineers can onboard into quickly
Core Engineering Requirements
• 10+ years of professional software engineering experience
• Expert-level experience using Claude Code or equivalent AI coding agents in real production workflows
• Deep experience building backends with Node.js
• Strong experience with Supabase and PostgreSQL
• Experience deploying and scaling applications on Vercel
• Proven experience extending existing codebases safely
• Strong API design and database performance fundamentals
• Ability to reason clearly about concurrency, transactions, and failure modes
Scalability and Performance Expectations
The backend must be designed to support:
• Hundreds of thousands of ticket purchases
• Thousands of concurrent users and tenants
• Large traffic spikes during on-sale windows
• Load testing across all major latency percentiles including p90, p95, and p99
• Extremely low tail latency under high concurrency
• Graceful degradation and recovery under stress
You should be comfortable designing and implementing caching strategies, queue-based workflows, idempotent operations, locking mechanisms, and transactional safety.
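Although the stack here is Node.js with Supabase/PostgreSQL, the concurrency pattern is language-agnostic; below is a hedged Python/PostgreSQL sketch of an idempotent, row-locked ticket purchase, with the table names, columns, and idempotency-key scheme assumed purely for illustration.

```python
# Sketch of an idempotent, concurrency-safe ticket purchase against PostgreSQL.
# The schema (events.remaining, orders.idempotency_key UNIQUE) and DSN are assumed;
# the production system described here is Node.js/Supabase, but the pattern is the same.
import psycopg2


def purchase(dsn: str, event_id: int, user_id: int, qty: int, idem_key: str) -> str:
    conn = psycopg2.connect(dsn)
    try:
        with conn:                      # one transaction: commit on success, rollback on error
            with conn.cursor() as cur:
                # Replay protection: a unique idempotency key makes client retries safe.
                cur.execute("SELECT status FROM orders WHERE idempotency_key = %s", (idem_key,))
                row = cur.fetchone()
                if row:
                    return row[0]
                # Row lock serializes concurrent buyers of the same event.
                cur.execute("SELECT remaining FROM events WHERE id = %s FOR UPDATE", (event_id,))
                remaining = cur.fetchone()[0]
                if remaining < qty:
                    return "sold_out"
                cur.execute(
                    "UPDATE events SET remaining = remaining - %s WHERE id = %s",
                    (qty, event_id),
                )
                cur.execute(
                    "INSERT INTO orders (idempotency_key, event_id, user_id, qty, status) "
                    "VALUES (%s, %s, %s, %s, 'confirmed')",
                    (idem_key, event_id, user_id, qty),
                )
                return "confirmed"
    finally:
        conn.close()
```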
Security and Production Readiness
You must be able to:
• Set up production environments with proper secrets management
• Implement middleware for authentication, authorization, and rate limiting
• Design secure communication between frontend and backend
• Apply industry best practices for security hardening
• Prepare the system to pass penetration testing
• Maintain clean separation of concerns and least privilege access
Documentation and Developer Experience Requirements
High quality documentation is a hard requirement for this role.
You must have experience:
• Using AI-powered documentation tools to generate and maintain technical documentation
• Producing clear architectural overviews and system diagrams
• Writing detailed README files and onboarding guides
• Documenting APIs, data models, and critical flows
• Keeping documentation in sync with code changes over time
We expect the backend to be easy to understand, easy to extend, and well explained for future engineers.
How We Evaluate Candidates
We are looking for engineers who can demonstrate:
• Real production systems they have built or scaled
• Clear judgment in how they use Claude Code and validate AI generated output
• Strong opinions on production readiness backed by experience
• The ability to move fast without sacrificing quality or safety
A short paid technical task involving real backend work, with production-quality code and documentation, may be used.
The Mission
Tonomo is revolutionizing e-commerce with an intelligent, autonomous platform powered by IoT and AI. We are in the Beta phase, rapidly iterating based on user feedback. We need an "Unblocker"—a senior engineer who owns the mobile experience but can dive into the Python backend to build the endpoints they need to move fast.
The Engineering Culture
We believe in AI-Augmented Engineering. We expect you to use tools like Cursor, Copilot, Gemini, GPT-4, and the like to handle boilerplate code, allowing you to focus on complex native bridges, system architecture, and "on-the-spot" bug resolution.
Core Responsibilities
- Flutter Mastery: Lead the development of our cross-platform Beta app (Android, iOS, and Web) using Flutter.
- Backend Independence: Build and modify REST APIs and microservices in Python (FastAPI) to unblock frontend features.
- AI Coding: Use tools like Cursor, Copilot, Gemini, GPT-4, and the like as part of your daily workflow.
- Agile Troubleshooting: Fix critical UI and logical bugs "on the spot" as reported by users, applying UI/UX best practices.
- Performance & Debugging: Proactively monitor app health using Sentry, Firebase Crashlytics, and Flutter DevTools.
- IoT & Integration: Work with IoT telemetry protocols (MQTT) and integrate third-party services for payments (Stripe) and Firebase.
- Native Depth: Develop custom plugins and MethodChannels to bridge Flutter with native iOS/Android functionalities.
- Dashboard Ownership: Own dashboards end-to-end. Design and build internal dashboards for business intelligence, system health and operational metrics, and IoT and backend activity insights.
- Frontend Development: Build modern, responsive web dashboards using React (or similar). Implement advanced data visualizations. Focus on clarity, performance, and usability for non-technical stakeholders.
- BI & Data Integration: Integrate dashboards with backend APIs (Python / FastAPI), databases (PostgreSQL), and analytics / metrics sources (Grafana, Prometheus, or BI tools). Work with product & ops to define what should be measured.
- Monitoring & Insights: Build visual views on top of monitoring data (Grafana or embedded views). Help translate raw metrics into actionable insights. Support ad-hoc analysis and investigation workflows.
- Execution & Iteration: Move fast in a startup environment: iterate dashboards based on real feedback. Improve data quality, consistency, and trust over time.
Technical Requirements
- Mobile Experience: 7+ years in mobile development with at least 5 highly distributed apps published.
- The Stack:
- Frontend: Expert Flutter/Dart skills
- Backend: Proficient Python developer with experience in FastAPI, SQLAlchemy, and PostgreSQL.
- Data & Backend Awareness: Comfortable consuming REST APIs and working with structured data.
- Ability to collaborate on schema design and API contracts.
- BI / Analytics (Nice to Have): Experience with BI tools or platforms (Grafana, Metabase, Superset, Looker, etc.).
- Understanding of KPIs, funnels, and business metrics.
- Experience embedding dashboards or analytics into web apps.
- Architecture: Mastery of design patterns for both mobile (MVVM/MVC) and backend microservices.
- Infrastructure: Experience with Google Cloud Platform and IoT telemetry (mandatory).
- Execution: Proactive attitude toward learning and the ability to "own" a feature from DB schema to UI implementation.
- Experience with Atlassian Jira
Soft skills:
- Self-Directed Ownership: You flag blockers early and suggest improvements without being asked. You are a well-experienced professional: you don't wait for a Jira ticket to be perfect; you ask the right questions and move the needle forward.
- Transparency: Extreme honesty about timelines—if a task is more complex than estimated, you communicate it immediately, not at the deadline.
- Clear communicator with engineers and non-technical stakeholders.
The Deal
- Part-time Retainer: 100 hours per month.
- Rate: $15 – $18 USD per hour (Performance-based).
- Impact: Direct partnership with the founding team in a fast-paced, AI-driven startup.
- Location: We value the stability and focus of Tier-2 rockstars; candidates based in Kochi, Indore, Jaipur, Ahmedabad, or similar cities are preferred.
How to Apply
If you are a self-starter who codes with AI and can bridge the gap between frontend and backend, send your resume and links to your 3 best live apps.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models.
- Develop features to improve model accuracy and outcomes.
- Deploy models into production using Docker, Kubernetes, and cloud services.
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets.
- Hands-on experience with cloud AI/ML services.
- Exposure to RAG architecture.

Job Description – Social Media Executive
Responsibilities
- Manage and publish engaging content across Instagram, Facebook, and LinkedIn
- Create blogs, website content, landing pages, and webinar invitations
- Post customized content regularly across groups and communities
- Coordinate with content writers and designers to deliver high-quality creative output
- Handle SEO optimization and leverage AI tools for video/content creation
- Plan and run ad campaigns across platforms
- Craft and distribute WhatsApp promotional messages
- Share and manage testimonials, Google feedback, and career resources (Career Book, Cluster Books, Tests, etc.)
- Ensure routine social media posting at regular intervals to maintain consistency
Requirements
- Strong knowledge of social media platforms and content creation strategies
- Basic understanding of SEO and digital marketing practices
- Ability to use AI-based tools for video and content creation
- A creative mindset with strong attention to detail
- Good communication and coordination skills
About MyOperator:
MyOperator is a Business AI Operator, a category-leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance - all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement - without fragmented tools or increased headcount.
Role Overview:
We are seeking a Customer Success professional to lead and oversee the Customer Success department for our SMBG clients. This role involves managing a team of Customer Success Executives. You will be responsible for driving the end-to-end customer journey - from onboarding to product adoption, engagement, and retention - while building scalable processes suitable for a high-volume customer base.
Key Responsibilities:
- Lead and mentor a team of Customer Success Executives.
- Drive customer onboarding, adoption, retention, and satisfaction across SMBG clients.
- Develop and implement customer success strategies and playbooks tailored for high-volume SMB customers.
- Implement and scale tech-touch engagement models for effective customer coverage.
- Develop strategies to drive deep product adoption and showcase the value of MyOperator's solutions (Cloud IVR, Call Center Software, WhatsApp API, etc.).
- Monitor health metrics, churn signals, and client escalations; design proactive action plans.
- Collaborate with Product, Sales, and Support teams to ensure a seamless customer experience.
- Deliver regular business reviews and performance reports to leadership (CEO and senior stakeholders).
- Continuously optimize processes to enhance team productivity and customer outcomes.
Qualifications:
- 3-6 years of proven experience in Customer Success / Account Management within SaaS, Telecom, CPaaS, or Cloud Communication.
- At least 2 years of direct experience leading Team Leaders / Managers.
- Strong exposure to managing high-volume SMB customer bases.
- Excellent strategic thinking, problem-solving, and analytical skills.
- Tech-savvy mindset with experience implementing automation or tech-touch models.
- Experience in reporting to senior leadership (CEO/VP-level) is highly desirable.
- Exceptional communication and stakeholder management skills.
Join us at MyOperator and be part of a dynamic team that is transforming the way businesses communicate. We offer competitive compensation, comprehensive benefits, and ample opportunities for growth and career advancement. Apply today and embark on an exciting journey with us!
Benefits:
- Career growth opportunities in a fast-growing SaaS company.
- A competitive salary and performance-based incentives.
- A dynamic, inclusive, and collaborative work environment.
- Significant opportunities for professional growth and career advancement.
- The chance to make a real impact on thousands of growing businesses in India
Senior Machine Learning Engineer
About the Role
We are looking for a Senior Machine Learning Engineer who can take business problems, design appropriate machine learning solutions, and make them work reliably in production environments.
This role is ideal for someone who not only understands machine learning models, but also knows when and how ML should be applied, what trade-offs to make, and how to take ownership from problem understanding to production deployment.
Beyond technical skills, we need someone who can lead a team of ML Engineers, design end-to-end ML solutions, and clearly communicate decisions and outcomes to both engineering teams and business stakeholders. If you enjoy solving real problems, making pragmatic decisions, and owning outcomes from idea to deployment, this role is for you.
What You’ll Be Doing
Building and Deploying ML Models
- Design, build, evaluate, deploy, and monitor machine learning models for real production use cases.
- Take ownership of how a problem is approached, including deciding whether ML is the right solution and what type of ML approach fits the problem.
- Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments.
- Work with data engineers to design and validate data pipelines that feed ML systems.
- Optimize solutions for accuracy, performance, cost, and maintainability, not just model metrics.
Leading and Architecting ML Solutions
- Lead a team of ML Engineers, providing technical direction, mentorship, and review of ML approaches.
- Architect ML solutions that integrate seamlessly with business applications and existing systems.
- Ensure models and solutions are explainable, auditable, and aligned with business goals.
- Drive best practices in MLOps, including CI/CD, model monitoring, retraining strategies, and operational readiness.
- Set clear standards for how ML problems are framed, solved, and delivered within the team.
Collaborating and Communicating
- Work closely with business stakeholders to understand problem statements, constraints, and success criteria.
- Translate business problems into clear ML objectives, inputs, and expected outputs.
- Collaborate with software engineers, data engineers, platform engineers, and product managers to integrate ML solutions into production systems.
- Present ML decisions, trade-offs, and outcomes to non-technical stakeholders in a simple and understandable way.
What We’re Looking For
Machine Learning Expertise
- Strong understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs).
- Experience choosing appropriate modeling approaches based on the problem, available data, and business constraints.
- Experience training, fine-tuning, and deploying ML and LLM models for real-world use cases.
- Proficiency in common ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.
Production and Cloud Deployment
- Hands-on experience deploying and running ML systems in production environments on AWS, GCP, or Azure.
- Good understanding of MLOps practices, including CI/CD for ML models, monitoring, and retraining workflows.
- Experience with Docker, Kubernetes, or serverless architectures is a plus.
- Ability to think beyond deployment and consider operational reliability and long-term maintenance.
Data Handling
- Strong programming skills in Python.
- Proficiency in SQL and working with large-scale datasets.
- Ability to reason about data quality, data limitations, and how they impact ML outcomes.
- Familiarity with distributed computing frameworks like Spark or Dask is a plus.
Leadership and Communication
- Ability to lead and mentor ML Engineers and work effectively across teams.
- Strong communication skills to explain ML concepts, decisions, and limitations to business teams.
- Comfortable taking ownership and making decisions in ambiguous problem spaces.
- Passion for staying updated with advancements in ML and AI, with a practical mindset toward adoption.
Experience Needed
- 6+ years of experience in machine learning engineering or related roles.
- Proven experience designing, selecting, and deploying ML solutions used in production.
- Experience managing ML systems after deployment, including monitoring and iteration.
- Proven track record of working in cross-functional teams and leading ML initiatives.
We are seeking an experienced AI Architect to design, build, and scale production-ready AI voice conversation agents deployed locally (on-prem / edge / private cloud) and optimized for GPU-accelerated, high-throughput environments.
You will own the end-to-end architecture of real-time voice systems, including speech recognition, LLM orchestration, dialog management, speech synthesis, and low-latency streaming pipelines—designed for reliability, scalability, and cost efficiency.
This role is highly hands-on and strategic, bridging research, engineering, and production infrastructure.
Key Responsibilities
Architecture & System Design
- Design low-latency, real-time voice agent architectures for local/on-prem deployment
- Define scalable architectures for ASR → LLM → TTS pipelines
- Optimize systems for GPU utilization, concurrency, and throughput
- Architect fault-tolerant, production-grade voice systems (HA, monitoring, recovery)
Voice & Conversational AI
- Design and integrate:
- Automatic Speech Recognition (ASR)
- Natural Language Understanding / LLMs
- Dialogue management & conversation state
- Text-to-Speech (TTS)
- Build streaming voice pipelines with sub-second response times
- Enable multi-turn, interruptible, natural conversations
Model & Inference Engineering
- Deploy and optimize local LLMs and speech models (quantization, batching, caching)
- Select and fine-tune open-source models for voice use cases
- Implement efficient inference using TensorRT, ONNX, CUDA, vLLM, Triton, or similar
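For illustration only, a minimal vLLM sketch of the batched local-LLM inference mentioned above; the checkpoint, GPU memory setting, and sampling parameters are assumptions, and a production voice deployment would add quantized weights, streaming token output, and Triton/TensorRT alternatives where appropriate.

```python
# Sketch: batched local LLM inference with vLLM (continuous batching + paged attention).
# The checkpoint and sampling parameters are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any locally hosted checkpoint
    gpu_memory_utilization=0.90,
)
params = SamplingParams(temperature=0.2, max_tokens=128)

# Prompts from many concurrent callers are batched into one GPU pass,
# which is what keeps per-call latency low at high concurrency.
prompts = [
    "Caller asked to reschedule an appointment. Reply in one sentence.",
    "Caller wants store opening hours. Reply in one sentence.",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```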
Infrastructure & Production
- Design GPU-based inference clusters (bare metal or Kubernetes)
- Implement autoscaling, load balancing, and GPU scheduling
- Establish monitoring, logging, and performance metrics for voice agents
- Ensure security, privacy, and data isolation for local deployments
Leadership & Collaboration
- Set architectural standards and best practices
- Mentor ML and platform engineers
- Collaborate with product, infra, and applied research teams
- Drive decisions from prototype → production → scale
Required Qualifications
Technical Skills
- 7+ years in software / ML systems engineering
- 3+ years designing production AI systems
- Strong experience with real-time voice or conversational AI systems
- Deep understanding of LLMs, ASR, and TTS pipelines
- Hands-on experience with GPU inference optimization
- Strong Python and/or C++ background
- Experience with Linux, Docker, Kubernetes
AI & ML Expertise
- Experience deploying open-source LLMs locally
- Knowledge of model optimization:
- Quantization
- Batching
- Streaming inference
- Familiarity with voice models (e.g., Whisper-like ASR, neural TTS)
Systems & Scaling
- Experience with high-QPS, low-latency systems
- Knowledge of distributed systems and microservices
- Understanding of edge or on-prem AI deployments
Preferred Qualifications
- Experience building AI voice agents or call automation systems
- Background in speech processing or audio ML
- Experience with telephony, WebRTC, SIP, or streaming audio
- Familiarity with Triton Inference Server / vLLM
- Prior experience as Tech Lead or Principal Engineer
What We Offer
- Opportunity to architect state-of-the-art AI voice systems
- Work on real-world, high-scale production deployments
- Competitive compensation and equity (if applicable)
- High ownership and technical influence
- Collaboration with top-tier AI and infrastructure talent
Big companies are like giant boats with a thousand rowers — you can’t feel your pull move the boat. Shoppin isn’t that boat. We’re a 10-person crew rowing like our lives depend on it — each one the best at what they do, each stroke moving the product forward every single day. If you believe small, fast, obsessive teams can beat giants, read on.
What You’ll Do:
- Build and optimize Shoppin’s vibe, image, and inspiration search, powering both text and image-based discovery.
- Work on vector embeddings, retrieval pipelines, and semantic search using ElasticSearch, Redis caching, and LLM APIs.
- Design and ship high-performance Python microservices that move fast and scale beautifully.
- Experiment with prompt engineering, ranking models, and multimodal retrieval.
- Collaborate directly with the founder — moving from idea → prototype → production in hours, not weeks.
Tech You’ll Work With
- Languages & Frameworks: Python, FastAPI
- Search & Infra: ElasticSearch, Redis, PostgreSQL
- AI Stack: Vector Databases, Embeddings, LLM APIs (OpenAI, Gemini, etc.)
- Dev Tools: Cursor, Docker, Kubernetes
- Infra: AWS / GCP
What We’re Looking For
- Strong mathematical intuition — you understand cosine similarity, normalization, and ranking functions.
- Experience or deep curiosity in text + image search.
- Comfort with Python, data structures, and system design.
- Speed-obsessed — you optimize for velocity, not bureaucracy.
- Hungry to go all-in, ship hard things, and make a dent.
Bonus Points
- Experience with LLM prompting or orchestration.
- Exposure to recommendation systems, fashion/culture AI, or multimodal embeddings.
- You’ve built or scaled something end-to-end yourself.
Salary: ₹3.5 LPA (based on performance)
Experience: 1–3 years (female candidates only)
We are looking for a Technical Trainer skilled in HTML, Java, Python, and AI to conduct technical training sessions. The trainer will create learning materials, deliver sessions, assess student performance, and support learners throughout the training. Strong communication skills and the ability to explain technical concepts clearly are essential.
Position Overview:
The AI Tech Lead will architect and guide the implementation of real-time AI systems across voice automation, LLM pipelines, and knowledge-enhanced applications. The role requires strong architectural judgment, hands-on expertise in AI/LLM systems, and the ability to define high-performance, scalable best practices.
Key Responsibilities:
• Architect and guide implementation of real-time AI systems across voice automation, LLM pipelines, and knowledge-enhanced applications
• Design and develop distributed, provider-agnostic AI architectures with performance guarantees including low latency, resilient failover, distributed scaling, and cost-efficiency
• Define architectural best practices for GenAI systems including prompt design, context shaping, fallback logic, caching, and real-time agent orchestration
• Lead AI model governance including evaluation and selection frameworks for multiple LLM providers, routing logic, benchmarking, and cost management
• Establish and monitor KPIs for LLM quality, latency, reliability, grounding accuracy, and system stability
• Ensure data security practices for handling of voice and transcript data, applying PII-safe methods, and supporting multi-tenant data isolation
• Own knowledge integration and RAG architecture including vector databases, retrieval strategies, chunking policies, and hybrid grounding methods
• Continuously evaluate new model capabilities and AI technology trends
Required Skills:
• Architectural judgment and hands-on expertise in AI and LLM systems
• Experience designing scalable and low-latency AI architectures
• Knowledge of multi-provider LLM integration and orchestration
• Understanding of distributed systems, microservices, and load balancing
• Strong grounding in AI model governance and benchmarking
• Awareness of data security and privacy best practices
• Experience with retrieval-augmented generation (RAG) and vector databases
Preferred (Bonus) Skills:
• Experience with function-calling and knowledge graphs
• Familiarity with hybrid grounding and retrieval enhancement strategies
• Experience with voice automation systems (STT/TTS pipelines)
We are seeking a Developer Team Lead who will own the technical execution, architecture, and delivery of Super AI’s core platforms and customer implementations. This role requires a hands-on leader who can guide full-stack and AI engineers, ensure high-quality code, and translate business and government requirements into scalable, secure, and production-ready systems.
The ideal candidate is equally comfortable writing code, reviewing architectures, mentoring developers, and collaborating with product, pre-sales, and leadership teams.
Company Description
VMax e-Solutions India Private Limited, based in Hyderabad, is a dynamic organization specializing in Open Source ERP Product Development and Mobility Solutions. As an ISO 9001:2015 and ISO 27001:2013 certified company, VMax is dedicated to delivering tailor-made and scalable products, with a strong focus on e-Governance projects across multiple states in India. The company's innovative technologies aim to solve real-life problems and enhance the daily services accessed by millions of citizens. With a culture of continuous learning and growth, VMax provides its team members opportunities to develop expertise, take ownership, and grow their careers through challenging and impactful work.
About the Role
We’re hiring a Senior Data Scientist with deep real-time voice AI experience and strong backend engineering skills.
You’ll own and scale our end-to-end voice agent pipeline that powers AI SDRs, customer support agents, and internal automation agents on calls. This is a hands-on, highly technical role where you’ll design and optimize low-latency, high-reliability voice systems.
You’ll work closely with our founders, product, and platform teams, with significant ownership over architecture and benchmarks.
What You’ll Do
1. Own the voice stack end-to-end – from telephony / WebRTC entrypoints to STT, turn-taking, LLM reasoning, and TTS back to the caller.
2. Design for real-time – architect and optimize streaming pipelines for sub-second latency, barge-in, interruptions, and graceful recovery on bad networks.
3. Integrate and tune models – evaluate, select, and integrate STT/TTS/LLM/VAD providers (and self-hosted models) for different use-cases, balancing quality, speed, and cost.
4. Build orchestration & tooling – implement agent orchestration logic, evaluation frameworks, call simulators, and dashboards for latency, quality, and reliability.
5. Harden for production – ensure high availability, observability, and robust fault-tolerance for thousands of concurrent calls in customer VPCs.
6. Shape the voice roadmap – influence how voice fits into our broader Agentic OS vision (simulation, analytics, multi-agent collaboration, etc.).
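As a deliberately simplified, non-streaming sketch of one turn of the voice stack described in the list above (STT, then LLM reasoning, then TTS) using the OpenAI SDK; real deployments stream audio both ways, handle barge-in, and ride on telephony/WebRTC transport, and the model names here are assumptions.

```python
# Simplified single-turn voice loop: transcribe -> reason -> synthesize.
# Real-time systems stream audio both ways and handle barge-in; this sketch is
# batch-style for clarity, and the model names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def voice_turn(input_wav: str, output_mp3: str) -> str:
    # 1) Speech-to-text on the caller's last utterance.
    with open(input_wav, "rb") as audio:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
    # 2) LLM reasoning over the transcript (dialog state omitted for brevity).
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a concise phone support agent."},
            {"role": "user", "content": transcript.text},
        ],
    ).choices[0].message.content
    # 3) Text-to-speech for playback to the caller.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    with open(output_mp3, "wb") as f:
        f.write(speech.read())
    return reply
```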
You’re a Great Fit If You Have
1. 6+ years of software engineering experience (backend or full-stack) in production systems.
2. Strong experience building real-time voice agents or similar systems using:
- STT / ASR (e.g. Whisper, Deepgram, Assembly, AWS Transcribe, GCP Speech)
- TTS (e.g. ElevenLabs, PlayHT, AWS Polly, Azure Neural TTS)
- VAD / turn-taking and streaming audio pipelines
- LLMs (e.g. OpenAI, Anthropic, Gemini, local models)
3. Proven track record designing and operating low-latency, high-throughput streaming systems (WebRTC, gRPC, websockets, Kafka, etc.).
4. Hands-on experience integrating ML models into live, user-facing applications with real-time inference & monitoring.
5. Solid backend skills with Python and TypeScript/Node.js; strong fundamentals in distributed systems, concurrency, and performance optimization.
6. Experience with cloud infrastructure – especially AWS (EKS, ECS, Lambda, SQS/Kafka, API Gateway, load balancers).
7. Comfortable working in Kubernetes / Docker environments, including logging, metrics, and alerting.
8. Startup DNA – at least 2 years in an early or mid-stage startup where you shipped fast, owned outcomes, and worked close to the customer.
Nice to Have
1. Experience self-hosting AI models (ASR / TTS / LLMs) and optimizing them for latency, cost, and reliability.
2. Telephony integration experience (e.g. Twilio, Vonage, Aircall, SignalWire, or similar).
3. Experience with evaluation frameworks for conversational agents (call quality scoring, hallucination checks, compliance rules, etc.).
4. Background in speech processing, signal processing, or dialog systems.
5. Experience deploying into enterprise VPC / on-prem environments and working with security/compliance constraints.
1. Security & Governance
- Implement and enforce MFA / access control / role-based permissions.
- Strengthen endpoint security, patching, device encryption, and basic incident response.
- Network hygiene: Secure Wi-Fi (guest vs internal), firewall rules, VPN (as required)
- Cloud, Email & Data Organization
- Admin ownership of Google Workspace and/or Microsoft 365
- Improve email organisation (shared inboxes, groups, retention, anti-phishing)
- Define and maintain folder structure + naming standards + permissions
2. Backup & Recovery
- Design and execute a data backup strategy (3-2-1 approach preferred)
- Schedule backups + perform restore drills for critical data
- IT Operations & Internal Communication
- Hardware/software standardization (Windows / Apple devices, printers, scanners, NAS)
- IT Helpdesk basics + asset register + access documentation
3. AI & SaaS Enablement
- Execute and implement AI Tools / SaaS for departments (securely and responsibly)
- Drive adoption/training of Google Apps, MS 365, and Apple workflows for staff + technicians
About Us
Mobileum is a leading provider of Telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence.
More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs.
Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore, and the UK, with a global headcount of over 1,800.
Join Mobileum Team
At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!
Role: GenAI/LLM Engineer – Domain-Specific AI Solutions (Telecom)
About the Job
We are seeking a highly skilled GenAI/LLM Engineer to design, fine-tune, and operationalize Large Language Models (LLMs) for telecom business applications. This role will be instrumental in building domain-specific GenAI solutions, including the development of domain-specific LLMs, to transform telecom operational processes, customer interactions, and internal decision-making workflows.
Roles & Responsibility:
- Build domain-specific LLMs by curating domain-relevant datasets and training/fine-tuning LLMs tailored for telecom use cases.
- Fine-tune pre-trained LLMs (e.g., GPT, Llama, Mistral) using telecom-specific datasets to improve task accuracy and relevance.
- Design and implement prompt engineering frameworks, optimize prompt construction and context strategies for telco-specific queries and processes.
- Develop Retrieval-Augmented Generation (RAG) pipelines integrated with vector databases (e.g., FAISS, Pinecone) to enhance LLM performance on internal knowledge.
- Build multi-agent LLM pipelines using orchestration tools (LangChain, LlamaIndex) to support complex telecom workflows.
- Collaborate cross-functionally with data engineers, product teams, and domain experts to translate telecom business logic into GenAI workflows.
- Conduct systematic model evaluation focused on minimizing hallucinations, improving domain-specific accuracy, and tracking performance improvements on business KPIs.
- Contribute to the development of internal reusable GenAI modules, coding standards, and best practices documentation.
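To illustrate the domain fine-tuning responsibilities above, a minimal LoRA/PEFT setup sketch follows; the base checkpoint, target modules, and hyperparameters are assumptions, and a real run would add tokenized telecom datasets, a Trainer/SFT loop, and systematic evaluation.

```python
# Sketch: attach LoRA adapters to a pre-trained causal LM for domain fine-tuning.
# The checkpoint, target modules, and hyperparameters are illustrative assumptions;
# data preparation, the training loop, and evaluation are omitted for brevity.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # any open checkpoint suitable for adaptation
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections, a common LoRA choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of weights will train

# From here, feed tokenized telecom corpora (tickets, logs, call records) to a
# transformers Trainer or SFT pipeline, then merge or serve the adapters.
```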
Desired Profile
- Familiarity with multi-modal LLMs (text + tabular/time-series).
- Experience with OpenAI function calling, LangGraph, or agent-based orchestration.
- Exposure to telecom datasets (e.g., call records, customer tickets, network logs).
- Experience with low-latency inference optimization (e.g., quantization, distillation).
Technical skills
- Hands-on experience in fine-tuning transformer models, prompt engineering, and RAG architecture design.
- Experience delivering production-ready AI solutions in enterprise environments; telecom exposure is a plus.
- Advanced knowledge of transformer architectures, fine-tuning techniques (LoRA, PEFT, adapters), and transfer learning.
- Proficiency in Python, with significant experience using PyTorch, Hugging Face Transformers, and related NLP libraries.
- Practical expertise in prompt engineering, RAG pipelines, and LLM orchestration tools (LangChain, LlamaIndex).
- Ability to build domain-adapted LLMs, from data preparation to final model deployment.
Work Experience
7+ years of professional experience in AI/ML, with at least 2 years of practical exposure to LLMs or GenAI deployments.
Educational Qualification
- Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, or a related field.
- Ph.D. preferred for foundational model work and advanced research focus.
This is a unique opportunity to work directly with the Founder’s Office at iDreamCareer. As a part of the Founder's Office, you will get a front-row seat to how a mission-driven organization operates and scales. You'll contribute to key initiatives and support high-priority tasks across functions. We're looking for someone who’s organized, curious, and ready to take ownership.
Key Responsibilities:
- Provide direct support to the Founder's office on daily operations and key projects.
- Assist in tracking key business metrics and preparing reports and presentations for the leadership team.
- Help coordinate projects across different departments, ensuring smooth communication and timely execution of tasks.
- Manage documentation and handle scheduling for important internal and external meetings.
- Conduct foundational research on assigned topics to support ongoing projects.
- Handle special projects and ad-hoc tasks as they arise within the Founder's office.
Qualifications:
- 0–4 years of relevant work experience
- Strong organizational skills and the ability to manage multiple tasks simultaneously.
- Excellent communication and interpersonal skills.
- A strong sense of ownership and a proactive, "can-do" attitude.
Probation - 6 months
Preferred Skills:
- Passion for Impact: A genuine love for the education industry and a strong alignment with our mission to empower students to build better futures.
- First-Principles Thinking: The ability to break down complex problems into their core elements and reason up from there. We value clear, structured thought.
- Data-Driven Communication: High proficiency in using MS Excel for analysis and problem-solving, coupled with the ability to create clear, concise presentations (PowerPoint, Google Slides).
- AI Proficiency: Familiar with leveraging AI tools (e.g., ChatGPT) for research, drafting, and task efficiency.
- Comfortable using AI tools (like ChatGPT), spreadsheets, slides, and productivity software
- Grit & Adaptability: Resilience in the face of challenges and the ability to thrive in a fast-paced, dynamic environment.
- Intellectual Curiosity: A genuine desire to learn rapidly across different business functions.
The selected candidate is required to fine-tune LLM models, train them in distributed mode using the SLURM framework, and perform the necessary data engineering, training-process monitoring and tuning, analysis, and documentation.
ROLES AND RESPONSIBILITIES:
Video-led Content Strategy:
- Implement the video-led content strategy to meet business objectives like increase in views, leads, product awareness and CTR
- Execute the video content framework that talks to a varied TG, mixes formats and languages (English and vernacular)
Production:
- Ability to write/edit clear and concise copy and briefs unique to the platform, and influence/direct designers and agencies for creative output
- Creating the monthly production pipeline, review and edit every piece of video content that is produced
- Explore AI-based and production automation tools to help create communication stimuli at scale
- Manage our agency and partner ecosystem to support high-scale video production
- Manage the monthly production flow and maintain data sheets to enable ease of tracking
- Increase CTR, Views and other critical metrics for all video production
Project Management:
- Oversee the creation of various video formats, including explainer videos, product demos, customer testimonials, etc.
- Plan and manage video production schedules, ensuring timely delivery of projects within budget
- Upload and manage video content across multiple digital platforms, including the brand's own channels and other relevant platforms
- Ensure all video content is optimized for each platform, following best practices for SEO and audience engagement
- Coordinate with the content team to integrate video content on the platforms
- Maintain an archive of video assets and ensure proper documentation and tagging
Capabilities:
- Drive the development of capabilities around production, automation, and upload, reducing turnaround time (TAT) and effort
- Work with technology teams to explore Gen AI tools to deliver output at scale and speed
- Identify opportunities for new formats and keep up with trends in the video content space
Customer obsession and governance:
- Relentless focus on making customer interactions non-intrusive; using video content to create a frictionless experience
- Zero tolerance for content and communication errors
- Develop a comprehensive video guidelines framework that is easy to use by businesses yet creates a distinct identity for the brand
- Have a strong eye for grammar and ensure that every content unit adheres to the brand tone of voice
- Create checks and balances in the system so that all customer-facing content is first time right, every time
Performance tracking:
- Track and analyze production, go-live status, and engagement metrics using tools like Google Analytics
- Gauge efficacy of video content produced, and drive changes wherever needed
- Provide regular reports on video performance, identifying trends, insights, and areas for improvement
IDEAL CANDIDATE:
Qualifications:
- Bachelor's degree in Communications, Digital Marketing, Advertising or a related field
- Proven experience as a creative/content writer or in a similar role, preferably with exposure to AI-driven content creation.
Work Experience:
- 3–5 years of relevant experience in content marketing/advertising; experience in digital marketing with a focus on video content is an advantage
Skills:
- Excellent command over the English language
- Hands-on experience of copywriting, editing, and creating communication
- Ability to handle complex briefs and ideate out of the box
- Creative thinking and problem-solving skills, with a passion for storytelling and visual communication
- Deep customer focus by understanding customer behaviour and analyzing data & real-world experiences
- Detailed orientation & very structured thinking, think of customers' entire journey and experience
- Strong communication and collaboration skills to effectively work with diverse teams
- Passion for emerging technologies and the ability to adapt to a fast-paced and evolving environment
- Excellent project management skills, with the ability to manage multiple projects simultaneously and meet tight deadlines
- Proficiency in AI tools and video editing software (e.g., Adobe Premiere Pro, Final Cut Pro) and familiarity with graphic design software (e.g., Adobe After Effects, Photoshop)
PERKS, BENEFITS, WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified non-banking financial companies and among Asia’s Top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of our 500+ locations across India.
AI Agent Builder – Internal Functions and Data Platform Development Tools
About the Role:
We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.
This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.
You Will:
• Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.
• Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.
• Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.
• Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability (a template sketch follows this list).
• Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.
• Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.
• Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.
• Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
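As a rough illustration of the prompt-engineering item above, here is a sketch of a reusable prompt template for a hypothetical connector-onboarding flow; the role wording, field names, and example values are placeholders and do not come from any Copilot Studio API:

```python
# Illustrative reusable prompt template for a connector-onboarding assistant.
# All names and wording are hypothetical; only the templating pattern matters.
from string import Template

CONNECTOR_ONBOARDING_PROMPT = Template(
    "You are an assistant that helps data engineers onboard new connectors.\n"
    "Source system: $source_system\n"
    "Target schema: $target_schema\n"
    "Task: draft the pipeline scaffolding steps and list any missing documentation.\n"
    "Answer using only the linked connector specs and ERDs."
)

def build_prompt(source_system: str, target_schema: str) -> str:
    """Fill the template so the same conversational flow can be reused per connector."""
    return CONNECTOR_ONBOARDING_PROMPT.substitute(
        source_system=source_system, target_schema=target_schema
    )

if __name__ == "__main__":
    print(build_prompt("salesforce_crm", "sales_raw.contacts"))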
What We're Looking For:
• Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms (e.g., ChatGPT, Claude).
• Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.
• Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.
• Experience in prompt engineering and conversational UX design.
• Ability to translate business needs into scalable AI solutions.
• Excellent communication and collaboration skills across technical and non-technical stakeholders.
Bonus Points:
• Experience with GitLab Duo, Notebook LM, or other AI developer tools.
• Background in enterprise data platforms, ETL pipelines, or internal business systems.
• Exposure to AI governance, security, and compliance frameworks.
• Prior work in a regulated industry (e.g., healthcare, finance) is a plus.
1) Be open to learning new frameworks like Hapi.js, TypeScript, Nest.js
2) Strong DB concepts and hands-on knowledge of MongoDB and Redis
3) Experience working with microservices will be a plus
4) Experience working with JWT and IAM systems will be a plus
5) Experience working with Postman and Swagger will be a plus
6) TDD knowledge is an advantage, along with experience writing unit tests and familiarity with test coverage concepts
7) Strong operating-system knowledge is a plus, including how to manage threads
8) Working experience with RabbitMQ or Kafka will be a plus
9) Strong knowledge of JS internals is a must
You can contact me on 9316120132.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least two use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
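As one concrete example of the monitoring bullet above, a small, self-contained sketch of a Population Stability Index (PSI) check between a training feature sample and a recent production window; the data is simulated and the ~0.2 threshold mentioned in the comment is a common rule of thumb, not a fixed standard:

```python
# Minimal drift check: Population Stability Index (PSI) between a training-time
# feature distribution and a recent production window. All data here is simulated.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two 1-D samples; larger values indicate more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 10_000)
    live_scores = rng.normal(0.3, 1.0, 10_000)   # simulated shift in production
    value = psi(train_scores, live_scores)
    print(f"PSI = {value:.3f}  (values above ~0.2 are often treated as drift)")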
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC; see the evaluation sketch after this section)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
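To illustrate the fraud/risk bullet referenced above, a minimal sketch of evaluating a classifier on heavily imbalanced synthetic data with PR-AUC (average precision); the class weights, model choice, and sample sizes are illustrative only:

```python
# Evaluating a fraud-style classifier on imbalanced synthetic data with PR-AUC,
# which is more informative than ROC-AUC at a ~1% positive rate.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=20_000, n_features=20, weights=[0.99, 0.01], random_state=42
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print(f"PR-AUC (average precision): {average_precision_score(y_te, scores):.3f}")
print(f"Baseline (positive rate):   {y_te.mean():.3f}")
```

A PR-AUC well above the positive-rate baseline is the usual signal that the ranking is useful despite the imbalance.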
🎯 About Us
Stupa builds cutting-edge AI for real-time sports intelligence: automated commentary, player tracking, non-contact biomechanics, ball trajectory, LED graphics, and broadcast-grade stats. Your models will be seen live by millions across global events.
🌍 Global Travel
Work that literally travels the world. You’ll deploy systems at international tournaments across Asia, Europe, and the Middle East, working inside world-class stadiums, courts, and TV production rooms.
✨ What You’ll Build
- AI Language Products
- Automated live commentary (LLM + ASR + OCR), real-time subtitles, AI storytelling.
- Non-Contact Measurement (CV + Tracking + Pose Estimation)
- Player velocity, footwork, acceleration, shot recognition, 2D/3D reconstruction, real-time edge inference.
- End-to-End Streaming Pipelines
- Temporal segmentation, multi-modal fusion, low-latency edge + cloud deployment.
🧠 What You’ll Do
Train and optimise ML/CV/NLP models for live sports, build tracking & pose pipelines, create LLM/ASR-based commentary systems, deploy on edge/cloud, ship rapid POCs→production, manage datasets & accuracy, and collaborate with product, engineering, and broadcast teams.
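As a rough sketch of the edge-deployment path mentioned above, the snippet below exports a placeholder keypoint network to ONNX and runs it with onnxruntime; the tiny CNN, tensor shapes, and file name stand in for a real pose/tracking model:

```python
# Export a stand-in keypoint model to ONNX and run it with onnxruntime.
# Shapes, names, and the network itself are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

class TinyKeypointNet(nn.Module):
    """Stand-in for a pose-estimation backbone: image in, 17 keypoint heatmaps out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 17, 1))
    def forward(self, x):
        return self.net(x)

model = TinyKeypointNet().eval()
dummy = torch.randn(1, 3, 256, 256)
torch.onnx.export(model, dummy, "keypoints.onnx",
                  input_names=["image"], output_names=["heatmaps"])

session = ort.InferenceSession("keypoints.onnx", providers=["CPUExecutionProvider"])
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)  # placeholder video frame
heatmaps = session.run(None, {"image": frame})[0]
print("Heatmap tensor shape:", heatmaps.shape)  # (1, 17, 256, 256)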
🧩 Requirements
Core Skills:
- Strong ML fundamentals (NLP/CV/multimodal)
- PyTorch/TensorFlow, transformers, ASR or pose estimation
- Data pipelines, optimisation, evaluation
- Deployment (Docker, ONNX, TensorRT, FastAPI, K8s, edge GPU)
- Strong Python engineering
Bonus: Sports analytics, LLM fine-tuning, low-latency optimisation, prior production ML systems.
🌟 Why Join Us
- Your models go LIVE in global sports broadcasts
- International travel for tournaments
- High ownership, zero bureaucracy
- Build India’s most advanced AI × Sports product
- Cool, futuristic problems + freedom to innovate
- Up to ₹40LPA for exceptional talent
🔥 You Belong Here If You…
Build what the world hasn’t seen • Want impact on live sports • Thrive in fast-paced ownership-driven environments.
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask (see the endpoint sketch after this list)
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
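As a hedged sketch of the API-building responsibility flagged above, a minimal FastAPI endpoint that forwards a question to an Azure OpenAI chat deployment; the environment-variable names, the API version, and the deployment name "gpt-4o-mini" are placeholders to be replaced with the actual Azure resource details:

```python
# Minimal FastAPI endpoint backed by an Azure OpenAI chat deployment.
# Endpoint, key, API version, and deployment name are placeholders.
import os
from fastapi import FastAPI
from pydantic import BaseModel
from openai import AzureOpenAI

app = FastAPI()
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

class Ask(BaseModel):
    question: str

@app.post("/ask")
def ask(payload: Ask) -> dict:
    """Return the model's answer for a single question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Azure deployment name (placeholder)
        messages=[{"role": "user", "content": payload.question}],
    )
    return {"answer": response.choices[0].message.content}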
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8–10 years of experience in software development, machine learning, AI use-case development, and product creation, along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes (a minimal retrieval sketch follows this block).
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
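As a minimal, self-contained illustration of the RAG bullet referenced above, the sketch below ranks syllabus snippets against a student question and assembles a grounded prompt; TF-IDF stands in for a real embedding model and vector database, and the snippets and prompt wording are illustrative:

```python
# Retrieval step of a RAG pipeline, sketched with TF-IDF instead of embeddings.
# Snippets, question, and prompt wording are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

snippets = [
    "Ohm's law states that V = I * R for an ohmic conductor.",
    "Kirchhoff's current law: currents into a node sum to zero.",
    "Projectile range is maximised at a 45 degree launch angle (no air resistance).",
]
question = "Why does doubling the resistance halve the current at fixed voltage?"

vectorizer = TfidfVectorizer().fit(snippets + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(snippets))[0]
best = snippets[int(scores.argmax())]   # top-1 retrieval; real systems take top-k

prompt = (
    "Answer using only the context below and cite it.\n"
    f"Context: {best}\n"
    f"Question: {question}"
)
print(prompt)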
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for professionals with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights (see the sketch after this list).
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
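As a small illustration of the EDA responsibility referenced above, a pandas sketch that profiles a dataset and surfaces a simple trend; the file name and column names are hypothetical:

```python
# Quick EDA pass: shape, missing values, numeric summary, and a monthly trend.
# "orders.csv" and its columns are placeholders for a real dataset.
import pandas as pd

df = pd.read_csv("orders.csv", parse_dates=["order_date"])

print(df.shape)                                               # rows x columns
print(df.isna().mean().sort_values(ascending=False).head())   # worst missing-value columns
print(df.describe())                                          # numeric distribution summary

# Example trend: monthly order volume, a typical input for a BRD or dashboard.
monthly = df.set_index("order_date").resample("MS").size()
print(monthly.tail(12))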
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST