
About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for a Software Engineer to join our engineering team in our Bangalore office. We're looking for someone who is an all-rounder, but has particularly exceptional backend engineering skills.
Our ideal candidate has built AI agents at the orchestration layer level and leveraged clever engineering techniques to improve latency & reliability for complex workflows.
We pay well above market for the country's best talent and provide a number of excellent perks.
Responsibilities
In this role, you will have the opportunity to build state-of-the-art AI agents, and learn what it takes to build an industry-leading multimodal, multi-agent suite.
You'll wear many hats. Your responsibilities will fall into 2 categories:
AI Engineering
- Develop AI agents with a high bar for reliability and performance.
- Build SOTA LLM-powered tools for providers, practices, and patients.
- Architect our data annotation, fine-tuning, and RLHF workflows.
- Live on the bleeding edge, ensuring we have the most cutting-edge agents each week as the industry evolves.
Full-Stack Engineering (80% backend, 20% frontend)
- Lead the team in designing scalable architecture to support performant web applications.
- Develop features end-to-end for our web applications with industry-leading product and user experience (TypeScript, Node.js, Python, etc.).
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard, it's a good problem to have :)

We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
We are looking for a passionate AI/ML Intern with hands-on exposure to Large Language Models (LLMs), fine-tuning techniques like LoRA, and strong fundamentals in Data Structures & Algorithms (DSA). This role is ideal for someone eager to work on real-world AI applications, experiment with open-source models, and contribute to production-ready AI systems.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative, and flexible work environment.
After completing the internship, there is a chance of a full-time role as an AI/ML Engineer (INR 6-8 LPA).
Compensation:
- Stipend: base is INR 8,000 per month and can increase up to INR 20,000 depending on performance metrics.
Key Responsibilities
- Work on Large Language Models (LLMs) for real-world AI applications.
- Implement and experiment with LoRA (Low-Rank Adaptation) and other parameter-efficient fine-tuning techniques.
- Perform model fine-tuning, evaluation, and optimization.
- Engage in prompt engineering to improve model outputs and performance.
- Develop backend services using Python for AI-powered applications.
- Utilize GitHub for version control, including managing branches, pull requests, and code reviews.
- Work with AI platforms such as Hugging Face and OpenAI to deploy and test models.
- Collaborate with the team to build scalable and efficient AI solutions.
Must-Have Skills
- Strong proficiency in Python.
- Hands-on experience with LLMs (open-source or API-based).
- Practical knowledge of LoRA or other parameter-efficient fine-tuning techniques.
- Solid understanding of Data Structures & Algorithms (DSA).
- Experience with GitHub and version control workflows.
- Familiarity with Hugging Face Transformers and/or OpenAI APIs.
- Basic understanding of Deep Learning and NLP concepts.
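Since the role centres on LoRA, the core idea is easy to show in a few lines: freeze the full weight matrix W and learn only a low-rank update scaled by alpha/r. The sketch below is plain-Python arithmetic, not a real training setup; all matrices and values are illustrative.

```python
# Minimal illustration of LoRA (Low-Rank Adaptation): the frozen weight
# matrix W is adapted by a low-rank update (alpha / r) * B @ A, where
# r << min(d, k). Pure-Python matmul; values are illustrative only.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_adapt(W, A, B, alpha):
    r = len(A)          # rank of the update
    scale = alpha / r
    BA = matmul(B, A)   # d x k, same shape as W
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen 2x2 weight, rank-1 adapters (B: 2x1, A: 1x2), alpha = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
W_adapted = lora_adapt(W, A, B, alpha=1.0)
print(W_adapted)  # [[1.5, 0.5], [1.0, 2.0]]
```

In a real fine-tuning run only A and B receive gradients, which is why the method is parameter-efficient: for a d x k layer the trainable parameter count drops from d*k to r*(d + k).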
Senior Back-End Engineer
The ideal candidate will have a strong background in building scalable applications, a deep understanding of back-end technologies, and experience with cloud infrastructure. As a Back End Engineer, you will be responsible for designing, developing, and maintaining a scalable workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in Scala, Python, AI Agents/APIs, and GCP will be crucial in ensuring our system is reliable, performant, and scalable.
Key Responsibilities:
Back-End Development:
- Build and maintain back-end services and APIs using Scala.
- Implement and optimize the orchestration workflow system, including database queries and operations.
- Build API integrations with third-party APIs and services.
- Ensure robust and scalable server-side logic.
Cloud Integration:
- Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
- Utilize GCP services to enhance application performance and scalability.
- Implement cloud-based solutions for data storage, processing, and analytics.
Collaboration And Communication:
- Work closely with cross-functional teams to define, design, and ship new features.
- Participate in code reviews and contribute to sharing team knowledge.
- Document development processes, coding standards, and project requirements.
Qualifications:
- Educational Background:
- Completed a bachelor's or master's degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Proficiency in Scala programming language.
- Strong experience with React (ReactJS).
- Familiarity with Google Cloud Platform (GCP) and its services.
- Knowledge of front-end development tools and best practices.
- Understanding of RESTful API design and implementation.
- Soft Skills:
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Eagerness to learn and adapt to new technologies and challenges.
Preferred Qualifications:
- Experience with version control systems such as Git.
- Familiarity with CI/CD pipelines and DevOps practices.
- Understanding of workflow management systems and their requirements.
- Experience with containerization technologies like Docker.
Must have Skills
- Scala - 4 years
- React.js - 1 year
- RESTful APIs - 4 years
- Docker - 2 years
- Python - 3 years
- Artificial Intelligence - 2 years
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments.
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
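Since the role pairs LLMs with RAG for validation, search, and question answering, here is a deliberately toy sketch of the retrieve-then-prompt control flow. A production system would use dense embeddings and a vector store; the scoring function, helper names, and documents below are all made up for illustration.

```python
# Toy sketch of the retrieval step in a RAG pipeline: score documents
# against a query by word overlap, then splice the best match into a
# prompt for an LLM. Real systems use dense embeddings and a vector DB;
# this only illustrates the control flow.

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)  # Jaccard overlap as a stand-in for cosine similarity

def retrieve(query, docs, k=1):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The loan application requires two forms of identification.",
    "Underwriting decisions are reviewed by a credit analyst.",
    "Office hours are nine to five on weekdays.",
]
prompt = build_prompt("What does the loan application require?", docs)
print(prompt)
```

Grounding the model in retrieved context this way is what makes RAG useful for the high-assurance document workflows the posting describes: the answer can be traced back to a specific source passage.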
- 7+ years of experience in Python development.
- Good experience in microservices and API development.
- Must have exposure to large-scale data.
- Good to have: GenAI experience.
- Code versioning and collaboration (Git).
- Knowledge of libraries for extracting data from websites (web scraping).
- Knowledge of SQL and NoSQL databases.
- Familiarity with RESTful APIs.
- Familiarity with cloud (Azure/AWS) technologies.
About Wissen Technology:
• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
• Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
• Globally present with offices in the US, India, the UK, Australia, Mexico, and Canada.
• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
• Wissen Technology has been certified as a Great Place to Work®.
• Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE.
Website : www.wissen.com
Location: Mumbai, Maharashtra, India
Sector: Technology, Information & Media
Company Size: 500 - 1,000 Employees
Employment: Full-Time, Permanent
Experience: 10 - 14 Years (Engineering Leadership)
Level: Engineering Manager / Group EM
ABOUT THIS MANDATE :
Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.
This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector, and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.
We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.
THE OPPORTUNITY AT A GLANCE :
AI-First Engineering Culture :
- Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.
Hands-On Engineering Leadership :
- Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.
People & Org Builder :
- Grow engineers into leaders. Build squads of 6-15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.
KEY RESPONSIBILITIES :
1. Hands-On Technical Engagement :
- Remain deeply embedded in the technical work: participate in design reviews, architecture decisions, and critical code reviews
- Set and uphold the engineering quality bar : performance benchmarks, security standards, test coverage, and release quality
- Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
- Identify and resolve systemic technical debt and architectural risks across team-owned services
- Unblock engineers by diving into complex problems: debugging, pair programming, and system analysis when it matters
- Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability
2. AI Adoption, Integration & Transformation (2026 Mandate) :
- Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
- Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow: coding, testing, documentation, incident response
- Embed LLM-powered capabilities into the product : recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
- Lead evaluation and adoption of AI-assisted SDLC practices : automated code review, AI-generated test suites, intelligent observability, and anomaly detection
- Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
- Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks
- Create an experimentation culture : run structured AI pilots, measure productivity impact, and scale what works
- Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications
3. People Leadership & Team Development :
- Lead, manage, and grow squads of 6-15 engineers across seniority levels (L2 through L6 / Junior through Staff)
- Conduct structured 1:1s, career growth conversations, and development planning with every direct report
- Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by end of 2026
- Build and maintain a high-performance team culture : clarity of ownership, accountability, fast feedback loops, and psychological safety
- Drive performance management fairly and rigorously: recognise top performers, manage underperformance constructively
- Lead technical hiring end-to-end : define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
- Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering
4. Engineering Delivery & Execution Excellence :
- Own end-to-end delivery for multiple product squads from planning and scoping through production release and post-launch stability
- Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
- Drive predictable delivery : maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams.
- Establish and own engineering KPIs : DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends
- Lead incident management : build blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
- Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
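For reference, the DORA metrics the role is expected to own can be computed directly from deployment records. This is a hedged sketch: the record schema (`day`, `caused_failure`) is hypothetical, and real pipelines would pull these fields from a deploy log or CI system.

```python
# Sketch of computing two DORA metrics (deployment frequency and
# change failure rate) from deployment records. The record schema
# used here is hypothetical.
from datetime import date

deployments = [
    {"day": date(2026, 1, 5), "caused_failure": False},
    {"day": date(2026, 1, 6), "caused_failure": True},
    {"day": date(2026, 1, 8), "caused_failure": False},
    {"day": date(2026, 1, 9), "caused_failure": False},
]

def deployment_frequency(deps, period_days):
    """Deployments per day over the observation window."""
    return len(deps) / period_days

def change_failure_rate(deps):
    """Fraction of deployments that caused a production failure."""
    return sum(d["caused_failure"] for d in deps) / len(deps)

print(deployment_frequency(deployments, period_days=7))  # 4 deploys over 7 days
print(change_failure_rate(deployments))                  # 0.25
```

Lead time for changes and MTTR follow the same pattern: timestamp deltas aggregated over the same window, which is why instrumenting deploys and incidents consistently matters more than the arithmetic itself.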
5. Strategic Leadership & Cross-Functional Influence :
- Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
- Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
- Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
- Contribute to org-wide engineering strategy : platform investments, build-vs-buy decisions, and shared infrastructure priorities
- Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
- Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use
AI TRANSFORMATION LEADERSHIP (2026 EXPECTATIONS) :
In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role :
AI Developer Productivity
- Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.
LLM & GenAI Product Features
- Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.
AI-Augmented Observability
- Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.
Team AI Fluency :
- Build mandatory AI literacy across all engineering levels.
- Every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.
Responsible AI Governance :
- Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.
TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED :
- Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
- Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
- AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
- Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
- Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
- Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
- MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
- Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
- CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)
QUALIFICATIONS & CANDIDATE PROFILE :
Education :
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
- Demonstrated engineering depth and leadership impact may complement institution pedigree
Experience :
- 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
- Proven track record of managing and scaling engineering teams (6-15+ engineers) in a fast-growing SaaS or digital product environment
- Hands-on backend engineering background: must be able to read, write, and critique production code
- Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
- Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
- Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
- Demonstrated career stability: minimum of 2 years of average tenure per organisation.
The Ideal Engineering Manager in 2026 :
- Leads with context, not control; empowers engineers while maintaining accountability and quality
- Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
- Sees AI as a force multiplier for the team, not a threat; actively experiments with and advocates for AI tooling
- Measures success by team outcomes, not personal output; takes pride in what the team ships, not what they build alone
- Creates feedback loops obsessively: between product and engineering, between seniors and juniors, between metrics and decisions
- Has strong opinions, loosely held; brings conviction to discussions but updates on evidence
- Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites
WHY THIS ROLE STANDS APART :
AI Transformation at Scale :
- Lead one of the most significant AI adoption programmes in India's digital media sector.
- Your decisions will shape how hundreds of engineers work in 2026 and beyond.
Hands-On & Strategic Balance :
- A rare EM role that actively encourages technical depth.
- Stay close to the code while owning the people agenda - the best of both worlds.
Established Platform, Real Scale :
- 500-1,000 engineers, proven product-market fit, and the org maturity to execute.
- This is not a greenfield startup gamble; it is a serious company with serious ambition.
Clear Leadership Growth Path :
- A visible, direct path toward Director / VP of Engineering.
- Senior leadership is invested in growing its next generation of technology executives.
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
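The multi-agent and tool-calling experience asked for above reduces, at its core, to a dispatch loop: the model emits a tool name plus arguments, the runtime executes the matching function, and the result is fed back. The sketch below is framework-agnostic; the tool registry, the stub tools, and the fake model reply are all simplified assumptions, not any specific library's API.

```python
# Minimal sketch of an agent tool-calling loop. A real LLM would return
# the tool-call JSON; here a stub stands in for the model so the
# dispatch logic can run end to end.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub tool

def add(a: float, b: float) -> float:
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def fake_llm(prompt: str) -> str:
    # Stand-in for a model response requesting a tool call.
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})

def run_agent(prompt: str) -> str:
    reply = json.loads(fake_llm(prompt))
    tool = TOOLS[reply["tool"]]      # dispatch on the requested tool
    result = tool(**reply["args"])   # execute with model-supplied args
    return f"Tool {reply['tool']} returned: {result}"

print(run_agent("What is 2 + 3?"))  # Tool add returned: 5
```

Frameworks like LangChain and LangGraph wrap this loop with schema validation, retries, and state management, but the control flow they orchestrate is the same.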
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities :
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
Required Skills & Qualifications :
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with cloud infrastructure (any major provider) and deployment practices.
- Data Engineering or Analytics expertise preferred, e.g. Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, and data warehouse development.
Job Title: AI/ML Engineer – Voice (2–3 Years)
Location: Bengaluru (On-site)
Employment Type: Full-time
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI Engineer (Voice) with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents.
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization).
- Implement voice understanding modules, speech intent extraction, and context tracking.
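The bullets above describe a streaming STT → LLM → TTS loop. A minimal sketch of how such a pipeline can be wired together with asyncio queues (every model call here is a hypothetical stub — a real system would plug in actual STT, LLM, and TTS services):

```python
import asyncio

async def stt_stage(audio_chunks, text_q):
    # Hypothetical STT stub: "transcribes" each incoming audio chunk.
    for chunk in audio_chunks:
        await text_q.put(f"transcript({chunk})")
    await text_q.put(None)  # end-of-stream sentinel

async def llm_stage(text_q, reply_q):
    # Hypothetical LLM stub: turns each transcript into a reply.
    while (text := await text_q.get()) is not None:
        await reply_q.put(f"reply_to[{text}]")
    await reply_q.put(None)

async def tts_stage(reply_q, audio_out):
    # Hypothetical TTS stub: "synthesizes" each reply back to audio.
    while (reply := await reply_q.get()) is not None:
        audio_out.append(f"audio<{reply}>")

async def run_pipeline(audio_chunks):
    # Bounded queues give natural backpressure between the three stages,
    # which all run concurrently rather than turn by turn.
    text_q, reply_q = asyncio.Queue(maxsize=4), asyncio.Queue(maxsize=4)
    audio_out = []
    await asyncio.gather(
        stt_stage(audio_chunks, text_q),
        llm_stage(text_q, reply_q),
        tts_stage(reply_q, audio_out),
    )
    return audio_out

out = asyncio.run(run_pipeline(["c1", "c2"]))
print(out)  # ['audio<reply_to[transcript(c1)]>', 'audio<reply_to[transcript(c2)]>']
```

The same staged-queue shape carries over when the stubs are replaced by streaming clients for real STT/TTS engines and an LLM endpoint.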
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text and voice workflows.
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation.
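To illustrate the retrieval step of a RAG pipeline, here is a toy top-k retriever over hand-made embeddings (the documents and vectors are invented for the example; a production system would use a real embedding model and a vector store):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # corpus: list of (text, embedding) pairs; return top-k texts by similarity.
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

corpus = [
    ("KYC onboarding checklist", [0.9, 0.1, 0.0]),
    ("Fraud pattern report",     [0.1, 0.9, 0.1]),
    ("TTS latency tuning notes", [0.0, 0.2, 0.9]),
]
# A query vector close to the fraud document in this toy embedding space:
print(retrieve([0.2, 0.8, 0.1], corpus, k=1))  # ['Fraud pattern report']
```

The retrieved texts would then be packed into the LLM prompt as grounding context — the "G" in RAG.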
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Maintain awareness of financial compliance standards (a plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Real-time STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
SDE 2 / SDE 3 – AI Infrastructure & LLM Systems Engineer
Location: Pune / Bangalore (India)
Experience: 4–8 years
Compensation: No bar for the right candidate
Bonus: Up to 10% of base
About the Company
AbleCredit builds production-grade AI systems for BFSI enterprises, reducing OPEX by up to 70% across onboarding, credit, collections, and claims.
We run our own LLMs on GPUs, operate high-concurrency inference systems, and build AI workflows that must scale reliably under real enterprise traffic.
Role Summary (What We’re Really Hiring For)
We are looking for a strong backend / systems engineer who can:
- Deploy AI models on GPUs
- Expose them via APIs
- Scale inference under high parallel load using async systems and queues
This is not a prompt-engineering or UI-AI role.
Core Responsibilities
- Deploy and operate LLMs on GPU infrastructure (cloud or on-prem).
- Run inference servers such as vLLM / TGI / SGLang / Triton or equivalents.
- Build FastAPI / gRPC APIs on top of AI models.
- Design async, queue-based execution for AI workflows (fan-out, retries, backpressure).
- Plan and reason about capacity & scaling:
- GPU count vs RPS
- batching vs latency
- cost vs throughput
- Add observability around latency, GPU usage, queue depth, failures.
- Work closely with AI researchers to productionize models safely.
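The fan-out / retries / backpressure pattern named above can be sketched with a bounded asyncio queue and a worker pool. The `infer` coroutine is a stand-in for a real call to an inference server (vLLM, TGI, etc.) and is rigged to fail once per request so the retry path actually runs:

```python
import asyncio

_attempts = {}

async def infer(req):
    # Stand-in for a GPU inference call; fails on the first attempt
    # for each request to exercise the retry logic.
    _attempts[req] = _attempts.get(req, 0) + 1
    if _attempts[req] == 1:
        raise RuntimeError("transient GPU/server error")
    return f"output:{req}"

async def worker(queue, results, max_retries=3):
    while True:
        req = await queue.get()
        for _ in range(max_retries):
            try:
                results[req] = await infer(req)
                break
            except RuntimeError:
                await asyncio.sleep(0)  # real code: exponential backoff
        queue.task_done()

async def run(requests, n_workers=4, queue_size=8):
    # Bounded queue: producers block once queue_size items are pending,
    # which is backpressure against overloading the GPU workers.
    queue = asyncio.Queue(maxsize=queue_size)
    results = {}
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(n_workers)]
    for req in requests:        # fan-out across the worker pool
        await queue.put(req)
    await queue.join()          # block until every request is processed
    for w in workers:
        w.cancel()
    return results

res = asyncio.run(run([f"r{i}" for i in range(10)]))
print(len(res))  # 10
```

In production the in-process queue would typically be replaced by Redis, Kafka, SQS, or a workflow engine like Temporal, but the shape — bounded intake, a fixed worker pool sized to GPU capacity, per-request retries — stays the same.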
Must-Have Skills
- Strong backend engineering fundamentals (distributed systems, async workflows).
- Hands-on experience running GPU workloads in production.
- Proficiency in Python (Golang acceptable).
- Experience with Docker + Kubernetes (or equivalent).
- Practical knowledge of queues / workers (Redis, Kafka, SQS, Celery, Temporal, etc.).
- Ability to reason quantitatively about performance, reliability, and cost.
Strong Signals (Recruiter Screening Clues)
Look for candidates who have:
- Personally deployed models on GPUs
- Debugged GPU memory / latency / throughput issues
- Scaled compute-heavy backends under load
- Designed async systems instead of blocking APIs
Nice to Have
- Familiarity with LangChain / LlamaIndex (as infra layers, not just usage).
- Experience with vector DBs (Qdrant, Pinecone, Weaviate).
- Prior work on multi-tenant enterprise systems.
Not a Fit If
- Only experience is calling OpenAI / Anthropic APIs.
- Primarily a prompt engineer or frontend-focused AI dev.
- No hands-on ownership of infra, scaling, or production reliability.