
About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for an all-rounder backend software engineer with exceptionally strong product thinking to join our Bangalore team in person.
As part of this forward-deployed engineer (FDE) role, you will work closely with our mid-market and enterprise customers to understand their pain points, dream up new solutions, and then bring them to life for all of our customers. You need to be ready to engage with customers directly and come up with ideas that drive real ROI.
Your day-to-day will involve building new AI agents with a high degree of reliability and ensuring that customers see real, measurable value from them. Interfacing with customers and learning from them firsthand will be one of the best facets of this role.
We pay well above market for the country's best talent and provide a number of excellent perks.
Requirements
You do not need AI experience to apply for this role, although we do prefer it.
We prefer candidates who have worked as a founding engineer at an early-stage startup (pre-seed or seed) or as a senior software engineer at a Series A or B startup.
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard; it's a good problem to have :)

Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Integrate GenAI agents into document and decisioning workflows.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments (see the sketch after this list).
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
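As a rough illustration of the model-serving responsibility above, here is a minimal sketch of packaging a document classifier behind a REST endpoint with FastAPI (named in the requirements below); the `DocumentRequest` schema, route, and keyword-based stub classifier are hypothetical, not part of any actual codebase.

```python
# Minimal model-serving sketch with FastAPI (hypothetical route and schema).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DocumentRequest(BaseModel):
    text: str  # OCR output or parsed text from a PDF

class DocumentResponse(BaseModel):
    label: str
    confidence: float

def classify(text: str) -> tuple[str, float]:
    """Stand-in for a trained document classifier (e.g. a fine-tuned transformer)."""
    label = "invoice" if "invoice" in text.lower() else "other"
    return label, 0.9

@app.post("/classify", response_model=DocumentResponse)
def classify_document(req: DocumentRequest) -> DocumentResponse:
    label, confidence = classify(req.text)
    return DocumentResponse(label=label, confidence=confidence)

# Run locally with: uvicorn main:app --reload
```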
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completion of the internship, there is a chance of a full-time offer as an AI/ML Engineer (up to INR 12 LPA).
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: base of INR 8,000, which can increase up to INR 20,000 depending on performance metrics.
Key Responsibilities
- Work hands-on with Python, LLMs, deep learning, NLP, and related tooling.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models (a short sketch follows this list).
- Engage in prompt engineering and the fine-tuning process of AI models.
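As a taste of the work described above, here is a minimal sketch of pulling an open-source model from Hugging Face and running it locally; the specific model and example inputs are illustrative choices, not requirements of the internship.

```python
# Minimal Hugging Face sketch: load an open-source model and run inference.
from transformers import pipeline

# Zero-shot classification with a publicly available model (illustrative choice).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The invoice total does not match the purchase order.",
    candidate_labels=["billing issue", "shipping issue", "general inquiry"],
)
print(result["labels"][0], result["scores"][0])
```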
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
To apply, click the link below and submit the assignment.
About FrontM
At FrontM, we are on a mission to transform the lives of frontline workforces, particularly in the maritime industry. We believe in creating a more connected, empowered, and engaged workforce by building cutting-edge solutions that merge the power of technology with human-centric needs. Our vision is to develop the world's leading digital toolbox platform for maritime operations: a platform that brings everything frontline workforces need, from digital wallets, recruitment, onboarding, healthcare, and learning to welfare and human capital management, under one seamless umbrella.
Role Summary
As a JavaScript Developer at FrontM, you will be at the forefront of developing our pioneering digital toolbox platform and the low-code developer framework that powers it. You will have the opportunity to work with the latest JavaScript frameworks, integrating advanced technologies such as Large Language Models (LLMs), AI, and the latest GPT models. You’ll also be part of our exciting roadmap to evolve our low-code platform into a no-code solution, making app development accessible to everyone. Your contributions will be pivotal in the creation and enhancement of the Maritime App Store, where innovation meets practicality, offering solutions that make a tangible difference in the lives of seafarers and other frontline workers.
Key Responsibilities
Application Development (≈60%)
- Build micro-apps using the frontm.ai framework
- Implement intent-based architectures, context and state management
- Develop responsive UIs, forms, collections, filters, and workflows
- Integrate AWS services (Lambda, S3, DynamoDB, Bedrock)
- Build conversational AI features and real-time capabilities (messaging, video, notifications)
Framework Development (≈25%)
- Enhance and extend the frontm.ai core framework
- Build reusable components, patterns, and accelerators
- Improve performance for low-bandwidth environments
- Contribute to documentation, examples, and design reviews
- Support migration towards TypeScript and future Rust components
AI-Assisted Development (≈15%)
- Use Claude Code for efficient development
- Write and refine prompts for code generation
- Review, validate, and harden AI-generated code
- Implement LLM integrations via AWS Bedrock / OpenAI
- Build AI assistants using the skills layer
Required Technical Skills
JavaScript / TypeScript
- 5+ years professional JavaScript experience
- Strong TypeScript, async patterns, modular design
- Clean code practices and modern tooling
Architecture & Cloud
- Microservices and event-driven systems
- Serverless AWS (Lambda, API Gateway, DynamoDB, S3)
- REST APIs, WebSockets, CI/CD
- Infrastructure as Code experience preferred
AI & LLMs
- Hands-on use of Claude Code or similar tools
- Prompt engineering and hallucination mitigation
- Conversational AI and NLP experience
Data
- MongoDB / MongoDB Atlas
- Caching, indexing, and multi-tenant data patterns
Desired skills
- Experience with low-bandwidth or offline-first systems
- Understanding of secure, distributed deployments
- Exposure to healthcare, logistics, or maritime systems
Experience & Education
- 5+ years software development
- 2+ years AWS serverless
- 1+ year AI-assisted development
- Degree in Computer Science or equivalent experience
Personal Attributes
- Strong problem-solving and critical thinking
- Comfortable reviewing AI-generated code
- Clear communicator and reliable team contributor
- Self-driven, detail-oriented, and adaptable
Why join FrontM?
Above-Market Compensation: We believe in rewarding talent, offering a salary package that reflects your skills and potential.
Long-Term Career Growth: As FrontM expands, so will your opportunities. We are committed to helping our team members develop their careers, offering mentorship, learning opportunities, and the chance to take on more responsibility.
Cutting-Edge Technology: Work with the latest in JavaScript frameworks, AI, LLMs, and GPT models, contributing to a platform that’s at the forefront of technological innovation.
Make a Real Impact: This is your chance to work on something that matters—to build solutions that directly improve the quality of life for thousands of people worldwide.
We are looking for a Technical Lead - GenAI with a strong foundation in Python, data analytics, data science or data engineering, and system design, along with practical experience building and deploying agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities:
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks (sketched after this list).
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
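To make the event-driven responsibility above concrete, here is a minimal sketch of publishing and consuming agent-task events with kafka-python; the library choice, topic name, and message schema are illustrative assumptions (a given deployment may equally use Event Hubs or another broker).

```python
# Minimal event-driven sketch with kafka-python (illustrative topic and schema).
import json
from kafka import KafkaProducer, KafkaConsumer

TOPIC = "agent-tasks"  # hypothetical topic name

# Producer: enqueue a task for an AI agent worker.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"task_id": "123", "prompt": "Summarize the attached document."})
producer.flush()

# Consumer: a worker picks tasks up and hands them to the agent runtime.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    group_id="agent-workers",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    task = message.value
    print("Processing task", task["task_id"])
    # ... invoke the LangChain/LangGraph agent here and persist the result ...
```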
Required Skills & Qualifications:
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with cloud infrastructure (any major provider) and deployment practices.
- Data engineering or analytics expertise is preferred, e.g. Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, and data warehouse development.
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines (see the sketch after this list).
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
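For the multi-agent and tool-calling item above, here is a minimal single-tool sketch using the OpenAI Python SDK; the model name, tool schema, and `get_account_balance` helper are hypothetical stand-ins for whatever framework (LangChain, LangGraph, LlamaIndex) a given project actually uses.

```python
# Minimal tool-calling sketch (hypothetical tool, model choice, and account data).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_account_balance(account_id: str) -> str:
    """Hypothetical business tool the agent can call."""
    return json.dumps({"account_id": account_id, "balance": 1250.75})

tools = [{
    "type": "function",
    "function": {
        "name": "get_account_balance",
        "description": "Look up the current balance for an account.",
        "parameters": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the balance of account A-42?"}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
message = response.choices[0].message

if message.tool_calls:
    call = message.tool_calls[0]
    result = get_account_balance(**json.loads(call.function.arguments))
    messages.append(message)  # the assistant turn that requested the tool call
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```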
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
Job Title: AI/ML Engineer – Voice (2–3 Years)
Location: Bengaluru (On-site)
Employment Type: Full-time
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI/ML Engineer – Voice with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents (a rough sketch follows this list).
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization).
- Implement voice understanding modules, speech intent extraction, and context tracking.
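The STT → LLM → TTS turn described above can be sketched roughly as follows; the provider calls (OpenAI Whisper and TTS here), model names, and file paths are assumptions for illustration only, since the role does not prescribe a specific voice stack.

```python
# Minimal voice-turn sketch: speech in -> LLM -> speech out (illustrative providers).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def voice_turn(audio_path: str, reply_path: str) -> str:
    # 1. Speech-to-text: transcribe the caller's audio.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

    # 2. LLM: generate a response to the transcribed utterance.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a concise banking voice assistant."},
            {"role": "user", "content": transcript.text},
        ],
    )
    reply_text = chat.choices[0].message.content

    # 3. Text-to-speech: synthesize the reply for playback to the caller.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
    speech.write_to_file(reply_path)
    return reply_text

# voice_turn("caller_utterance.wav", "assistant_reply.mp3")
```

A production pipeline would replace the file round-trip with bi-directional audio streaming and VAD, but the three stages stay the same.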
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text + voice workflows
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation (see the retrieval sketch below).
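For the RAG item above, a stripped-down retrieval step might look like the following; the embedding model and toy document set are illustrative assumptions, and a production pipeline would add chunking, a vector database, and an LLM generation step on top.

```python
# Minimal retrieval sketch for a RAG pipeline (illustrative model and documents).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

documents = [
    "KYC documents must be refreshed every two years for high-risk customers.",
    "Loan applications above the threshold require a second underwriter review.",
    "Chargebacks must be disputed within 45 days of the transaction date.",
]
doc_embeddings = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query (cosine similarity)."""
    query_embedding = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_embeddings @ query_embedding  # cosine similarity on normalized vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

# The retrieved passages would then be placed into the LLM prompt as grounding context.
print(retrieve("How often do we re-verify KYC for risky customers?"))
```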
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Awareness of financial compliance standards (a plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Realtime STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
About Corridor Platforms
Corridor Platforms is a leader in next-generation risk decisioning and responsible AI governance, empowering banks and lenders to build transparent, compliant, and data-driven solutions. Our platforms combine advanced analytics, real-time data integration, and GenAI to support complex financial decision workflows for regulated industries.
Role Overview
As a Backend Engineer at Corridor Platforms, you will:
- Architect, develop, and maintain backend components for our Risk Decisioning Platform.
- Build and orchestrate scalable backend services that automate, optimize, and monitor high-value credit and risk decisions in real time.
- Integrate with ORM layers such as SQLAlchemy and with multiple RDBMS solutions (Postgres, MySQL, Oracle, MSSQL, etc.) to ensure data integrity, scalability, and compliance (see the sketch after this list).
- Collaborate closely with the product team, data scientists, and QA teams to create extensible APIs, workflow automation, and AI governance features.
- Architect workflows for privacy, auditability, versioned traceability, and role-based access control, ensuring adherence to regulatory frameworks.
- Take ownership from requirements to deployment, seeing your code deliver real impact in the lives of customers and end users.
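To give a flavour of the ORM and auditability work above, here is a minimal SQLAlchemy sketch of a versioned decision record; the table name, columns, and SQLite connection string are illustrative assumptions, not Corridor's actual schema.

```python
# Minimal SQLAlchemy sketch: a versioned, auditable decision record (illustrative schema).
from datetime import datetime, timezone
from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class CreditDecision(Base):
    __tablename__ = "credit_decisions"

    id = Column(Integer, primary_key=True)
    application_id = Column(String, nullable=False, index=True)
    decision = Column(String, nullable=False)              # e.g. "approve" / "decline"
    version = Column(Integer, nullable=False, default=1)   # versioned traceability
    decided_by = Column(String, nullable=False)            # user or service identity, for audit
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))

# SQLite keeps the sketch self-contained; production would point at Postgres, Oracle, etc.
engine = create_engine("sqlite:///decisions.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(CreditDecision(application_id="APP-1001", decision="approve", decided_by="underwriter-7"))
    session.commit()
```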
Technical Skills
- Languages: Python 3.9+, SQL, JavaScript/TypeScript, Angular
- Frameworks: Flask, SQLAlchemy, Celery, Marshmallow, Apache Spark
- Databases: PostgreSQL, Oracle, SQL Server, Redis
- Tools: pytest, Docker, Git, Nx
- Cloud: Experience with AWS, Azure, or GCP preferred
- Monitoring: Familiarity with OpenTelemetry and logging frameworks
Why Join Us?
- Cutting-Edge Tech: Work hands-on with the latest AI, cloud-native workflows, and big data tools—all within a single compliant platform.
- End-to-End Impact: Contribute to mission-critical backend systems, from core data models to live production decision services.
- Innovation at Scale: Engineer solutions that process vast data volumes, helping financial institutions innovate safely and effectively.
- Mission-Driven: Join a passionate team advancing fair, transparent, and compliant risk decisioning at the forefront of fintech and AI governance.
What We’re Looking For
- Proficiency in Python, SQLAlchemy (or similar ORM), and SQL databases.
- Experience developing and maintaining scalable backend services, including API, data orchestration, ML workflows, and workflow automation.
- Solid understanding of data modeling, distributed systems, and backend architecture for regulated environments.
- Curiosity and drive to work at the intersection of AI/ML, fintech, and regulatory technology.
- Experience mentoring and guiding junior developers.
Ready to build backends that shape the future of decision intelligence and responsible AI?
Apply now and become part of the innovation at Corridor Platforms!
We're seeking a Generalist Software Engineer with a passion for the art and science of building software. The ideal candidate excels in technical execution and possesses the versatility to quickly adapt to new technologies. You'll be expected to deliver high-quality solutions across various tech stacks while maintaining a commitment to continuous learning and improvement.
You Would Be a Good Fit If
- You're a back-end developer with strong expertise in at least one programming language and have experience building and deploying production applications.
- You excel at problem-solving and apply first-principle thinking to complex challenges
- You communicate effectively, both in writing and verbally
- You're passionate about crafting high-quality, pragmatic software solutions
- You're self-driven, adaptable, and quick to learn new technologies
- You're excited to help shape both the code and the organization
You Would Not Be a Good Fit If You
- Prefer traditional office settings over remote work
- Require constant supervision or struggle with self-motivation
- View software development as just a job rather than a craft
- Prefer sticking to a single tech stack or strongly identify with specific technologies
- Specialize exclusively in frontend development or DevOps
- Seek a purely managerial position without hands-on coding responsibilities
What you’ll be doing
We are much more than our job descriptions, but here is where you will begin.
As a Senior Software Engineer – Data & ML, you will:
● Architect, design, test, implement, deploy, monitor and maintain end-to-end backend services. You build it, you own it.
● Work with people from other teams and departments on a day-to-day basis to ensure efficient project execution with a focus on delivering value to our members.
● Regularly align your team's vision and roadmap with the target architecture within your domain and ensure the success of complex multi-domain initiatives.
● Integrate already-trained ML and GenAI models (preferably on GCP) into services (see the sketch below).
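A minimal sketch of that last integration step, assuming a model already deployed to a Vertex AI endpoint on GCP; the project, region, endpoint ID, and feature names are placeholders, not real resources.

```python
# Minimal sketch: call an already-deployed Vertex AI model from a backend service.
# Project, region, endpoint ID, and the instance schema below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

endpoint = aiplatform.Endpoint(
    "projects/my-gcp-project/locations/us-central1/endpoints/1234567890"
)

def score_member(features: dict) -> float:
    """Send one feature vector to the deployed model and return its prediction."""
    response = endpoint.predict(instances=[features])
    return response.predictions[0]

# print(score_member({"tenure_months": 18, "monthly_activity": 42}))
```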
What you'll need:
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
What Makes You a Great Fit
● 5 years of proven work experience as a Backend Python Engineer
● Understanding of software engineering fundamentals (OOP, SOLID, etc.)
● Hands-on experience with Python libraries like Pandas, NumPy, scikit-learn, LangChain/LlamaIndex, etc.
● Experience with machine learning frameworks such as PyTorch, TensorFlow, or Keras, and proficiency in Python
● Hands-on experience with frameworks such as Django, FastAPI, or Flask
● Hands-on experience with MySQL, MongoDB, Redis and BigQuery (or equivalents)
● Extensive experience integrating with or creating REST APIs
● Experience with creating and maintaining CI/CD pipelines (we use GitHub Actions).
● Experience with event-driven architectures like Kafka, RabbitMQ, or equivalents.
● Knowledge of:
  o LLMs
  o Vector stores/databases
  o Prompt engineering
  o Embeddings and their implementations
● Some hands-on experience implementing the above ML/AI topics is preferred
● Experience with GCP/AWS services.
● You are curious about and motivated by future trends in data, AI/ML, and analytics