
About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for an all-rounder backend software engineer with exceptionally strong product-thinking skills to join our Bangalore team in person.
As part of this forward-deployed engineer (FDE) role, you will work closely with our mid-market and enterprise customers to understand their pain points, dream up new solutions, and then bring them to life for all of our customers.
You should be ready to engage with customers directly, understand their pain points, and come up with ideas that drive real ROI.
Your day-to-day will involve building new AI agents with a high degree of reliability and ensuring that customers see real, measurable value from them. Interfacing with customers and learning from them first-hand is one of the best facets of this role.
We pay well above market for the country's best talent and provide a number of excellent perks.
Requirements
You do not need AI experience to apply to this role, although we do prefer it.
We prefer candidates who have worked as a founding engineer at an early stage startup (Seed or Preseed) or a Senior Software Engineer at a Series A or B startup.
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard; it's a good problem to have :)

7+ years of experience in Python development
Good experience in microservices and API development.
Must have exposure to large-scale data.
Good to have: Gen AI experience.
Code versioning and collaboration (Git).
Knowledge of libraries for extracting data from websites (web scraping; see the sketch after this list).
Knowledge of SQL and NoSQL databases.
Familiarity with RESTful APIs.
Familiarity with cloud (Azure/AWS) technologies.
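For illustration, here is a minimal sketch of the web-scraping-and-storage work the list above refers to. The target URL, table name, and database file are assumptions made for the example, not details from the job description.
```python
# Hedged sketch: fetch page titles and persist them in SQLite.
# The URL list, database file, and table name are illustrative assumptions.
import sqlite3

import requests
from bs4 import BeautifulSoup

URLS = ["https://example.com"]  # hypothetical scrape targets

def fetch_title(url: str) -> str:
    """Download a page and return its <title> text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.title.get_text(strip=True) if soup.title else ""

def main() -> None:
    conn = sqlite3.connect("pages.db")
    conn.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, title TEXT)")
    for url in URLS:
        conn.execute(
            "INSERT OR REPLACE INTO pages (url, title) VALUES (?, ?)",
            (url, fetch_title(url)),
        )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    main()
```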
About Wissen Technology:
• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who are graduates of premier institutions such as Wharton, MIT, the IITs, IIMs, and NITs, and who bring rich work experience from some of the biggest companies in the world.
• Wissen Technology has grown its revenues by 400% in its first five years without any external funding or investment.
• Globally present, with offices in the US, India, the UK, Australia, Mexico, and Canada.
• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
• Wissen Technology has been certified as a Great Place to Work®.
• Wissen Technology was named among the Top 20 AI/ML vendors by CIO Insider in 2020.
• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. They include the likes of Morgan Stanley, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Website : www.wissen.com

About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for a Software Engineer to join our engineering team (currently 5 teammates): an all-rounder with particularly exceptional backend engineering skills.
Our ideal candidate has built AI agents at the orchestration layer level and leveraged clever engineering techniques to improve latency & reliability for complex workflows.
We pay well above market for the country's best talent and provide a number of excellent perks.
Responsibilities
In this role, you will have the opportunity to build state-of-the-art AI agents, and learn what it takes to build an industry-leading multimodal, multi-agent suite.
You'll wear many hats. Your responsibilities will fall into 3 categories:
AI Engineering
- Develop AI agents with a high bar for reliability and performance (a minimal retry/backoff sketch follows this list).
- Build SOTA LLM-powered tools for providers, practices, and patients.
- Architect our data annotation, fine-tuning, and RLHF workflows.
- Live on the bleeding edge, ensuring that we ship the most cutting-edge agents each week as the industry evolves.
Full-Stack Engineering (80% backend, 20% frontend)
- Lead the team in designing scalable architecture to support performant web applications.
- Develop features end-to-end for our web applications (TypeScript, Node.js, Python, etc.).
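For illustration, a minimal sketch of the kind of reliability guardrail mentioned in the AI Engineering bullets: retrying a model call with exponential backoff. The call_model function is a hypothetical stand-in for whichever LLM client the team actually uses.
```python
# Hedged sketch: exponential-backoff retries around an LLM call.
# call_model and ModelCallError are hypothetical stand-ins for a real client/SDK.
import random
import time

class ModelCallError(Exception):
    """Raised by the (hypothetical) client on transient failures."""

def call_model(prompt: str) -> str:
    """Hypothetical LLM client call; replace with the real SDK."""
    return f"(stub answer for: {prompt})"

def reliable_call(prompt: str, max_attempts: int = 4, base_delay: float = 0.5) -> str:
    """Retry transient failures with exponential backoff and a little jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_model(prompt)
        except ModelCallError:
            if attempt == max_attempts:
                raise
            # Sleep 0.5s, 1s, 2s, ... plus jitter before the next attempt.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
    raise RuntimeError("unreachable")

print(reliable_call("Summarize today's patient intake queue"))
```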
Requirements
You do not need AI experience to apply for this role. While we prefer candidates with some AI experience, we have previously hired engineers without any who demonstrated they are very fast learners.
We prefer candidates who have worked as a founding engineer at an early stage startup (Seed or Preseed) or a Senior Software Engineer at a Series A or B startup.
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard; it's a good problem to have :)

Designation: Python Developer
Experienced in AI/ML
Location: Turbhe, Navi Mumbai
CTC: 6-12 LPA
Years of Experience: 2-5 years
At Arcitech.ai, we’re redefining the future with AI-powered software solutions across education, recruitment, marketplaces, and beyond. We’re looking for a Python Developer passionate about AI/ML, who’s ready to work on scalable, cloud-native platforms and help build the next generation of intelligent, LLM-driven products.
💼 Your Responsibilities
AI/ML Engineering
- Develop, train, and optimize ML models using PyTorch/TensorFlow/Keras.
- Build end-to-end LLM and RAG (Retrieval-Augmented Generation) pipelines using LangChain (a framework-agnostic sketch follows this list).
- Collaborate with data scientists to convert prototypes into production-grade AI applications.
- Integrate NLP, Computer Vision, and Recommendation Systems into scalable products.
- Work with transformer-based architectures (BERT, GPT, LLaMA, etc.) for real-world AI use cases.
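For illustration, a framework-agnostic sketch of the RAG retrieval step referenced above: chunk a document, embed the chunks, and pick the top matches by cosine similarity. The embed function here is a random placeholder standing in for a real embedding model; a LangChain pipeline would play the same role in production.
```python
# Hedged, framework-agnostic RAG sketch: chunk -> embed -> retrieve top-k -> prompt.
# embed() is a placeholder; swap in a real embedding model and LLM in practice.
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks."""
    return [text[i : i + size] for i in range(0, len(text), size - overlap)]

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: one vector per text (replace with a real embedding model)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

def retrieve(query: str, chunks: list[str], vectors: np.ndarray, k: int = 3) -> list[str]:
    """Rank chunks by cosine similarity to the query embedding."""
    q = embed([query])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def build_prompt(query: str, document: str) -> str:
    """Assemble a context-stuffed prompt; pass the result to the LLM of choice."""
    pieces = chunk(document)
    context = "\n---\n".join(retrieve(query, pieces, embed(pieces)))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does the refund policy say?", "Example document text " * 200))
```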
Backend & Systems Development
- Design, develop, and maintain robust Python microservices with REST/GraphQL APIs.
- Implement real-time communication with Django Channels/WebSockets.
- Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
- Configure and manage AWS (EC2, S3, RDS, SageMaker, CloudWatch) for AI/ML workloads.
Reliability & Automation
- Develop background task queues with Celery, ensuring smart retries and monitoring (see the retry sketch after this list).
- Implement CI/CD pipelines for automated model training, testing, and deployment.
- Write automated unit & integration tests (pytest/unittest) with ≥80% coverage.
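For illustration, a minimal sketch of the Celery retry pattern referenced above. The broker URL, task body, and exception type are assumptions made for the example.
```python
# Hedged sketch: a Celery task with bounded automatic retries.
# The broker URL and the inference stub are illustrative assumptions.
from celery import Celery

app = Celery("workers", broker="redis://localhost:6379/0")  # illustrative broker URL

class TransientError(Exception):
    """Hypothetical error type for retryable failures (timeouts, rate limits)."""

def run_inference(doc_id: str) -> None:
    """Hypothetical model call; replace with the real inference code."""

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def score_document(self, doc_id: str) -> None:
    """Score one document; re-queue on transient failures, up to max_retries."""
    try:
        run_inference(doc_id)
    except TransientError as exc:
        # Celery re-enqueues the task and raises after max_retries is exhausted.
        raise self.retry(exc=exc)
```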
Collaboration
- Contribute to MLOps best practices and mentor peers in LangChain/AI integration.
- Participate in tech talks, code reviews, and AI learning sessions within the team.
🎓 Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI/ML, or related field.
- 2–5 years of experience in Python development with strong AI/ML exposure.
- Hands-on experience with LangChain for building LLM-powered workflows and RAG systems.
- Deep learning experience with PyTorch or TensorFlow.
- Experience deploying ML models and LLM apps into production systems.
- Familiarity with REST/GraphQL APIs and cloud platforms (AWS/Azure/GCP).
- Skilled in Git workflows, automated testing, and CI/CD practices.
🌟 Nice to Have
- Experience with vector databases (Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
- Knowledge of LLM fine-tuning, prompt engineering, and evaluation frameworks.
- Familiarity with Airflow/Prefect/Dagster for data and model pipelines.
- Background in statistics, optimization, or applied mathematics.
- Contributions to AI/ML or LangChain open-source projects.
- Experience with model monitoring and drift detection in production.
🎁 Why Join Us
- Competitive compensation and benefits 💰
- Work on cutting-edge LLM and AI/ML applications 🤖
- A collaborative, innovation-driven work culture 📚
- Opportunities to grow into AI/ML leadership roles 🚀

Job Title: Python Backend Engineer (with MLOps & LLMOps Experience)
Experience: 4 to 8 Years
Location: Gurgaon, Sector 43
Employment Type: Full-time
Job Summary:
We are looking for an experienced Python Backend Engineer with a strong background in FastAPI, Django, and hands-on exposure to MLOps and LLMOps practices.
The ideal candidate will be responsible for building scalable backend solutions, integrating AI/ML models into production environments, and implementing efficient pipelines for machine learning and large language model operations.
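For illustration, a minimal sketch of the kind of backend endpoint this role involves: a FastAPI service exposing a versioned prediction API. The model, version tag, and request schema are placeholders, not the company's actual stack.
```python
# Hedged sketch: a FastAPI service with a versioned, health-checked prediction endpoint.
# MODEL_VERSION and the predict() stub are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

MODEL_VERSION = "2024-01-demo"  # hypothetical version tag

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float
    version: str

def predict(features: list[float]) -> float:
    """Placeholder for a real model; replace with a loaded artifact."""
    return sum(features) / max(len(features), 1)

@app.get("/health")
def health() -> dict:
    return {"status": "ok", "version": MODEL_VERSION}

@app.post("/predict", response_model=PredictResponse)
def predict_endpoint(req: PredictRequest) -> PredictResponse:
    return PredictResponse(score=predict(req.features), version=MODEL_VERSION)
```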
Mandatory Skills: Python, FastAPI, Django, MLOps, LLMOps, REST API development, Docker, Kubernetes, Cloud (AWS/Azure/GCP), CI/CD.
Key Responsibilities:
- Develop, optimize, and maintain backend services using Python (FastAPI, Django).
- Design and implement API endpoints for high-performance and secure data exchange.
- Collaborate with data science teams to deploy ML/LLM models into production using MLOps/LLMOps best practices.
- Build and manage CI/CD pipelines for ML models and ensure seamless integration with backend systems.
- Implement model monitoring, versioning, and retraining workflows for machine learning and large language models.
- Optimize backend performance for scalability and reliability in AI-driven applications.
- Work with Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP) for deployment and orchestration.
- Ensure best practices in code quality, testing, and security for all backend and model deployment workflows.
Required Skills & Qualifications:
- 4 to 8 years of experience as a Backend Engineer with strong expertise in Python.
- Proficient in FastAPI and Django frameworks for API and backend development.
- Hands-on experience with MLOps and LLMOps workflows (model deployment, monitoring, scaling).
- Familiarity with machine learning model lifecycle and integration into production systems.
- Strong knowledge of RESTful APIs, microservices architecture, and asynchronous programming.
- Experience with Docker, Kubernetes, and cloud environments (AWS, Azure, or GCP).
- Exposure to CI/CD pipelines and DevOps tools.
- Good understanding of Git, version control, and testing frameworks.
Nice to Have:
- Experience with LangChain, Hugging Face, or similar LLM frameworks.
- Knowledge of data pipelines, feature engineering, and ML frameworks (TensorFlow, PyTorch, etc.).
- Understanding of vector databases (Pinecone, Chroma, etc.).
Education:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
About Verix
Verix is a platform for verification, engagement, and trust in the age of AI. Powered by blockchain and agentic AI, Verix enables global organizations—such as Netflix, Amdocs, The Stevie Awards, Room to Read, and UNICEF—to seamlessly design, issue, and manage digital credentials for learning, enterprise skilling, continuing education, compliance, membership, and events.
With dynamic credentials that reflect recipient growth over time, modern design templates, and attached rewards, Verix empowers enterprises to drive engagement while building trust and community.
Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), Verix is backed by Polygon Ventures, Micron Ventures, FalconX, and leading angels including Randi Zuckerberg and Harsh Jain.
What is OptimizeGEO?
OptimizeGEO is Verix’s flagship product that helps brands stay visible and discoverable in AI-powered answers.
Unlike traditional SEO, which optimizes for keywords and rankings, OptimizeGEO operationalizes AEO/GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).
Role Overview
We are hiring a Backend Engineer to build the data and services layer that powers OptimizeGEO’s analytics, scoring, and reporting.
This role partners closely with our SEO/AEO domain experts and data teams to translate frameworks—gap analysis, share-of-voice, entity/knowledge-graph coverage, trust signals—into scalable backend systems and APIs.
You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.
Key Responsibilities
- Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
- Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms (a minimal sketch follows this list).
- Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
- Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling.
- Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
- Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
- Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
- Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.
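For illustration, a minimal sketch of a versioned, testable metric as described above. The share-of-voice definition used here (brand mentions over all tracked-brand mentions in sampled AI answers) is an assumption made for the example, not OptimizeGEO's actual scoring algorithm.
```python
# Hedged sketch: share-of-voice as a small, versioned, unit-testable function.
# The metric definition is an illustrative assumption, not the product's real scoring.
METRIC_VERSION = "sov-0.1"

def share_of_voice(answers: list[str], brand: str, competitors: list[str]) -> float:
    """Fraction of tracked-brand mentions in sampled AI answers that cite `brand`."""
    brands = [brand] + competitors
    counts = {b: sum(a.lower().count(b.lower()) for a in answers) for b in brands}
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# Example: 2 of 3 tracked mentions name "Acme" -> roughly 0.67
sample = ["Acme and Globex both offer this.", "Acme is often cited here."]
print(METRIC_VERSION, share_of_voice(sample, "Acme", ["Globex"]))
```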
Minimum Qualifications
- 4–8 years of experience building backend systems in production (startups or high-growth product teams preferred).
- Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
- Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
- Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
- Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.
Preferred Qualifications (Nice to Have)
- Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
- Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
- Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
- Background in digital PR/reputation signals and local/international SEO considerations.
- Comfort working with analysts to co-define KPIs and build executive-level reporting.
What Success Looks Like (First 6 Months)
- Ship a reliable data ingestion and scoring service with clear SLAs and automated validation.
- Stand up share-of-voice and entity-coverage metrics that correlate with customer outcomes.
- Deliver exportable executive reports and dashboard APIs consumed by the product team.
- Establish observability baselines (dashboards & alerts) and a lightweight on-call rotation.
Tooling & Stack (Illustrative)
- Runtime: Python / TypeScript / Go
- Data: Postgres / BigQuery + object storage (S3 / GCS)
- Pipelines: Airflow / Prefect, Kafka / PubSub
- Infra: AWS / GCP, Docker, Kubernetes, Terraform
- Observability: OpenTelemetry, Prometheus / Grafana, ELK / Cloud Logging
- Collab: GitHub, Linear / Jira, Notion, Looker / Metabase
Working Model
- Hybrid-remote within India with periodic in-person collaboration (Bengaluru or mutually agreed hubs).
- Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.
Equal Opportunity
Verix is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Certa (getcerta.com) is a Silicon Valley-based startup automating the vendor, supplier, and stakeholder onboarding processes for businesses globally. Serving Fortune 500 and Fortune 1000 clients, Certa's engineering team tackles expansive and deeply technical challenges, driving innovation in business processes across industries.
Location: Remote (India only)
Role Overview
We are looking for an experienced and innovative AI Engineer to join our team and push the boundaries of large language model (LLM) technology to drive significant impact in our products and services. In this role, you will leverage your strong software engineering skills (particularly in Python and cloud-based backend systems) and your hands-on experience with cutting-edge AI (LLMs, prompt engineering, Retrieval-Augmented Generation, etc.) to build intelligent features for enterprise (B2B SaaS) products. As an AI Engineer on our team, you will design and deploy AI-driven solutions (such as LLM-powered agents and context-aware systems) from prototype to production, iterating quickly and staying up-to-date with the latest developments in the AI space. This is a unique opportunity to be at the forefront of a new class of engineering roles that blend robust backend system design with state-of-the-art AI integration, shaping the future of user experiences in our domain.
Key Responsibilities
- Design and Develop AI Features: Lead the design, development, and deployment of generative AI capabilities and LLM-powered services that deliver engaging, human-centric user experiences. This includes building features like intelligent chatbots, AI-driven recommendations, and workflow automation into our products.
- RAG Pipeline Implementation: Design, implement, and continuously optimize end-to-end RAG (Retrieval-Augmented Generation) pipelines, including data ingestion and parsing, document chunking, vector indexing, and prompt engineering strategies to provide relevant context to LLMs. Ensure that our AI systems can efficiently retrieve and use information from knowledge bases to enhance answer accuracy.
- Build LLM-Based Agents: Develop and refine LLM-based agentic systems that can autonomously perform complex tasks or assist users in multi-step workflows. Incorporate tools for planning, memory, and context management (e.g. long-term memory stores, tool use via APIs) to extend the capabilities of our AI agents. Experiment with emerging best practices in agent design (planning algorithms, self-healing loops, etc.) to make these agents more reliable and effective (a minimal tool-use loop sketch follows this list).
- Integrate with Product Teams: Work closely with product managers, designers, and other engineers to integrate AI capabilities seamlessly into our products, ensuring that features align with user needs and business goals. You’ll collaborate cross-functionally to translate product requirements into AI solutions, and iterate based on feedback and testing.
- System Evaluation & Iteration: Rigorously evaluate the performance of AI models and pipelines using appropriate metrics, including accuracy/correctness, response latency, and avoidance of errors like hallucinations. Conduct thorough testing and use user feedback to drive continuous improvements in model prompts, parameters, and data processing.
- Code Quality & Best Practices: Write clean, maintainable, and testable code while following software engineering best practices. Ensure that the AI components are well-structured, scalable, and fit into our overall system architecture. Implement monitoring and logging for AI services to track performance and reliability in production.
- Mentorship and Knowledge Sharing: Provide technical guidance and mentorship to team members on best practices in generative AI development. Help educate and upskill colleagues (e.g. through code reviews, tech talks) in areas like prompt engineering, using our AI toolchain, and evaluating model outputs. Foster a culture of continuous learning and experimentation with new AI technologies.
- Research & Innovation: Continuously explore the latest advancements in AI/ML (new model releases, libraries, techniques) and assess their potential value for our products. You will have the freedom to prototype innovative solutions, for example trying new fine-tuning methods or integrating new APIs, and bring those into our platform if they prove beneficial. Staying current with emerging research and industry trends is a key part of this role.
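For illustration, a minimal sketch of the agent loop pattern referenced in the responsibilities: plan, call a tool, observe, answer. The tool and the decide() policy are hypothetical stand-ins; in a real agent the planning step would be driven by an LLM.
```python
# Hedged sketch: a minimal tool-using agent loop (plan -> act -> observe -> answer).
# The tool registry and decide() policy are hypothetical placeholders.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    # Hypothetical tool: in a real system this would call an internal API.
    "lookup_vendor": lambda name: f"{name}: onboarding status = pending review",
}

def decide(question: str, observations: list[str]) -> tuple[str, str]:
    """Placeholder planner: a real agent would ask the LLM which tool to call next."""
    if not observations:
        return "lookup_vendor", question
    return "final_answer", observations[-1]

def run_agent(question: str, max_steps: int = 3) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action, arg = decide(question, observations)
        if action == "final_answer":
            return arg
        observations.append(TOOLS[action](arg))
    return "Could not answer within the step budget."

print(run_agent("Acme Corp"))
```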
Required Skills and Qualifications
- Software Engineering Experience: 3+ years (Mid-level) / 5+ years (Senior) of professional software engineering experience. Rock-solid backend development skills with expertise in Python and designing scalable APIs/services. Experience building and deploying systems on AWS or similar cloud platforms is required (including familiarity with cloud infrastructure and distributed computing). Strong system design abilities with a track record of designing robust, maintainable architectures is a must.
- LLM/AI Application Experience: Proven experience building applications that leverage large language models or generative AI. You have spent time prompting and integrating language models into real products (e.g. building chatbots, semantic search, AI assistants) and understand their behavior and failure modes. Demonstrable projects or work in LLM-powered application development, especially using techniques like RAG or building LLM-driven agents, will make you stand out.
- AI/ML Knowledge: Prioritize applied LLM product engineering over traditional ML pipelines. Strong chops in prompt design, function calling/structured outputs, tool use, context-window management, and the RAG levers that matter (document parsing/chunking, metadata, re-ranking, embedding/model selection). Make pragmatic model/provider choices (hosted vs. open) using latency, cost, context length, safety, and rate-limit trade-offs; know when simple prompting/config changes beat fine-tuning, and when lightweight adapters or fine-tuning are justified. Design evaluation that mirrors product outcomes: golden sets, automated prompt unit tests, offline checks, and online A/Bs for helpfulness/correctness/safety (a minimal golden-set test sketch follows this list); track production proxies like retrieval recall and hallucination rate. Solid understanding of embeddings, tokenization, and vector search fundamentals, plus working literacy in transformers to reason about capabilities/limits. Familiarity with agent patterns (planning, tool orchestration, memory) and guardrail/safety techniques.
- Tooling & Frameworks: Hands-on experience with the AI/LLM tech stack and libraries. This includes proficiency with LLM orchestration libraries such as LangChain, LlamaIndex, etc., for building prompt pipelines. Experience working with vector databases or semantic search (e.g. Pinecone, Chroma, Milvus) to enable retrieval-augmented generation is highly desired.
- Cloud & DevOps: Own the productionization of LLM/RAG-backed services as high-availability, low-latency backends. Expertise in AWS (e.g., ECS/EKS/Lambda, API Gateway/ALB, S3, DynamoDB/Postgres, OpenSearch, SQS/SNS/Step Functions, Secrets Manager/KMS, VPC) and infrastructure-as-code (Terraform/CDK). You’re comfortable shipping stateless APIs, event-driven pipelines, and retrieval infrastructure (vector stores, caches) with strong observability (p95/p99 latency, distributed tracing, retries/circuit breakers), security (PII handling, encryption, least-privilege IAM, private networking to model endpoints), and progressive delivery (blue/green, canary, feature flags). Build prompt/config rollout workflows, manage token/cost budgets, apply caching/batching/streaming strategies, and implement graceful fallbacks across multiple model providers.
- Product and Domain Experience: Experience building enterprise (B2B SaaS) products is a strong plus. This means you understand considerations like user experience, scalability, security, and compliance. Past exposure to these types of products will help you design AI solutions that cater to a range of end-users.
- Strong Communication & Collaboration: Excellent interpersonal and communication skills, with an ability to explain complex AI concepts to non-technical stakeholders and create clarity from ambiguity. You work effectively in cross-functional teams and can coordinate with product, design, and ops teams to drive projects forward.
- Problem-Solving & Autonomy: Self-motivated and able to manage multiple priorities in a fast-paced environment. You have a demonstrated ability to troubleshoot complex systems, debug issues across the stack, and quickly prototype solutions. A “figure it out” attitude and creative approach to overcoming technical challenges are key.
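For illustration, a minimal sketch of the golden-set prompt unit tests mentioned under AI/ML Knowledge. The generate function and the golden cases are placeholders for the real prompt, model, and evaluation data.
```python
# Hedged sketch: a golden-set "prompt unit test" in pytest style.
# generate() is a hypothetical stand-in for the deployed prompt + model.
import pytest

GOLDEN_SET = [
    # (question, substrings the answer must contain) -- illustrative cases only
    ("What currency does Japan use?", ["yen"]),
    ("Name the largest planet in our solar system.", ["jupiter"]),
]

def generate(question: str) -> str:
    """Placeholder for the real LLM call behind the product prompt."""
    canned = {
        "What currency does Japan use?": "Japan uses the yen.",
        "Name the largest planet in our solar system.": "Jupiter is the largest planet.",
    }
    return canned[question]

@pytest.mark.parametrize("question,required", GOLDEN_SET)
def test_answer_contains_required_facts(question: str, required: list[str]) -> None:
    answer = generate(question).lower()
    assert all(fact in answer for fact in required)
```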
Preferred (Bonus) Qualifications
- Multi-Modal and Agents: Experience developing complex agentic systems using LLMs (for example, multi-agent systems or integrating LLMs with tool networks) is a bonus. Similarly, knowledge of multi-modal AI (combining text with vision or other data) could be useful as we expand our product capabilities.
- Startup/Agile Environment: Prior experience in an early-stage startup or similarly fast-paced environment where you’ve worn multiple hats and adapted to rapid changes. This role will involve quick iteration and evolving requirements, so comfort with ambiguity and agility is valued.
- Community/Research Involvement: Active participation in the AI community (open-source contributions, research publications, or blogging about AI advancements) is appreciated. It demonstrates passion and keeps you at the cutting edge. If you have published research or have a portfolio of AI side projects, let us know!
Perks of working at Certa.ai:
- Best-in-class compensation
- Fully-remote work with flexible schedules
- Continuous learning
- Massive opportunities for growth
- Yearly offsite
- Quarterly hacker house
- Comprehensive health coverage
- Parental Leave
- Latest Tech Workstation
- Rockstar team to work with (we mean it!)
About us
At Reltio®, we believe data should fuel business success. Reltio’s AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands—across multiple industries around the globe—rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk and drive growth.
About the Role
We are seeking a Senior Backend Engineer with a strong foundation in Java, solid Python capabilities, and hands-on experience building or supporting Agentic AI applications that integrate with Large Language Models (LLMs). This role blends traditional backend engineering expertise with next-generation AI integration, requiring not just system design skills, but also creativity in prompt engineering and working with data-rich environments.
You will contribute to the design and implementation of intelligent backend services that coordinate LLMs, agents, and data workflows to support enterprise-grade automation and decision-making systems.
Key Responsibilities
- Develop robust, scalable Java-based backend systems that enable the orchestration of Agentic AI workflows.
- Integrate LLMs (e.g., OpenAI, Mistral, Claude, LLaMA) into backend pipelines to power autonomous or semi-autonomous decision-making.
- Work closely with AI/ML teams to design agent architectures and LLM toolchains (using frameworks like LangGraph, Crew AI, AutoGen, etc.).
- Implement services and APIs that manage prompt chaining, tool invocation, and contextual memory for agents (a minimal memory-trimming sketch follows this list).
- Collaborate with data engineers to ensure clean, efficient, and real-time access to structured and semi-structured data.
- Apply prompt engineering best practices to improve agent behavior, task accuracy, and adaptability.
- Monitor and optimize service performance, reliability, and scalability in production environments.
- Contribute to code reviews, mentoring, and best practices for hybrid backend/AI development.
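For illustration, a minimal sketch of the contextual-memory management referenced above: keep recent conversation turns within a budget by dropping the oldest ones. Word counts approximate tokens here; a real service would use the model's tokenizer, and the budget value is an assumption for the example.
```python
# Hedged sketch: conversational memory trimmed to a context budget.
# Word counts stand in for tokens; the budget value is an illustrative assumption.
class ConversationMemory:
    def __init__(self, budget: int = 15):
        self.budget = budget
        self.turns: list[str] = []

    def add(self, role: str, text: str) -> None:
        self.turns.append(f"{role}: {text}")
        # Drop the oldest turns until the retained context fits the budget.
        while sum(len(t.split()) for t in self.turns) > self.budget and len(self.turns) > 1:
            self.turns.pop(0)

    def as_context(self) -> str:
        """Concatenate retained turns into the prompt context for the next LLM call."""
        return "\n".join(self.turns)

memory = ConversationMemory(budget=15)
memory.add("user", "Summarize vendor ACME's latest records please")
memory.add("assistant", "ACME has three open records awaiting review")
memory.add("user", "Which one is oldest?")
print(memory.as_context())  # the first turn has been trimmed to respect the budget
```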
Required Qualifications
- 4+ years of experience in backend engineering, with strong proficiency in Java for scalable, distributed systems.
- Practical knowledge of Python, particularly for scripting, API consumption, or integration with AI frameworks.
- Exposure to Agentic AI systems using tools such as LangGraph, Crew AI, AutoGen, or similar.
- Experience integrating or orchestrating LLMs within business workflows or intelligent assistants.
- Solid understanding of prompt engineering techniques (e.g., system messages, chaining, tool invocation prompts).
- Familiarity with data engineering concepts, including pipelines, data models, and ETL workflows.
- Strong grasp of object-oriented design, data structures, and algorithms.
Preferred Qualifications
- Experience with LLM orchestration frameworks (e.g., LangChain, LlamaIndex, RAG pipelines).
- Background in building microservices or backend systems for data-rich AI/ML platforms.
- Experience with vector databases, embedding services, or semantic search integration.
- Knowledge of cloud platforms (AWS, GCP, or Azure), containerization (Docker), and DevOps best practices.
- Prior involvement in developing LLM-powered agents, copilots, or task bots is a strong plus.
What We Offer
- Opportunity to work on cutting-edge backend and AI systems, including agentic automation and LLM innovation.
- A flexible hybrid work model in our Bangalore office.
- Competitive compensation with performance-based rewards.
- A culture that promotes engineering excellence, experimentation, and growth.
- Access to learning platforms, technical workshops, and industry-leading AI research.
Why Join Reltio?
- Support for home office setup: home office setup allowance
- Stay connected, work flexibly: mobile & internet reimbursement
- No need to pack a lunch: we’ve got you covered with a free meal
Health & Wellness:
- Comprehensive group medical insurance, including your parents, with additional top-up options
- Accidental insurance
- Life insurance
- Free online unlimited doctor consultations
- An Employee Assistance Program (EAP)
Work-Life Balance
- 36 days of annual leave, which includes 18 sick leaves and 18 earned leaves
- 26 weeks of maternity leave, 15 days of paternity leave
- Very unique to Reltio: one additional week off as a recharge week every year, globally

Job Type: Full-time
Location: Remote
Company Description
The Blue Owls Solutions specializes in delivering cutting-edge Generative AI Solutions, AI-Powered Software Development, and comprehensive Data Analytics and Engineering services. Our expertise in End-To-End ML/AI Development ensures that clients benefit from scalable and efficient AI-driven solutions tailored to their unique business needs. We create intelligent voice and text agents, chatbots, and process automation solutions, and our data analytics services provide actionable insights for strategic decision-making. Our mission is to bridge the gap between AI innovation and adoption, delivering value-driven, outcome-based solutions that empower our clients to achieve their business goals.
Role Description
We're seeking an enthusiastic Backend Developer who thrives on solving interesting challenges and building reliable, efficient applications. While basic competency in frontend (React) is sufficient, strong backend skills (Python, FastAPI, SQL, pandas) and cloud-native awareness are essential. The ideal candidate enjoys learning new tech stacks and solving problems independently.
Required Skills (In order of importance)
- Strong proficiency in Python backend development with FastAPI.
- Familiarity with data analysis using pandas, numpy, and SQL.
- Familiarity with cloud-native concepts and containerization (Docker).
- Basic React skills for frontend integration.
- Excellent problem-solving skills, adaptability, and quick learning abilities.
- Experience with version control systems (e.g., Git)
Preferred Qualifications:
- 3+ years of experience as a Backend Engineer
- Experience with PostgreSQL or other relational databases.
- Azure Cloud Experience.
- Experience writing clean, maintainable, and testable code.
- Experience in AI/ML development is a plus
Why Join Us?
- Collaborative, remote-first environment.
- Opportunities for rapid career growth and learning.
- Competitive Pay.
- Engaging projects focused on practical problem-solving.

About the Role
We are seeking an experienced Python Data Engineer with a strong foundation in API and basic UI development. This role is essential for advancing our analytics capabilities for AI products, helping us gain deeper insights into product performance and driving data-backed improvements. If you have a background in AI/ML, familiarity with large language models (LLMs), and a solid grasp of Python libraries for AI, we’d like to connect!
Key Responsibilities
• Develop Analytics Framework: Build a comprehensive analytics framework to evaluate and monitor AI product performance and business value.
• Define KPIs with Stakeholders: Collaborate with key stakeholders to establish and measure KPIs that gauge AI product maturity and impact (a minimal pandas sketch follows this list).
• Data Analysis for Actionable Insights: Dive into complex data sets to identify patterns and provide actionable insights to support product improvements.
• Data Collection & Processing: Lead data collection, cleaning, and processing to ensure high-quality, actionable data for analysis.
• Clear Reporting of Findings: Present findings to stakeholders in a clear, concise manner, emphasizing actionable insights.
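For illustration, a minimal pandas sketch of the KPI work described above. The event schema and the "AI resolution rate" KPI are assumptions made for the example.
```python
# Hedged sketch: computing a simple AI-product KPI from event data with pandas.
# The event schema and the chosen KPI are illustrative assumptions.
import pandas as pd

events = pd.DataFrame({
    "date": ["2024-05-01", "2024-05-01", "2024-05-02", "2024-05-02"],
    "session_id": ["a", "b", "c", "d"],
    "resolved_by_ai": [True, False, True, True],
})

# KPI: share of sessions resolved by the AI product, per day.
kpi = events.groupby("date")["resolved_by_ai"].mean().rename("ai_resolution_rate")
print(kpi)
```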
Required Skills
• Technical Skills:
o Proficiency in Python, including experience with key AI/ML libraries.
o Basic knowledge of UI and API development.
o Understanding of large language models (LLMs) and experience using them effectively.
• Analytical & Communication Skills:
o Strong problem-solving skills to address complex, ambiguous challenges.
o Ability to translate data insights into understandable reports for non-technical stakeholders.
o Knowledge of machine learning algorithms and frameworks to assess AI product effectiveness.
o Experience in statistical methods to interpret data and build metrics frameworks.
o Skilled in quantitative analysis to drive actionable insights.

What you’ll be doing
We are much more than our job descriptions, but here is where you will begin:
As a Senior Software Engineer, Data & ML, you’ll be:
● Architecting, designing, testing, implementing, deploying, monitoring and maintaining end-to-end backend services. You build it, you own it.
● Working with people from other teams and departments on a day-to-day basis to ensure efficient project execution, with a focus on delivering value to our members.
● Regularly aligning your team’s vision and roadmap with the target architecture within your domain to ensure the success of complex multi-domain initiatives.
● Integrating already-trained ML and GenAI models (preferably on GCP) into services.
What you’ll need:
Like us, you’ll be deeply committed to delivering impactful outcomes for customers.
What Makes You a Great Fit
● 5 years of proven work experience as a Backend Python Engineer
● Understanding of software engineering fundamentals (OOPS, SOLID, etc.)
● Hands-on experience with Python libraries like Pandas, NumPy, Scikit-learn, LangChain/LlamaIndex, etc.
● Experience with machine learning frameworks such as PyTorch, TensorFlow, or Keras, and proficiency in Python
● Hands-on experience with frameworks such as Django, FastAPI, or Flask
● Hands-on experience with MySQL, MongoDB, Redis and BigQuery (or equivalents)
● Extensive experience integrating with or creating REST APIs
● Experience with creating and maintaining CI/CD pipelines; we use GitHub Actions.
● Experience with event-driven architectures like Kafka, RabbitMQ, or equivalents (a minimal publish/consume sketch follows this list).
● Knowledge about:
o LLMs
o Vector stores/databases
o Prompt engineering
o Embeddings and their implementations
● Some hands-on experience implementing the above ML/AI topics will be preferred
● Experience with GCP/AWS services.
● You are curious about and motivated by future trends in data, AI/ML, and analytics
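For illustration, a minimal sketch of the event-driven pattern mentioned above, using the kafka-python package. The broker address, topic name, and payload schema are assumptions for the example, and a broker must be reachable for the code to connect.
```python
# Hedged sketch: event-driven hand-off between services via Kafka (kafka-python).
# Broker address, topic name, and payload schema are illustrative assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

def publish_prediction(member_id: str, score: float) -> None:
    """Publish one scoring event; assumes a broker is running at localhost:9092."""
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("member-scores", {"member_id": member_id, "score": score})
    producer.flush()

def consume_predictions() -> None:
    """Consume scoring events forever; a downstream service would react to each one."""
    consumer = KafkaConsumer(
        "member-scores",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    for message in consumer:
        print(message.value)
```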

