

Quantiphi
https://quantiphi.com
About
Quantiphi is an award-winning AI-first digital engineering company driven by the desire to reimagine and realize transformational opportunities at the heart of the business. Since its inception in 2013, Quantiphi has solved the toughest and most complex business problems by combining deep industry experience, disciplined cloud and data-engineering practices, and cutting-edge artificial intelligence research to achieve accelerated and quantifiable business results.
Locations: Bengaluru, Mumbai, and Trivandrum
Jobs at Quantiphi
Role & Responsibilities
- Design, develop, and deliver automation solutions to enhance platform functionality and reliability.
- Deploy, manage, and maintain Azure cloud infrastructure ensuring high availability, scalability, and security.
- Champion and implement Infrastructure as Code (IaC) practices using Terraform.
- Build and maintain containerized environments using Kubernetes (AKS).
- Develop self-service, self-healing, monitoring, and alerting systems for cloud platforms.
- Automate development, testing, and deployment workflows using CI/CD pipelines.
- Integrate DevOps tools such as Git, Jenkins/Azure DevOps, SonarQube, Artifactory, and Docker to streamline delivery pipelines.
- Ensure platform observability through monitoring, logging, and alerting frameworks.
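The self-healing and alerting systems named above follow a simple probe-classify-remediate loop. The sketch below is an illustrative toy, not part of the role description: the status thresholds, latency budget, and remediation names are all assumptions.

```python
# Minimal sketch of a health-check / self-healing decision loop.
# Thresholds and remediation names are illustrative assumptions.

def classify_health(status_code: int, latency_ms: float,
                    latency_budget_ms: float = 500.0) -> str:
    """Classify one probe result as 'healthy', 'degraded', or 'unhealthy'."""
    if status_code >= 500:
        return "unhealthy"
    if status_code >= 400 or latency_ms > latency_budget_ms:
        return "degraded"
    return "healthy"

def remediation_for(state: str) -> str:
    """Map a health state to a (hypothetical) remediation action."""
    return {
        "healthy": "none",
        "degraded": "alert",          # raise an alert / page on-call
        "unhealthy": "restart-pod",   # e.g. delete the pod so AKS reschedules it
    }[state]

if __name__ == "__main__":
    for code, lat in [(200, 120.0), (200, 900.0), (503, 50.0)]:
        state = classify_health(code, lat)
        print(code, lat, state, remediation_for(state))
```

A production version would probe real endpoints and drive remediation through the Kubernetes API rather than printing decisions.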
Requirements
- 6+ years of experience in Cloud / DevOps / Platform Engineering roles.
- Strong hands-on experience with Microsoft Azure cloud infrastructure.
- Experience working with Azure services across compute, networking, storage, identity, and messaging.
- Expertise in container orchestration using Kubernetes (AKS).
- Strong experience implementing Infrastructure as Code using Terraform.
- Familiarity with cloud-native and microservices architecture patterns.
- Experience with relational and NoSQL databases such as PostgreSQL or Cassandra.
Additional Skills
- Strong Linux administration and troubleshooting skills.
- Programming or scripting experience in Bash, Python, Java, or similar languages.
- Hands-on experience with CI/CD tools such as Jenkins, Git, Maven, or Azure DevOps.
- Experience managing multi-region or high-availability cloud environments.
- Familiarity with Agile / Scrum / DevOps practices and collaboration tools.
AI Engineering Manager
We are looking for a highly experienced and technically strong AI Engineering Manager to lead and mentor a team of Machine Learning and AI Engineers. This role will drive the execution, delivery, and operational excellence of enterprise AI/ML products and platforms within the Life Insurance and Financial Services sector.
The role operates primarily in a GCP cloud environment and requires bridging architectural design with hands-on engineering execution. The candidate will manage engineering workloads, guide technical decisions, ensure high code quality, and drive project timelines for enterprise-grade AI solutions.
Key Responsibilities
Team Leadership & Project Management
- Own end-to-end delivery of AI/ML projects including sprint planning, backlog grooming, workload management, and resource allocation.
- Provide technical guidance and mentorship to AI/ML engineers, including code reviews and best practices for MLOps, software engineering, and cloud infrastructure.
- Act as the primary technical point of contact for product managers, architects, and business stakeholders.
- Define and enforce engineering standards for development, testing, CI/CD, and monitoring of AI services.
- Recruit, onboard, mentor, and conduct performance reviews for the AI engineering team.
- Collaborate with Data Engineering, DevOps, and other teams to integrate AI models into enterprise systems and data pipelines.
- Identify and manage technical risks, dependencies, and delivery blockers to maintain project velocity.
Technical Delivery
- Implement and operationalize MLOps pipelines for model training, versioning, deployment, monitoring, and explainability.
- Guide teams in leveraging AI platform capabilities such as RAG pipelines, LLM gateways, and vector databases to build business use cases.
- Ensure security, scalability, and performance of production AI services and underlying cloud infrastructure.
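The model versioning and promotion flow that an MLOps pipeline automates can be sketched with a toy in-memory registry; a real pipeline would use a managed registry (for example, Vertex AI Model Registry on GCP), and all names and metrics below are illustrative.

```python
# Toy in-memory model registry illustrating the version / stage / promote
# flow an MLOps pipeline automates. Purely a sketch; names are illustrative.

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name: str, metrics: dict) -> int:
        """Record a new model version (staged by default); returns version number."""
        self._versions.setdefault(name, []).append(
            {"metrics": metrics, "stage": "staging"})
        return len(self._versions[name])  # versions start at 1

    def promote(self, name: str, version: int) -> None:
        """Move one version to production, archiving any previous production version."""
        for record in self._versions[name]:
            if record["stage"] == "production":
                record["stage"] = "archived"
        self._versions[name][version - 1]["stage"] = "production"

    def production_version(self, name: str):
        for i, record in enumerate(self._versions[name], start=1):
            if record["stage"] == "production":
                return i
        return None
```

The single-production-version invariant enforced in `promote` is the property that rollback and audit requirements usually hinge on.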
Must-Have Skills & Requirements
- 8+ years of experience in Software Engineering, Machine Learning Engineering, or Data Science.
- At least 3 years in a team management or leadership role.
- Proven experience managing and mentoring engineering teams.
- Strong experience with project management methodologies (Scrum/Kanban) and tools such as Jira.
- Hands-on experience with production-scale MLOps and GenAIOps implementations.
- Deep expertise in cloud platforms, preferably GCP.
- Strong understanding of modern data architecture including vector databases, data warehousing, and ETL/ELT pipelines.
- Solid software engineering fundamentals including API design, system architecture, Git, and CI/CD or DevSecOps pipelines.
- Experience with GenAI technologies such as LLM orchestration frameworks, prompt engineering, and RAG architectures.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Good-to-Have / Preferred Skills
- Experience managing distributed or multi-geographical engineering teams.
- Knowledge of regulatory requirements within the BFSI or insurance domain (data privacy, Responsible AI).
- Azure cloud or multi-cloud project experience.
- Experience with streaming data platforms and real-time AI processing.
- Cloud certifications such as GCP Professional Cloud Architect or ML Engineer.
Data Engineer
We are seeking a skilled Data Engineer to join the AI Platform Capabilities team supporting the UDP Uplift program.
In this role, you will design, build, and test standardized data and AI platform capabilities across a multi-cloud environment (Azure & GCP).
You will collaborate closely with AI use case teams to develop:
- Scalable data pipelines
- Reusable data products
- Foundational data infrastructure
Your work will support advanced AI solutions such as:
- GenAI
- RAG (Retrieval-Augmented Generation)
- Document Intelligence
Key Responsibilities
- Design and develop scalable ETL/ELT pipelines for AI workloads
- Build and optimize data pipelines for structured & unstructured data
- Enable context processing & vector store integrations
- Support streaming data workflows and batch processing
- Ensure adherence to enterprise data models, governance, and security standards
- Collaborate with DataOps, MLOps, Security, and business teams (LBUs)
- Contribute to data lifecycle management for AI platforms
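The vector store integration mentioned above boils down to embedding text and retrieving by similarity. The sketch below uses term-count vectors as a stand-in for learned embeddings, purely to show the data flow; a real pipeline would use a managed vector database and an embedding model.

```python
# Toy vector-store lookup: "embed" documents as term-count vectors and
# retrieve by cosine similarity. Illustrative only; real RAG pipelines
# use learned embeddings and a proper vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

Swapping `embed` for a real embedding model and the list scan for an indexed store changes the components but not this shape.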
Required Skills
- 5–7 years of hands-on experience in Data Engineering
- Strong expertise in Python and advanced SQL
- Experience with GCP and/or Azure cloud-native data services
- Hands-on experience with PySpark / Spark SQL
- Experience building data pipelines for ML/AI workloads
- Understanding of CI/CD, Git, and Agile methodologies
- Knowledge of data quality, governance, and security practices
- Strong collaboration and stakeholder management skills
Nice-to-Have Skills
- Experience with Vector Databases / Vector Stores (for RAG pipelines)
- Familiarity with MLOps / GenAIOps concepts (feature stores, model registries, prompt management)
- Exposure to Knowledge Graphs / Context Stores / Document Intelligence workflows
- Experience with DBT (Data Build Tool)
- Knowledge of Infrastructure-as-Code (Terraform)
- Experience in multi-cloud deployments (Azure + GCP)
- Familiarity with event-driven systems (Kafka, Pub/Sub) & API integrations
Ideal Candidate Profile
- Strong data engineering foundation with AI/ML exposure
- Experience working in multi-cloud environments
- Ability to build production-grade, scalable data systems
- Comfortable working in cross-functional, fast-paced environments
QA Engineer (Generative AI)
We are looking for a QA Engineer with hands-on experience in testing Generative AI systems, LLMs, and RAG pipelines. This role goes beyond traditional QA and focuses on evaluating non-deterministic AI outputs, testing agentic workflows, and ensuring AI safety, accuracy, and reliability in enterprise-grade AI services.
You will work on API-driven AI services such as Intelligent Document Processing and AI Gateways, ensuring they meet enterprise standards before deployment.
Key Responsibilities
- Test and validate Generative AI applications, LLMs, and RAG-based systems
- Evaluate AI outputs for accuracy, groundedness, coherence, and hallucination
- Design and execute test strategies for multi-step agentic workflows
- Perform API and integration testing for AI services
- Build automated test pipelines using Python
- Create synthetic datasets for testing AI systems
- Conduct adversarial testing (prompt injection, safety, guardrails)
- Integrate AI testing into CI/CD pipelines
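A groundedness check of the kind this role automates can be sketched with a toy token-overlap score. This is not how frameworks like RAGAS or TruLens compute their metrics (those typically use LLM judges); the function names and the 0.5 threshold are illustrative assumptions.

```python
# Toy groundedness metric: fraction of answer tokens supported by the
# retrieved context. A deliberately simple stand-in for LLM-judge-based
# evaluation, showing the kind of gate a CI test pipeline applies.

def groundedness(answer: str, context: str) -> float:
    """Share of answer tokens that also appear in the context (0.0 to 1.0)."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

def flag_hallucination(answer: str, context: str,
                       threshold: float = 0.5) -> bool:
    """Flag answers mostly unsupported by context (threshold is illustrative)."""
    return groundedness(answer, context) < threshold
```

In a CI pipeline, an assertion on this kind of score per test case is what turns LLM evaluation into a pass/fail gate.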
Must-Have Skills
- 5–7 years of experience in QA / Test Automation
- Hands-on experience testing Generative AI / LLM-based applications
- Strong programming skills in Python
- Experience with RAG pipelines
- Knowledge of LLM evaluation frameworks (RAGAS, TruLens, LangSmith or similar)
- Strong experience in API testing (Postman, REST Assured, etc.)
- Experience testing multi-agent workflows / agentic systems
- Understanding of hallucination, prompt injection, and AI safety concepts
Good-to-Have Skills
- Experience with GCP (Vertex AI) or Azure OpenAI
- SQL / NoSQL knowledge for data validation
- Experience in BFSI / Insurance domain
- Performance testing of APIs and AI systems
Additional Information
- Candidates without hands-on experience in testing Generative AI / LLM systems will not be considered
- Immediate to 45 days notice period preferred
Generative AI Engineer
We are looking for a hands-on Generative AI Engineer to design, build, and deploy enterprise-grade GenAI platform capabilities across multiple business units.
This role focuses on developing scalable, reusable AI components across the full stack—covering RAG systems, agent orchestration, LLM infrastructure, and GenAIOps—on GCP (primary) and Azure.
This is a core engineering role, not a research or client-facing position.
Key Responsibilities
- Design and build production-ready GenAI systems and platform components
- Develop and deploy RAG pipelines (data ingestion, embeddings, retrieval, APIs)
- Implement agent-based architectures (orchestration, routing, memory, workflows)
- Build and manage LLM infrastructure (model routing, caching, rate limiting, observability)
- Develop scalable APIs and services for AI capabilities
- Implement GenAIOps/MLOps practices (prompt management, evaluation, monitoring, deployment)
- Work with GCP services (Vertex AI, BigQuery, Cloud Run, GKE, Pub/Sub) to deploy solutions
- Ensure AI safety, governance, and compliance (PII protection, guardrails, auditability)
- Collaborate with cross-functional teams to deliver reusable, enterprise-grade solutions
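Two of the LLM-infrastructure concerns listed above, model routing and response caching, can be sketched together. The model names and the length-based routing rule are illustrative assumptions; a production gateway adds rate limiting, auth, and observability on top.

```python
# Minimal sketch of an LLM gateway handling model routing and response
# caching. Backends are injected as callables so the sketch stays
# self-contained; model names and the routing rule are illustrative.

class LLMGateway:
    def __init__(self, backends: dict):
        self.backends = backends   # model name -> callable(prompt) -> str
        self.cache = {}            # (model, prompt) -> cached response
        self.cache_hits = 0

    def route(self, prompt: str) -> str:
        """Send long prompts to a large-context model (illustrative rule)."""
        return "large-context" if len(prompt.split()) > 100 else "default"

    def complete(self, prompt: str) -> str:
        model = self.route(prompt)
        key = (model, prompt)
        if key in self.cache:          # exact-match response cache
            self.cache_hits += 1
            return self.cache[key]
        response = self.backends[model](prompt)
        self.cache[key] = response
        return response
```

Keying the cache on (model, prompt) rather than prompt alone matters: the same prompt routed to a different model must not return a stale response.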
Required Skills & Experience
- Strong hands-on experience in Generative AI and RAG systems (production level)
- Experience building multi-agent or agentic AI systems
- Proficiency in Python and backend/API development
- Hands-on experience with GCP AI/ML ecosystem (Vertex AI, BigQuery, etc.)
- Solid understanding of LLM infrastructure and orchestration layers
- Experience with CI/CD pipelines and Infrastructure as Code (Terraform)
- Knowledge of GenAIOps/MLOps practices and model lifecycle management
- Understanding of AI safety, governance, and compliance
Nice to Have
- Experience with LangChain, LlamaIndex, or similar frameworks
- Familiarity with RAG evaluation tools (RAGAS, DeepEval)
- Knowledge of Knowledge Graphs with RAG
- Experience in multi-cloud environments (GCP + Azure)
- Exposure to BFSI/regulated domains
What We’re Looking For
- Engineers who have built and deployed real-world GenAI systems at scale
- Strong backend and systems-thinking mindset
- Ability to work in fast-paced, enterprise environments
Role Type
- Individual Contributor (IC)
- Platform Engineering / Backend-heavy GenAI role
Mid-Level .NET Developer
We are looking for a Mid-Level .NET Developer to design, develop, and maintain scalable microservices for enterprise applications. The role involves working on high-performance, reliable systems deployed in containerized environments.
Key Responsibilities:
- Develop and maintain scalable .NET microservices
- Build robust Web APIs with proper validation, error handling, and security
- Write unit and integration tests to ensure code quality
- Design portable and environment-agnostic solutions
- Collaborate with cross-functional teams and client stakeholders
- Optimize performance and implement caching strategies
- Follow security best practices for enterprise applications
- Participate in code reviews and maintain coding standards
- Support deployment and troubleshoot issues in client environments
Must-Have Skills:
Core Technical Expertise:
- 4+ years of experience with .NET Core (3.1+) / .NET 5+ and C# (8+)
- Strong hands-on experience with ASP.NET Core Web API & Entity Framework Core
- Experience building REST APIs and middleware
- Strong understanding of SOLID principles, Dependency Injection, Repository pattern
- Experience with unit testing (xUnit / NUnit / MSTest), Moq, integration testing
Microservices & Deployment:
- Hands-on experience with Docker
- Understanding of microservices architecture & distributed systems
- Experience with configuration management (appsettings.json, IConfiguration)
- Knowledge of NuGet and dependency management
Good-to-Have Skills:
Advanced Technical:
- Experience with .NET 6/7/8, Minimal APIs, gRPC, SignalR
- Advanced EF Core, Dapper, database migrations
- Kubernetes and container orchestration
- Cloud platforms: Azure / GCP / Alibaba Cloud
- Message brokers: Azure Service Bus, RabbitMQ, Kafka
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- API Gateways: Azure API Management, Kong
Development & Operations:
- CI/CD tools: Azure DevOps, Jenkins, GitHub Actions
- Monitoring: Application Insights, Serilog, Prometheus
- Security: HTTPS, CORS, input validation, secure coding
- Background services: Hangfire, Quartz.NET
Client-Facing Experience:
- Experience in service-based organizations
- Ability to adapt to multiple domains
- Understanding of industry standards and compliance
Chatbot Developer (Global Markets)
Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.
You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.
Key Responsibilities
- Design, develop, test, debug, and maintain chatbot and virtual agent applications
- Collaborate with business stakeholders to define and translate requirements into technical solutions
- Analyze large volumes of conversational data to improve chatbot accuracy and performance
- Develop automation workflows for data handling and refinement
- Train and optimize chatbots using historical chat logs and user-generated content
- Ensure solutions align with enterprise architecture and best practices
- Document solutions, workflows, and technical designs clearly
Required Skills
- Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
- Experience with one or more AI/NLP platforms such as:
- Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
- Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
- Strong programming knowledge in Python, JavaScript, or Node.js
- Experience training chatbots using historical conversations or large-scale text datasets
- Practical knowledge of:
- Formal syntax and semantics
- Corpus analysis
- Dialogue management
- Strong written communication skills
- Strong problem-solving ability and willingness to learn emerging technologies
Nice-to-Have Skills
- Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
- Experience building voice apps for Amazon Alexa or Google Home
- Experience with Test-Driven Development (TDD) and Agile methodologies
- Ability to design and implement end-to-end pipelines for AI-based conversational applications
- Experience in text mining, hypothesis generation, and historical data analysis
- Strong knowledge of regular expressions for data cleaning and preprocessing
- Understanding of API integrations, SSO, and token-based authentication
- Experience writing unit test cases as per project standards
- Knowledge of HTTP, REST APIs, sockets, and web services
- Ability to perform keyword and topic extraction from chat logs
- Experience training and tuning topic modeling algorithms such as LDA and NMF
- Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
- Experience with NLP frameworks such as NLTK and spaCy
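The regex-based cleaning and keyword extraction steps mentioned above, as applied to chat logs, can be sketched briefly; the stopword list here is a tiny illustrative stand-in for a real one.

```python
# Sketch of regex cleaning and keyword extraction for chat-log
# preprocessing. The stopword list is an illustrative stand-in.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "and", "i", "my", "of"}

def clean(utterance: str) -> str:
    """Lowercase and strip punctuation / extra whitespace from a message."""
    text = re.sub(r"[^a-z0-9\s]", " ", utterance.lower())
    return re.sub(r"\s+", " ", text).strip()

def top_keywords(utterances: list[str], k: int = 3) -> list[str]:
    """Most frequent non-stopword tokens across a batch of utterances."""
    counts = Counter()
    for u in utterances:
        counts.update(t for t in clean(u).split() if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]
```

Output of this kind of pass typically feeds intent discovery and training-set construction for the bot.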

Associate Technical Architect (Azure Data)
We are hiring an Associate Technical Architect with strong expertise in Azure-based data platforms to design scalable data lakes, data warehouses, and enterprise data pipelines, while working with global teams.
Key Responsibilities
- Design and implement scalable data lake, data warehouse, and lakehouse architectures on Azure
- Build resilient data pipelines using Azure services
- Architect and optimize cloud-based data platforms
- Improve large-scale data processing and query performance
- Collaborate with engineering teams, QA, product managers, and stakeholders
- Communicate technical roadmap, risks, and mitigation strategies
Must-Have Skills:
- 6+ years of experience in Azure Data Engineering / Data Architecture
Azure Data Platform
- Experience with Azure Data Factory
- Hands-on with Azure Databricks and PySpark
- Experience with Azure Data Lake Storage
- Knowledge of Azure Synapse or Azure SQL for data warehousing
Programming & Data Skills
- Strong programming skills in Python and PySpark
- Advanced SQL with query optimization and performance tuning
- Experience building ETL / ELT data pipelines
Data Architecture Knowledge
- Understanding of MPP databases
- Knowledge of partitioning, indexing, and performance optimization
- Experience with data modeling (dimensional, normalized, lakehouse)
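The partitioning concept behind the MPP and performance points above can be illustrated with a small hash-partitioning helper; the key names are made up, and real platforms (Synapse, Databricks) handle the mechanics internally, leaving the architect to choose the partition key.

```python
# Illustrative hash-partitioning helper: stable assignment of keys to
# partitions, the idea behind distribution keys in MPP warehouses.
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Stable partition assignment. zlib.crc32 is deterministic across
    runs, unlike Python's built-in hash() for strings."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

def spread(keys: list[str], num_partitions: int) -> dict:
    """Show how a set of keys distributes across partitions."""
    buckets = {i: [] for i in range(num_partitions)}
    for k in keys:
        buckets[partition_for(k, num_partitions)].append(k)
    return buckets
```

A skewed `spread` result (most keys landing in one bucket) is exactly the hot-partition problem a good partition-key choice avoids.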
Cloud Fundamentals
- Azure security, networking, scalability, and disaster recovery
- Experience with on-premises-to-Azure migrations
Certification (Preferred)
- Azure Data Engineer or Azure Solutions Architect certification
Good-to-Have Skills
- Domain experience in FSI, Retail, or CPG
- Exposure to data governance tools
- Experience with BI tools such as Power BI or Tableau
- Familiarity with Terraform, CI/CD pipelines, or Azure DevOps
- Experience with NoSQL databases such as Cosmos DB or MongoDB
Soft Skills
- Strong problem-solving and analytical thinking
- Good communication and stakeholder management
- Ability to translate technical concepts into business outcomes
- Experience working with global or distributed teams
Cloud Platform Engineer (GCP & Azure)
Must-have skills:
● Experience: 6+ years of hands-on experience in Cloud Platform Engineering, DevOps, or Site Reliability Engineering (SRE).
● Multi-Cloud Infrastructure: Proficiency in architecting, deploying, and maintaining cloud infrastructure across GCP and Azure (VPC, IAM, Cloud Storage/Blob, Cloud Run/Functions, Pub/Sub, GKE/AKS, Cloud SQL).
● Container Orchestration: Extensive experience with Kubernetes (GKE or AKS) and Docker for managing and scaling containerized applications.
● Infrastructure as Code (IaC) & Automation: Strong proficiency using Terraform along with Python and Bash/Shell scripting for infrastructure automation.
● CI/CD Automation: Experience building and managing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or ArgoCD.
● Observability & Monitoring: Experience using tools such as Datadog, Prometheus, Grafana, or Splunk for monitoring, logging, and alerting.
● Secrets & Security Management: Experience managing sensitive credentials using HashiCorp Vault, GCP Secret Manager, or Azure Key Vault.
● Architecture & Networking: Understanding of microservices architecture, service-oriented architecture, event-driven systems (Pub/Sub), and cloud networking principles.
Good to have skills:
● AI/ML Infrastructure: Familiarity with infrastructure for ML workloads such as Vertex AI, Azure Machine Learning, GPU node pools, or Vector Databases.
● Advanced Kubernetes: Working knowledge of Kyverno for policy management, Karpenter for cluster autoscaling, or building Kubernetes operators using Go.
● Multi-Cloud Management: Familiarity with Crossplane for managing multi-cloud environments and building cloud-native platforms.
● Cloud Reliability & FinOps: Understanding of disaster recovery, fault tolerance, and cost allocation practices through resource tagging.
● Domain & Compliance: Experience working in regulated environments such as BFSI or Insurance.
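The cost-allocation-through-tagging practice mentioned above amounts to rolling up per-resource costs by a chosen tag key. The resource records and tag names below are illustrative assumptions.

```python
# Sketch of tag-based cost allocation (FinOps): sum monthly cost per
# value of a chosen tag key. Resource and tag names are illustrative.
from collections import defaultdict

def allocate_costs(resources: list[dict], tag_key: str = "team") -> dict:
    """Sum monthly cost per value of tag_key; untagged resources are
    grouped under 'untagged' so gaps in tagging stay visible."""
    totals = defaultdict(float)
    for r in resources:
        owner = r.get("tags", {}).get(tag_key, "untagged")
        totals[owner] += r["monthly_cost"]
    return dict(totals)
```

Surfacing the 'untagged' bucket explicitly, rather than dropping it, is what makes tagging gaps actionable in a cost report.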
Similar companies
Wishup is India’s largest remote work platform (since 2017), connecting global businesses with top remote professionals in roles such as Virtual Assistants, Operations/Admin Managers, Executive Assistants, Project Managers, Bookkeepers, and Accountants. With a stringent 0.1% acceptance rate, each professional is upskilled and managed via our AI-based remote work tool.
Backed by marquee investors (Orios Ventures, Inflection Point Ventures, 500 Startups, and Tracxn Labs), Wishup’s leadership team includes alumni from premier institutes like IIT Madras, IIM Ahmedabad, IIT Kanpur, and DCE.
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in the US, India, the UK, Australia, Mexico, and Canada, we offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and availability of world-class infrastructure, we offer a combination of on-site, off-site and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support and the ability to quickly react to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership that we command in the industry is the direct result of the kind of people we have been able to attract, to form this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like MIT, Wharton, IITs, IIMs, and BITS and with rich work experience in some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
Devcare Solution offers a best-in-class working environment, professional management, and opportunities to learn, bundled with exceptional rewards, and is ready to take on board all those who deserve a dream career. The team is full of good spirits, complementing each other's brilliance in the workplace, and the interesting projects ensure an enriching, positive experience.
Speer is an AI-native platform that turns scattered life sciences field data and expert interactions into structured, queryable intelligence.
It automates capturing insights, managing expert relationships, and generating instant answers and executive-ready reports so teams can make faster, evidence-based decisions.
VYXN.AI is a founder-led, pre-launch AI product startup building a multilingual AI content generation platform for text, image, video, templates, workflow automation, and Telegram-based AI experiences.
The product vision includes a web platform, Telegram bot, Telegram Mini App, dynamic credits/coins pricing, AI provider integrations, Templates, and a professional node-based Workflow / Space module.
We are currently building the core technical team and looking for practical engineers who can move fast, think clearly, work by milestones, and help turn the product into a stable commercial platform.