

Quantiphi
https://quantiphi.com
About
Quantiphi is an award-winning AI-first digital engineering company driven by the desire to reimagine and realize transformational opportunities at the heart of the business. Since its inception in 2013, Quantiphi has solved the toughest and most complex business problems by combining deep industry experience, disciplined cloud and data-engineering practices, and cutting-edge artificial intelligence research to achieve accelerated and quantifiable business results.
Offices: Bengaluru, Mumbai, and Trivandrum
Jobs at Quantiphi

Must have skills:
● Experience: 6+ years of hands-on experience in Cloud Platform Engineering, DevOps, or Site Reliability Engineering (SRE).
● Multi-Cloud Infrastructure: Proficiency in architecting, deploying, and maintaining cloud infrastructure across GCP and Azure (VPC, IAM, Cloud Storage/Blob, Cloud Run/Functions, Pub/Sub, GKE/AKS, Cloud SQL).
● Container Orchestration: Extensive experience with Kubernetes (GKE or AKS) and Docker for managing and scaling containerized applications.
● Infrastructure as Code (IaC) & Automation: Strong proficiency using Terraform along with Python and Bash/Shell scripting for infrastructure automation.
● CI/CD Automation: Experience building and managing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or ArgoCD.
● Observability & Monitoring: Experience using tools such as Datadog, Prometheus, Grafana, or Splunk for monitoring, logging, and alerting.
● Secrets & Security Management: Experience managing sensitive credentials using HashiCorp Vault, GCP Secret Manager, or Azure Key Vault.
● Architecture & Networking: Understanding of microservices architecture, service-oriented architecture, event-driven systems (Pub/Sub), and cloud networking principles.
Good to have skills:
● AI/ML Infrastructure: Familiarity with infrastructure for ML workloads such as Vertex AI, Azure Machine Learning, GPU node pools, or Vector Databases.
● Advanced Kubernetes: Working knowledge of Kyverno for policy management, Karpenter for cluster autoscaling, or building Kubernetes operators using Go.
● Multi-Cloud Management: Familiarity with Crossplane for managing multi-cloud environments and building cloud-native platforms.
● Cloud Reliability & FinOps: Understanding of disaster recovery, fault tolerance, and cost allocation practices through resource tagging.
● Domain & Compliance: Experience working in regulated environments such as BFSI or Insurance.
Join our core Platform Implementation Team to build intuitive, high-performance user interfaces for a cutting-edge enterprise Data & AI platform. You will develop scalable frontend applications including AI agent marketplaces, operational dashboards, and real-time chat interfaces that integrate seamlessly with backend APIs to support global business units and insurance advisors.
Key Responsibilities
• Design and develop responsive user interfaces for AI-driven applications such as web/app chat interfaces, AI copilots, and personalized content delivery systems.
• Build complex, data-rich dashboards supporting MLOps, GenAIOps, and AgentOps workflows to monitor model performance, manage approval gates, and track infrastructure costs.
• Develop a centralized Agent Marketplace portal for discovering, publishing, and managing reusable AI agents with strict versioning and access controls.
• Integrate frontend applications with APIs to consume reusable AI business services (e.g., document intelligence) and enterprise data products.
• Handle real-time data streams and asynchronous interactions to ensure smooth, low-latency user experiences for chat SDKs and streaming APIs.
• Implement robust state management solutions to support complex user workflows, session states, and multi-step AI agent interactions.
• Optimize frontend performance for high-traffic enterprise applications ensuring fast load times, smooth rendering, and cross-browser/device compatibility.
• Ensure all frontend implementations follow enterprise security standards, data privacy requirements, and WCAG accessibility guidelines relevant to the BFSI sector.
• Collaborate closely with UI/UX designers, backend engineers, integration engineers, and ML architects to bridge design and technical implementation.
• Write and maintain comprehensive unit and integration tests to ensure UI reliability and prevent regressions during continuous deployment cycles.
Must-Have Skills
• Experience with modern frontend frameworks such as React, Next.js, Angular, or Vue.js for building complex SPAs and SSR applications.
• Strong proficiency in JavaScript (ES6+) and TypeScript for writing scalable, maintainable, and type-safe code.
• Familiarity with modern styling approaches such as Tailwind CSS, SASS/LESS, Styled Components, Material UI, Ant Design, or Bootstrap.
• Experience building interfaces for AI/ML platforms, such as chatbot UIs, prompt engineering tools, or complex data visualization dashboards.
• Experience integrating frontend applications with REST APIs, GraphQL, WebSockets, and handling asynchronous data fetching using tools like React Query, SWR, or Axios.
• Familiarity with cloud-native deployment environments such as GCP Cloud Run, Firebase, or Azure Static Web Apps and build tools like Vite or Webpack.
• Hands-on experience with frontend testing frameworks including Jest, React Testing Library, Cypress, or Playwright.
• Experience integrating enterprise Identity and Access Management solutions such as OAuth 2.0, OIDC, or MSAL.
• Understanding of CI/CD pipelines for frontend applications and automated deployment workflows.
Good to Have
• Experience in the Life Insurance or broader BFSI domain and familiarity with user personas such as insurance agents, underwriters, or policyholders.
Role & Responsibilities
- Design, develop, and deliver automation solutions to enhance platform functionality and reliability.
- Deploy, manage, and maintain Azure cloud infrastructure ensuring high availability, scalability, and security.
- Champion and implement Infrastructure as Code (IaC) practices using Terraform.
- Build and maintain containerized environments using Kubernetes (AKS).
- Develop self-service, self-healing, monitoring, and alerting systems for cloud platforms.
- Automate development, testing, and deployment workflows using CI/CD pipelines.
- Integrate DevOps tools such as Git, Jenkins/Azure DevOps, SonarQube, Artifactory, and Docker to streamline delivery pipelines.
- Ensure platform observability through monitoring, logging, and alerting frameworks.
Requirements
- 6+ years of experience in Cloud / DevOps / Platform Engineering roles.
- Strong hands-on experience with Microsoft Azure cloud infrastructure.
- Experience working with Azure services such as compute, networking, storage, identity, and messaging.
- Expertise in container orchestration using Kubernetes (AKS).
- Strong experience implementing Infrastructure as Code using Terraform.
- Familiarity with cloud-native and microservices architecture patterns.
- Experience with relational and NoSQL databases such as PostgreSQL or Cassandra.
Additional Skills
- Strong Linux administration and troubleshooting skills.
- Programming or scripting experience in Bash, Python, Java, or similar languages.
- Hands-on experience with CI/CD tools such as Jenkins, Git, Maven, or Azure DevOps.
- Experience managing multi-region or high-availability cloud environments.
- Familiarity with Agile / Scrum / DevOps practices and collaboration tools.

This role is responsible for architecting and implementing the Agentic capabilities of the PHI ecosystem. The engineer will lead the development of multi-agent systems, enabling seamless interoperability between AI agents, internal tools, and external services.
The position requires a strong focus on AI safety, secure agent orchestration, and tool-connected AI systems capable of executing complex workflows within the health insurance domain.
1. Agent Orchestration
- Build and manage autonomous AI agents using Agent Development Kit (ADK) and Vertex AI Agent Engine.
- Design and implement multi-agent workflows capable of handling complex tasks.
2. Interoperability
- Implement the Model Context Protocol (MCP) to enable connectivity between:
- AI agents
- Internal PHI tools
- External services and APIs.
3. Multimodal Development
- Build real-time, bidirectional audio applications using the Gemini Live API.
- Integrate image generation models and support multimodal AI capabilities.
4. Safety Engineering
- Implement AI safety layers to protect sensitive healthcare data.
- Use Model Armor and Cloud DLP API to:
- Sanitize prompts
- Prevent exposure of PII/PHI data
- Enforce secure AI interactions.
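As a rough illustration of the prompt-sanitization step described above, the sketch below masks PII with simple regex patterns. This is a stand-in only: the patterns, labels, and function name are hypothetical, and a production safety layer would call Model Armor or the Cloud DLP API rather than hand-rolled regexes.

```python
import re

# Hypothetical, regex-based stand-in for a DLP-style sanitizer.
# A real implementation would delegate detection to Model Armor / Cloud DLP.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace detected PII spans with typed placeholders before the
    prompt is forwarded to the model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```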
5. Agent-to-Agent (A2A) Communication
- Configure remote agent connectivity using the A2A SDK.
- Enable cross-agent collaboration and workflow orchestration.
Must-Have Skills
- Advanced proficiency with Agent Development Kit (ADK).
- Strong experience with Vertex AI Agent Engine.
- Hands-on experience with Model Context Protocol (MCP).
- Experience implementing Agent-to-Agent (A2A) workflows using the A2A SDK.
- Expertise in Google Gen AI SDK for Python.
- Experience building multimodal AI applications.
- Proven experience implementing AI safety layers, including:
- Model Armor
- Cloud DLP API
Good-to-Have Skills (Foundation)
Data & Analytics
- BigQuery optimization techniques, including:
- Partitioning
- Clustering
- Denormalization for performance and cost optimization.
Streaming & Real-Time Pipelines
- Experience building real-time data pipelines using:
- Google Pub/Sub
- BigQuery streaming pipelines
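To make the partitioning and clustering techniques above concrete, here is a small sketch that composes the DDL for a partitioned, clustered BigQuery table. The table and column names are invented for illustration; real schemas would come from the project's data model.

```python
# Hypothetical table/columns; shows the DDL shape for a BigQuery table
# partitioned on a timestamp and clustered on a lookup key.
def partitioned_table_ddl(table: str, partition_col: str, cluster_cols: list) -> str:
    return (
        f"CREATE TABLE {table} (\n"
        f"  event_ts TIMESTAMP,\n"
        f"  member_id STRING,\n"
        f"  payload JSON\n"
        f")\n"
        f"PARTITION BY DATE({partition_col})\n"
        f"CLUSTER BY {', '.join(cluster_cols)}"
    )
```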
We are seeking a Senior Machine Learning Engineer to support the development and deployment of advanced AI capabilities within the PHI ecosystem.
This role focuses on the execution of Generative AI tasks, including model integration and agent deployment. The candidate will be responsible for building RAG-based workflows and ensuring AI interactions remain grounded and accurate using Google Cloud AI tools.
Key Responsibilities
1. GenAI Integration
- Develop and maintain integrations with Gemini 1.5 Pro and Flash models
- Use the Google Gen AI SDK for Python to build and manage model integrations
2. Agent Deployment
- Assist in deploying AI agents to Vertex AI Agent Engine
- Work with the Agent Development Kit (ADK) for agent lifecycle management
3. RAG & Embeddings
- Generate and manage text and multimodal embeddings
- Support semantic search and Retrieval-Augmented Generation (RAG) pipelines
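The semantic-search side of a RAG pipeline reduces to ranking documents by embedding similarity. The toy sketch below uses cosine similarity over pre-computed vectors; in practice the embeddings would come from a Vertex AI embedding model and a vector store, not in-memory lists.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector
    and return the top-k document ids."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```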
4. Testing & Quality
- Run evaluation scripts to verify model output quality
- Ensure models follow grounding and response accuracy guidelines
Must-Have Skills
- Strong Python programming
- Experience working with REST APIs
- Hands-on experience with Vertex AI Studio
- Experience working with Gemini APIs
- Understanding of Agentic AI concepts
- Familiarity with ADK CLI
- Experience or understanding of RAG architecture
- Knowledge of embedding generation
Good-to-Have Skills (Foundation):
BigQuery
- Basic SQL knowledge
- Experience with data loading
- Ability to debug and troubleshoot queries
Data Streaming
- Familiarity with Google Pub/Sub
- Understanding of synthetic data generation
Visualization
- Basic reporting and dashboards using Looker Studio
As a Backend Engineer, you will be a core member of the Platform Implementation Team, responsible for building the robust, scalable, and secure backend infrastructure for a multi-cloud enterprise Data & AI platform.
You will design and develop high-performance microservices, RESTful APIs, and event-driven architectures that serve as the backbone for enterprise-wide applications.
Working closely with Platform Engineers, Data Modelers, and UI teams, you will ensure seamless data flow between core business systems (CRM, ERP) and the platform, enabling the rollout of critical business services across multiple global Local Business Units (LBUs).
Backend Development
- Design and develop scalable backend services and microservices
- Build and maintain RESTful APIs for enterprise applications
- Define and maintain API contracts using OpenAPI/Swagger
Platform & System Integration
- Enable seamless integration between enterprise systems (CRM, ERP) and the platform
- Support data flow across multiple global business units
Event-Driven Architecture
- Implement asynchronous processing and event-driven systems
- Work with message brokers and streaming platforms
Cross-Functional Collaboration
- Collaborate with platform engineers, data modelers, and frontend teams
- Contribute to architecture discussions and backend design decisions
Must-Have Skills
Experience
- 5–7 years of hands-on experience in backend software engineering
- Experience building enterprise-grade backend systems
Core Programming
Strong proficiency in at least one backend language:
- Python
- Node.js
- Java
Strong understanding of:
- Object-oriented programming (OOP)
- Functional programming principles
API & Microservices
- Extensive experience building RESTful APIs
- Experience designing microservices architectures
- Ability to define API contracts using OpenAPI / Swagger
Cloud Infrastructure
Hands-on experience with cloud platforms:
- Google Cloud Platform (GCP)
- Microsoft Azure
Examples of services:
- Cloud Functions
- Cloud Run
- Azure App Services
Database Management
Experience with both Relational and NoSQL databases
Relational:
- PostgreSQL
- Cloud SQL
NoSQL:
- Schema design
- Complex querying
- Performance optimization
Event-Driven Architecture
Experience with asynchronous processing and message brokers:
- GCP Pub/Sub
- Apache Kafka
- RabbitMQ
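The publish/subscribe pattern behind all three brokers can be sketched with a toy in-memory broker, purely to show the shape of topic-based fan-out; any real deployment would use Pub/Sub, Kafka, or RabbitMQ with durable delivery.

```python
from collections import defaultdict

class InMemoryBroker:
    """Toy topic-based broker illustrating publish/subscribe fan-out;
    not a substitute for a durable message broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callback to be invoked for every message on the topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message synchronously to every subscriber.
        for handler in self._subscribers[topic]:
            handler(message)
```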
Security & Authentication
Strong understanding of:
- OAuth 2.0
- JWT authentication
- Role-Based Access Control (RBAC)
- Data encryption
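To illustrate what JWT authentication involves under the hood, here is a minimal HS256 signer/verifier built on the standard library. It is a teaching sketch: production code would use a vetted library (e.g. PyJWT) and also validate claims such as expiry and audience.

```python
import base64, hashlib, hmac, json

def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce a header.payload.signature token signed with HMAC-SHA256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = _b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```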
Software Engineering Best Practices
- Writing clean, maintainable code
- Version control using Git
- Writing unit and integration tests
- Familiarity with CI/CD pipelines
- Containerization using Docker
Good-to-Have Skills
AI & LLM Integration
- Experience integrating Generative AI models
- Exposure to:
- OpenAI
- Vertex AI
- LLM gateways
- Retrieval-Augmented Generation (RAG)
Frontend Exposure
Basic familiarity with frontend frameworks such as:
- React
- Next.js
- Angular
Understanding how backend APIs integrate with UI applications
Advanced Data Stores
Experience with:
- Vector databases (Pinecone, Milvus)
- Knowledge graphs
Domain Knowledge
- Experience in Life Insurance or BFSI sector
- Understanding of enterprise data governance and compliance standards
We are looking for a highly experienced and technically strong AI Engineering Manager to lead and mentor a team of Machine Learning and AI Engineers. This role will drive the execution, delivery, and operational excellence of enterprise AI/ML products and platforms within the Life Insurance and Financial Services sector.
The role operates primarily in a GCP cloud environment and requires bridging architectural design with hands-on engineering execution. The candidate will manage engineering workloads, guide technical decisions, ensure high code quality, and drive project timelines for enterprise-grade AI solutions.
Key Responsibilities
Team Leadership & Project Management
- Own end-to-end delivery of AI/ML projects including sprint planning, backlog grooming, workload management, and resource allocation.
- Provide technical guidance and mentorship to AI/ML engineers, including code reviews and best practices for MLOps, software engineering, and cloud infrastructure.
- Act as the primary technical point of contact for product managers, architects, and business stakeholders.
- Define and enforce engineering standards for development, testing, CI/CD, and monitoring of AI services.
- Recruit, onboard, mentor, and conduct performance reviews for the AI engineering team.
- Collaborate with Data Engineering, DevOps, and other teams to integrate AI models into enterprise systems and data pipelines.
- Identify and manage technical risks, dependencies, and delivery blockers to maintain project velocity.
Technical Delivery
- Implement and operationalize MLOps pipelines for model training, versioning, deployment, monitoring, and explainability.
- Guide teams in leveraging AI platform capabilities such as RAG pipelines, LLM gateways, and vector databases to build business use cases.
- Ensure security, scalability, and performance of production AI services and underlying cloud infrastructure.
Must-Have Skills & Requirements
- 8+ years of experience in Software Engineering, Machine Learning Engineering, or Data Science.
- 3+ years in a team management or leadership role.
- Proven experience managing and mentoring engineering teams.
- Strong experience with project management methodologies (Scrum/Kanban) and tools such as Jira.
- Hands-on experience with production-scale MLOps and GenAIOps implementations.
- Deep expertise in cloud platforms, preferably GCP.
- Strong understanding of modern data architecture including vector databases, data warehousing, and ETL/ELT pipelines.
- Solid software engineering fundamentals including API design, system architecture, Git, and CI/CD or DevSecOps pipelines.
- Experience with GenAI technologies such as LLM orchestration frameworks, prompt engineering, and RAG architectures.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Good-to-Have / Preferred Skills
- Experience managing distributed or multi-geographical engineering teams.
- Knowledge of regulatory requirements within the BFSI or insurance domain (data privacy, Responsible AI).
- Azure cloud or multi-cloud project experience.
- Experience with streaming data platforms and real-time AI processing.
- Cloud certifications such as GCP Professional Cloud Architect or ML Engineer.
We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.
The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.
Key Responsibilities:
Backend Development
- Design and maintain high-performance backend services using Python and FastAPI
- Implement advanced FastAPI features such as dependency injection, middleware, and async programming
- Write comprehensive unit tests using pytest
- Design and maintain Pydantic schemas
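The Pydantic-schema responsibility above amounts to declaring typed request models whose validation runs at construction time. The sketch below mimics that pattern with a plain dataclass so it stays dependency-free; the field names are hypothetical, and a real service would use an actual Pydantic `BaseModel`.

```python
from dataclasses import dataclass

@dataclass
class QuoteRequest:
    """Stand-in for a Pydantic schema: typed fields plus validation
    that runs when the object is constructed (hypothetical fields)."""
    policy_id: str
    amount: float

    def __post_init__(self):
        if not self.policy_id:
            raise ValueError("policy_id must be non-empty")
        if self.amount <= 0:
            raise ValueError("amount must be positive")
```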
High-Concurrency Systems
- Implement asynchronous code for high-volume request processing
- Apply concurrency patterns and atomic operations to ensure efficient system performance
Data & Storage
- Optimize MongoDB operations
- Implement Redis caching strategies (TTL, performance tuning, caching patterns)
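The TTL caching pattern named above (Redis `SET key value EX ttl`) can be sketched with a small in-process cache. The clock is injectable so expiry is easy to reason about; a real implementation would of course use Redis itself.

```python
import time

class TTLCache:
    """Minimal in-process cache illustrating the TTL pattern:
    entries become unreadable once ttl_seconds have elapsed."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        # Store the value together with its absolute expiry time.
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None or now > entry[1]:
            self._store.pop(key, None)  # lazily evict expired entries
            return None
        return entry[0]
```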
Distributed Systems
- Implement rate limiting, retry logic, failover mechanisms, and region routing
- Build microservices and event-driven architectures
- Work with EventHub, Blob Storage, and Databricks
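Of the resilience mechanisms listed above, retry with exponential backoff is the simplest to show in isolation. The sketch below doubles the delay on each failed attempt and re-raises once attempts are exhausted; the sleep function is injectable for testing.

```python
import time

def retry(fn, attempts=3, base_delay=0.01, sleep=time.sleep):
    """Call fn, retrying on any exception with exponential backoff
    (base_delay, 2*base_delay, 4*base_delay, ...); re-raise the
    final error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```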
AI/ML Integration
- Integrate OpenAI API, Gemini API, and Claude API
- Manage LLM integrations using LiteLLM
- Optimize AI service usage within the Azure ecosystem
Security
- Implement JWT authentication
- Manage API keys and encryption protocols
- Implement PII masking and data security mechanisms
Collaboration
- Work with cross-functional teams on architecture and system design
- Contribute to engineering best practices and technical improvements
- Mentor junior developers where required
Must-Have Skills & Requirements
Experience
- 7+ years of hands-on Python backend development
- Bachelor’s degree in Computer Science, Engineering, or related field
- Experience building high-traffic, scalable systems
Core Technical Skills
Python
- Advanced knowledge of asynchronous programming, concurrency, and atomic operations
FastAPI
- Expert-level experience with dependency injection, middleware, and async code
Testing
- Strong experience with pytest and Pydantic schemas
Databases
- Hands-on experience with MongoDB and Redis
- Strong understanding of caching patterns, TTL, and performance optimization
Distributed Systems
- Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing
Microservices
- Experience building microservices and event-driven systems
- Exposure to EventHub, Blob Storage, and Databricks
Cloud
- Strong experience working in Azure environments
AI Integration
- Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM
Security
- Implementation experience with JWT authentication, API keys, encryption, and PII masking
Soft Skills
- Strong problem-solving and debugging skills
- Excellent communication and collaboration
- Ability to manage multiple priorities
- Detail-oriented approach to code quality
- Experience mentoring junior developers
Good-to-Have Skills
Containerization
- Docker, Kubernetes (preferably within Azure)
DevOps
- CI/CD pipelines and automated deployment
Monitoring & Observability
- Experience with Grafana, distributed tracing, custom metrics
Industry Experience
- Experience in Insurance, Financial Services, or regulated industries
Advanced AI/ML
- Vector databases
- Similarity search optimization
- LangChain / LangSmith
Data Processing
- Real-time data processing and event streaming
Database Expertise
- PostgreSQL with vector extensions
- Advanced Redis clustering
Multi-Cloud
- Experience with AWS or GCP alongside Azure
Performance Optimization
- Advanced caching strategies
- Backend performance tuning
Role & Responsibilities
- Develop and deliver automation software to build and improve platform functionality
- Ensure reliability, availability, and manageability of applications and cloud platforms
- Champion adoption of Infrastructure as Code (IaC) practices
- Design and build self-service, self-healing, monitoring, and alerting platforms
- Automate development and testing workflows through CI/CD pipelines (Git, Jenkins, SonarQube, Artifactory, Docker containers)
- Build and manage container hosting platforms using Kubernetes
Requirements
- Strong experience deploying and maintaining GCP cloud infrastructure
- Well-versed in service-oriented and cloud-based architecture design patterns
- Knowledge of cloud services including compute, storage, networking, messaging, and automation tools (e.g., CloudFormation/Terraform equivalents)
- Experience with relational and NoSQL databases (Postgres, Cassandra)
- Hands-on experience with automation/configuration tools (Puppet, Chef, Ansible, Terraform)
Additional Skills
- Strong Linux system administration and troubleshooting skills
- Programming/scripting exposure (Bash, Python, Core Java, or Scala)
- CI/CD pipeline experience (Jenkins, Git, Maven, etc.)
- Experience integrating solutions in multi-region environments
- Familiarity with Agile/Scrum/DevOps methodologies
Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.
Key Responsibilities:
Generative AI & Agentic Systems
- Design and build GenAI applications (RAG, agentic workflows, multi-agent systems)
- Develop intelligent systems with memory, planning, and reasoning capabilities
- Implement prompt engineering, context optimization, and evaluation frameworks
- Build observable and reliable multi-agent architectures
Traditional ML & Computer Vision
- Develop ML pipelines (forecasting, recommendation, classification, regression)
- Build production-grade computer vision solutions (document AI, image analysis)
- Perform feature engineering, model optimization, and benchmarking
MLOps & Production Engineering
- Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
- Build scalable APIs, microservices, and data pipelines
- Monitor models, detect drift, and implement A/B testing frameworks
Knowledge Solutions
- Architect knowledge graphs and semantic search systems
- Implement hybrid retrieval (vector + keyword search)
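Hybrid retrieval, as described above, blends a dense (vector) score with a sparse (keyword) score. The sketch below assumes the vector similarities are already computed and combines them with simple query-term overlap via a tunable weight; the weighting scheme and document shapes are illustrative, not a specific product's API.

```python
def keyword_score(query_terms, doc_terms):
    """Fraction of distinct query terms present in the document."""
    return len(set(query_terms) & set(doc_terms)) / len(set(query_terms))

def hybrid_rank(query_terms, docs, alpha=0.5):
    """Rank docs by alpha * vector_score + (1 - alpha) * keyword_score.
    Each doc is a (doc_id, terms, precomputed_vector_score) tuple."""
    scored = [
        (doc_id, alpha * vec_score + (1 - alpha) * keyword_score(query_terms, terms))
        for doc_id, terms, vec_score in docs
    ]
    return [doc_id for doc_id, _ in sorted(scored, key=lambda t: t[1], reverse=True)]
```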
Client Collaboration
- Present technical solutions to enterprise clients
- Collaborate with architects, data engineers, and business teams
Required Skills & Experience
- 3–6 years of hands-on ML Engineering experience
- Strong Python and software engineering fundamentals
- Experience shipping production ML systems on cloud (GCP preferred)
- Experience across GenAI, Traditional ML, Computer Vision
- MLOps experience and RAG-based systems
Preferred
- GCP Professional ML Engineer certification
- Knowledge graphs / semantic search experience
- Experience in regulated industries (Healthcare / BFSI)
- Open-source or technical publications
Similar companies
About the company
What is ‘Searce’?
Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.
The primary purpose of all of us at Searce is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, every day.
What we do
Searce is a modern tech consulting firm that empowers clients to futurify their businesses, leveraging Cloud, AI & Analytics.
- We are a category-defining niche cloud-native technology consulting company, specializing in modernizing (improving, automating & transforming) the full scope of infra, apps, processes & work
- We partner with clients in their ‘beyond x’ journey to drive intelligent, impactful & futuristic business outcomes
- We are the preferred tech partner when it comes to ‘solving for better’ for new-age tech startups & digital enterprises leading disruption in their industries
- Our Service Offerings: We offer Advanced Cloud, Data & App Modernization, Cloud Consulting, Management & Improvement (DevOps, SysOps & Cloud Managed Services), Applied AI & Analytics services
- As one of the top 5 niche full-scope global partners for Google Cloud & a preferred partner for AWS, we are the ‘engineering-led’ tech company of choice for solving complex business problems.
Who we are
We are passionate improvers, solvers & futurists. Driven by our engineering excellence mindset, we care most about delivering intelligent, impactful & futuristic business outcomes. Searcians are motivated by continuous improvement & solving for better in everything we do.
At the core, a Searcian is self-driven to become better, every day. In passionate pursuit of the finest degree of excellence, we drive exceptional outcomes in everything we do.
We believe that trust is the most important value. We also believe that we need to ‘earn the trust’ every time someone engages with us. And earning trust for us is far more important than anything else. We aim to be the *most trusted* tech consulting partner for our clients.
We are HAPPIER at heart. Humble, Adaptable, Positive, Passionate, Innovative, Excellence focused, & Responsible. We live the HAPPIER Culture Code.
Being HAPPIER.
How we work
- Customers. Partners. Our aim is to build relationships with customers for life. And meaningfully improve the life of every customer.
- We do what we say. We say what we do. We are uncomfortably honest and transparent. Being genuine wins trust & makes people happier.
- Mistakes are encouraged. We make mistakes. Tons of those. Everyday. And we don’t mind apologizing to our juniors, peers or superiors. We are no ego-doers.
- Underpromise. Overdeliver. We work with a deep desire to go above and beyond in everything we do. Every time.
So, if you are passionate about tech, the future & what you read above (we really are!), apply here to experience the ‘Art of Possible’.
Jobs: 12
About the company
Founded in 2004, Inteliment helps some of the most forward-thinking enterprises worldwide derive maximum business impact from their data. With 20+ years in analytics, we help businesses harness deep expertise and the latest technologies to drive innovation, sharpen their competitive edge, and stay future-ready.
Inteliment is recognized as a leading provider of Data Driven Analytical Solutions & Services in Visual & Predictive Analytics, Data Science, IoT, Mobility & Artificial Intelligence areas. We strive for the success of our customers through Innovation, Technology and Partnerships.
Inteliment operates its Delivery & IP Centers in India and Australia and through its group companies in Singapore, Finland and USA.
Jobs: 1
About the company
Who are we?
Trendlyne is a funded, profitable products startup in the financial markets space. We have cutting-edge analytics products built for Indian and US customers, for stock markets and mutual funds.
Our founders are IIT + IIM graduates, with strong tech and marketing experience. We have top finance and management experts on the Board of Directors.
What do we do?
We build best-in-class analytics for the US and Indian stock markets. Organic growth in B2B and B2C products has already made the company profitable. We serve 1 billion+ API calls every month to B2B customers, and have a B2C website and app.
Visit us at trendlyne.com, or look for the Trendlyne mobile app on the Google Play Store:
https://play.google.com/store/apps/details?id=com.trendlyne.markets
We are a great place to work
We have a culture where you are building something awesome, and your work makes a difference. Full-time employees get paid leave, parental leave, medical insurance, and employee stock options.
We invest in your learning and check in with you to help you meet your career goals. We keep regular hours and don't work on weekends.
Jobs: 4
About the company
Baker Street Fintech (product name: Cambridge Wealth) is a financial products company. We build world-class fintech products for clients who want to manage their wealth on our platform. Founded by professionals with experience spanning PwC UK to banking and technology firms, we are a financially stable, profitable company growing quickly!
Jobs: 3
About the company
ToppersNotes is a venture-backed edtech startup, and we are looking for exciting people to come on board and build an automated learning platform.
Our team includes IIT graduates and IITB faculty, with vast experience in AI/ML and data analytics.
We are expanding our technical team to realize this solution.
Jobs: 14
About the company
Ctruh is building the world’s first AI-powered Unified XR Commerce Studio, enabling brands to create immersive digital commerce experiences directly in the browser. The platform combines artificial intelligence with real-time 3D technology to transform how products are discovered, experienced, and purchased online.
Ctruh’s technology allows companies to instantly create virtual stores, AR try-ons, interactive 3D product pages, and mixed-reality marketing experiences without writing code or requiring specialized hardware. These experiences launch directly through a simple URL and work across web, mobile, and XR devices.
👥 About the Team
Ctruh is a deep-tech startup headquartered in Bengaluru and founded by Vinay Agastya. The team combines expertise across AI, XR, graphics engineering, and enterprise SaaS to build scalable infrastructure for immersive commerce.
Their mission is to democratize XR technology, making immersive digital experiences as easy to create as building a website.
🏆 Milestones
- Founded in 2022
- Built a no-code/low-code web-based 3D engine for XR experiences
- Platform powered by VersaAI, enabling instant text-to-3D asset creation
- Team of 40+ employees globally
- Achieved approximately $5M revenue in 2025
- Growing community of 39K+ LinkedIn followers
Jobs: 7
About the company
One2N is a boutique technology consulting firm that helps fast-growing companies build and scale high-performance backend systems. As startups evolve from MVP to tens of thousands of users, the engineering challenges change dramatically — and we specialise in solving exactly those problems. Our work ensures that scaling, reliability, and performance never become bottlenecks to growth.
With deep expertise in Site Reliability Engineering, Cloud Infrastructure & DevOps, Data Engineering, and Backend Architecture, we partner with engineering teams to design, build, and operate resilient cloud-native systems. We believe in pragmatic, impact-driven engineering — creating solutions that are robust today and adaptable for tomorrow, so teams can ship faster, stay reliable, and scale confidently.
Jobs: 1
About the company
Vijesha IT Services LLP is a fast-growing technology and education company headquartered in Bengaluru, Karnataka, with operations and presence in India and the UK. Founded in 2023, we specialize in delivering custom software solutions, web and mobile app development, UI/UX design, digital marketing, cloud-powered IT services, and e-commerce platforms.
Beyond technology services, Vijesha runs a robust EdTech platform offering industry-aligned IT and non-IT courses, interactive live classes, skill-building programs, corporate training, and real-world internships, bridging the gap between learning and employability.
At Vijesha, innovation, collaboration, and learner-centric solutions are at the core of our mission—helping businesses transform digitally and empowering students and professionals with the skills for tomorrow’s workforce.
Jobs: 1