Quantiphi
Quantiphi cover picture
Founded: 2013
Type: Products & Services
Size: 1000-5000
Stage: Profitable

About

Quantiphi is an award-winning AI-first digital engineering company driven by the desire to reimagine and realize transformational opportunities at the heart of business. Since its inception in 2013, Quantiphi has solved the toughest and most complex business problems by combining deep industry experience, disciplined cloud and data-engineering practices, and cutting-edge artificial intelligence research to achieve accelerated and quantifiable business results.


Tech stack

Google Cloud Platform (GCP)
NodeJS (Node.js)
Python
Artificial Intelligence (AI)
MLOps

Company video


Candid answers by the company

What is the location preference of jobs?
Bengaluru, Mumbai, and Trivandrum

What does Quantiphi do as a company?
What makes Quantiphi different from other AI companies?

Photos

Company featured pictures

Connect with the team

Sameer Balpande

Company social profiles

Blog · LinkedIn · Twitter · Facebook

Jobs at Quantiphi

Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 7 yrs
Up to ₹30L / yr (varies)
Python
NodeJS (Node.js)
Java
RESTful APIs
Microservices

As a Backend Engineer, you will be a core member of the Platform Implementation Team, responsible for building the robust, scalable, and secure backend infrastructure for a multi-cloud enterprise Data & AI platform.


You will design and develop high-performance microservices, RESTful APIs, and event-driven architectures that serve as the backbone for enterprise-wide applications.

Working closely with Platform Engineers, Data Modelers, and UI teams, you will ensure seamless data flow between core business systems (CRM, ERP) and the platform, enabling the rollout of critical business services across multiple global Local Business Units (LBUs).



Backend Development

  • Design and develop scalable backend services and microservices
  • Build and maintain RESTful APIs for enterprise applications
  • Define and maintain API contracts using OpenAPI/Swagger

Platform & System Integration

  • Enable seamless integration between enterprise systems (CRM, ERP) and the platform
  • Support data flow across multiple global business units

Event-Driven Architecture

  • Implement asynchronous processing and event-driven systems
  • Work with message brokers and streaming platforms
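The publish/subscribe pattern behind these brokers can be sketched in a few lines. The broker class, method names, and topic below are invented for illustration; production systems would use GCP Pub/Sub, Kafka, or RabbitMQ clients instead.

```python
import queue
import threading

class InMemoryBroker:
    """Toy in-process broker illustrating publish/subscribe decoupling."""

    def __init__(self):
        self._topics = {}
        self._lock = threading.Lock()

    def subscribe(self, topic):
        # Each subscriber gets its own queue, so consumers are independent.
        q = queue.Queue()
        with self._lock:
            self._topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        # Fan the message out to every subscriber of the topic.
        with self._lock:
            subscribers = list(self._topics.get(topic, []))
        for q in subscribers:
            q.put(message)

broker = InMemoryBroker()
inbox = broker.subscribe("orders.created")
broker.publish("orders.created", {"order_id": 42})
print(inbox.get(timeout=1))  # {'order_id': 42}
```

The key property, mirrored from real brokers, is that publishers never reference consumers directly; adding a subscriber needs no change on the publishing side.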

Cross-Functional Collaboration

  • Collaborate with platform engineers, data modelers, and frontend teams
  • Contribute to architecture discussions and backend design decisions

Must-Have Skills

Experience

  • 5–7 years of hands-on experience in backend software engineering
  • Experience building enterprise-grade backend systems

Core Programming

Strong proficiency in at least one backend language:

  • Python
  • Node.js
  • Java

Strong understanding of:

  • Object-oriented programming (OOP)
  • Functional programming principles

API & Microservices

  • Extensive experience building RESTful APIs
  • Experience designing microservices architectures
  • Ability to define API contracts using OpenAPI / Swagger
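An API contract of the kind referred to above is just a structured document agreed on before implementation. The sketch below shows a minimal OpenAPI 3.0 contract as a Python dict, with an invented `/users/{user_id}` endpoint; real contracts are typically authored in YAML and checked in CI.

```python
# Minimal OpenAPI 3.0 contract for a hypothetical endpoint (names invented).
openapi_contract = {
    "openapi": "3.0.3",
    "info": {"title": "Platform API", "version": "1.0.0"},
    "paths": {
        "/users/{user_id}": {
            "get": {
                "summary": "Fetch a user by id",
                "parameters": [{
                    "name": "user_id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "The requested user"},
                    "404": {"description": "User not found"},
                },
            }
        }
    },
}

# The kind of contract check a CI step might run: every operation
# must document a success response.
for path, ops in openapi_contract["paths"].items():
    for verb, op in ops.items():
        assert "200" in op["responses"], f"{verb.upper()} {path} lacks a 200"
```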

Cloud Infrastructure

Hands-on experience with cloud platforms:

  • Google Cloud Platform (GCP)
  • Microsoft Azure

Examples of services:

  • Cloud Functions
  • Cloud Run
  • Azure App Services

Database Management

Experience with both Relational and NoSQL databases

Relational:

  • PostgreSQL
  • Cloud SQL

NoSQL:

  • Schema design
  • Complex querying
  • Performance optimization

Event-Driven Architecture

Experience with asynchronous processing and message brokers:

  • GCP Pub/Sub
  • Apache Kafka
  • RabbitMQ

Security & Authentication

Strong understanding of:

  • OAuth 2.0
  • JWT authentication
  • Role-Based Access Control (RBAC)
  • Data encryption
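To make the JWT requirement concrete, here is a from-scratch HS256 sign/verify sketch using only the standard library. It shows the mechanics (base64url header.payload.signature, constant-time comparison); production services should use a vetted library such as PyJWT rather than this.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # compare_digest avoids timing side channels.
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-123", "role": "admin"}, b"secret")
claims = verify_jwt(token, b"secret")
print(claims["role"])  # admin
```

The `role` claim recovered here is what an RBAC layer would then check against the permissions required by each endpoint.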

Software Engineering Best Practices

  • Writing clean, maintainable code
  • Version control using Git
  • Writing unit and integration tests
  • Familiarity with CI/CD pipelines
  • Containerization using Docker

Good-to-Have Skills

AI & LLM Integration

  • Experience integrating Generative AI models
  • Exposure to:
      • OpenAI
      • Vertex AI
      • LLM gateways
      • Retrieval-Augmented Generation (RAG)

Frontend Exposure

Basic familiarity with frontend frameworks such as:

  • React
  • Next.js
  • Angular

Understanding how backend APIs integrate with UI applications

Advanced Data Stores

Experience with:

  • Vector databases (Pinecone, Milvus)
  • Knowledge graphs

Domain Knowledge

  • Experience in Life Insurance or BFSI sector
  • Understanding of enterprise data governance and compliance standards


Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
10 - 15 yrs
Best in industry
MLOps
Google Cloud Platform (GCP)
Data Warehousing
ETL
Artificial Intelligence (AI)


We are looking for a highly experienced and technically strong AI Engineering Manager to lead and mentor a team of Machine Learning and AI Engineers. This role will drive the execution, delivery, and operational excellence of enterprise AI/ML products and platforms within the Life Insurance and Financial Services sector.

The role operates primarily in a GCP cloud environment and requires bridging architectural design with hands-on engineering execution. The candidate will manage engineering workloads, guide technical decisions, ensure high code quality, and drive project timelines for enterprise-grade AI solutions.

Key Responsibilities

Team Leadership & Project Management

  • Own end-to-end delivery of AI/ML projects including sprint planning, backlog grooming, workload management, and resource allocation.
  • Provide technical guidance and mentorship to AI/ML engineers, including code reviews and best practices for MLOps, software engineering, and cloud infrastructure.
  • Act as the primary technical point of contact for product managers, architects, and business stakeholders.
  • Define and enforce engineering standards for development, testing, CI/CD, and monitoring of AI services.
  • Recruit, onboard, mentor, and conduct performance reviews for the AI engineering team.
  • Collaborate with Data Engineering, DevOps, and other teams to integrate AI models into enterprise systems and data pipelines.
  • Identify and manage technical risks, dependencies, and delivery blockers to maintain project velocity.

Technical Delivery

  • Implement and operationalize MLOps pipelines for model training, versioning, deployment, monitoring, and explainability.
  • Guide teams in leveraging AI platform capabilities such as RAG pipelines, LLM gateways, and vector databases to build business use cases.
  • Ensure security, scalability, and performance of production AI services and underlying cloud infrastructure.

Must-Have Skills & Requirements

  • 8+ years of experience in Software Engineering, Machine Learning Engineering, or Data Science.
  • Minimum 3+ years in a team management or leadership role.
  • Proven experience managing and mentoring engineering teams.
  • Strong experience with project management methodologies (Scrum/Kanban) and tools such as Jira.
  • Hands-on experience with production-scale MLOps and GenAIOps implementations.
  • Deep expertise in cloud platforms, preferably GCP.
  • Strong understanding of modern data architecture including vector databases, data warehousing, and ETL/ELT pipelines.
  • Solid software engineering fundamentals including API design, system architecture, Git, and CI/CD or DevSecOps pipelines.
  • Experience with GenAI technologies such as LLM orchestration frameworks, prompt engineering, and RAG architectures.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Good-to-Have / Preferred Skills

  • Experience managing distributed or multi-geographical engineering teams.
  • Knowledge of regulatory requirements within the BFSI or insurance domain (data privacy, Responsible AI).
  • Azure cloud or multi-cloud project experience.
  • Experience with streaming data platforms and real-time AI processing.
  • Cloud certifications such as GCP Professional Cloud Architect or ML Engineer.


Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
7 - 10 yrs
Up to ₹40L / yr (varies)
Python
RESTful APIs
Microservices
MongoDB
pytest
+1 more

We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.


The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.


Key Responsibilities:


Backend Development

  • Design and maintain high-performance backend services using Python and FastAPI
  • Implement advanced FastAPI features such as dependency injection, middleware, and async programming
  • Write comprehensive unit tests using pytest
  • Design and maintain Pydantic schemas

High-Concurrency Systems

  • Implement asynchronous code for high-volume request processing
  • Apply concurrency patterns and atomic operations to ensure efficient system performance
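The asynchronous, high-volume processing called for above typically looks like this in Python: fan out many I/O-bound calls under a concurrency cap. `handle_request` is a hypothetical stand-in for a real downstream call (database, API, or LLM), and the limit of 100 is an illustrative choice.

```python
import asyncio

async def handle_request(request_id: int) -> dict:
    # Stand-in for an I/O-bound call; asyncio.sleep simulates latency.
    await asyncio.sleep(0.01)
    return {"id": request_id, "status": "ok"}

async def main(n: int) -> list:
    # A semaphore caps in-flight work -- the usual guard when fanning
    # out to rate-limited backends.
    sem = asyncio.Semaphore(100)

    async def guarded(i: int) -> dict:
        async with sem:
            return await handle_request(i)

    return await asyncio.gather(*(guarded(i) for i in range(n)))

results = asyncio.run(main(500))
print(len(results))  # 500
```

Because the work is I/O-bound, the 500 requests complete in a few batches of overlapping awaits rather than sequentially.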

Data & Storage

  • Optimize MongoDB operations
  • Implement Redis caching strategies (TTL, performance tuning, caching patterns)
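The TTL-based caching pattern mentioned above can be illustrated without a Redis server: the sketch below mimics the semantics of Redis `SETEX`/`GET` with an in-process dict and lazy eviction on read. Class and key names are invented for this sketch.

```python
import time

class TTLCache:
    """Dict-based cache with per-key expiry, mimicking Redis SETEX/GET."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds: float):
        # Store the value alongside its absolute expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value

cache = TTLCache()
cache.set("session:abc", {"user": "u1"}, ttl_seconds=0.05)
print(cache.get("session:abc"))  # {'user': 'u1'}
time.sleep(0.06)
print(cache.get("session:abc"))  # None
```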

Distributed Systems

  • Implement rate limiting, retry logic, failover mechanisms, and region routing
  • Build microservices and event-driven architectures
  • Work with EventHub, Blob Storage, and Databricks
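Retry logic of the kind listed above usually combines three ingredients: bounded attempts, exponential backoff, and jitter. A minimal decorator-based sketch (function names and delays invented):

```python
import random
import time

def retry(attempts: int = 3, base_delay: float = 0.05):
    """Decorator adding bounded retries with exponential backoff and jitter."""
    def wrap(fn):
        def inner(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    # Exponential backoff with jitter avoids thundering herds.
                    time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
        return inner
    return wrap

calls = {"n": 0}

@retry(attempts=3, base_delay=0.001)
def flaky():
    # Simulated transient failure: succeeds on the third call.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky())  # ok
```

In production this pattern is usually restricted to retryable errors (timeouts, 5xx) and paired with circuit breakers so failing dependencies are not hammered indefinitely.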

AI/ML Integration

  • Integrate OpenAI API, Gemini API, and Claude API
  • Manage LLM integrations using LiteLLM
  • Optimize AI service usage within the Azure ecosystem

Security

  • Implement JWT authentication
  • Manage API keys and encryption protocols
  • Implement PII masking and data security mechanisms

Collaboration

  • Work with cross-functional teams on architecture and system design
  • Contribute to engineering best practices and technical improvements
  • Mentor junior developers where required

Must-Have Skills & Requirements

Experience

  • 7+ years of hands-on Python backend development
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Experience building high-traffic, scalable systems

Core Technical Skills

Python

  • Advanced knowledge of asynchronous programming, concurrency, and atomic operations

FastAPI

  • Expert-level experience with dependency injection, middleware, and async code

Testing

  • Strong experience with pytest and Pydantic schemas

Databases

  • Hands-on experience with MongoDB and Redis
  • Strong understanding of caching patterns, TTL, and performance optimization

Distributed Systems

  • Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing

Microservices

  • Experience building microservices and event-driven systems
  • Exposure to EventHub, Blob Storage, and Databricks

Cloud

  • Strong experience working in Azure environments

AI Integration

  • Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM

Security

  • Implementation experience with JWT authentication, API keys, encryption, and PII masking

Soft Skills

  • Strong problem-solving and debugging skills
  • Excellent communication and collaboration
  • Ability to manage multiple priorities
  • Detail-oriented approach to code quality
  • Experience mentoring junior developers

Good-to-Have Skills

Containerization

  • Docker, Kubernetes (preferably within Azure)

DevOps

  • CI/CD pipelines and automated deployment

Monitoring & Observability

  • Experience with Grafana, distributed tracing, custom metrics

Industry Experience

  • Experience in Insurance, Financial Services, or regulated industries

Advanced AI/ML

  • Vector databases
  • Similarity search optimization
  • LangChain / LangSmith

Data Processing

  • Real-time data processing and event streaming

Database Expertise

  • PostgreSQL with vector extensions
  • Advanced Redis clustering

Multi-Cloud

  • Experience with AWS or GCP alongside Azure

Performance Optimization

  • Advanced caching strategies
  • Backend performance tuning


Quantiphi
Posted by Nikita Sinha
Mumbai, Trivandrum, Bengaluru (Bangalore)
3 - 6 yrs
Up to ₹30L / yr (varies)
Google Cloud Platform (GCP)
DevOps
CI/CD
Kubernetes
GitHub
+2 more

Role & Responsibilities

  • Develop and deliver automation software to build and improve platform functionality
  • Ensure reliability, availability, and manageability of applications and cloud platforms
  • Champion adoption of Infrastructure as Code (IaC) practices
  • Design and build self-service, self-healing, monitoring, and alerting platforms
  • Automate development and testing workflows through CI/CD pipelines (Git, Jenkins, SonarQube, Artifactory, Docker containers)
  • Build and manage container hosting platforms using Kubernetes

Requirements

  • Strong experience deploying and maintaining GCP cloud infrastructure
  • Well-versed in service-oriented and cloud-based architecture design patterns
  • Knowledge of cloud services including compute, storage, networking, messaging, and automation tools (e.g., CloudFormation/Terraform equivalents)
  • Experience with relational and NoSQL databases (Postgres, Cassandra)
  • Hands-on experience with automation/configuration tools (Puppet, Chef, Ansible, Terraform)

Additional Skills

  • Strong Linux system administration and troubleshooting skills
  • Programming/scripting exposure (Bash, Python, Core Java, or Scala)
  • CI/CD pipeline experience (Jenkins, Git, Maven, etc.)
  • Experience integrating solutions in multi-region environments
  • Familiarity with Agile/Scrum/DevOps methodologies
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Up to ₹35L / yr (varies)
Large Language Models (LLM) tuning
Deep Learning
Google Cloud Platform (GCP)
Google Vertex AI
Microsoft Azure
+2 more

Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.


Key Responsibilities:


Generative AI & Agentic Systems

  • Design and build GenAI applications (RAG, agentic workflows, multi-agent systems)
  • Develop intelligent systems with memory, planning, and reasoning capabilities
  • Implement prompt engineering, context optimization, and evaluation frameworks
  • Build observable and reliable multi-agent architectures

Traditional ML & Computer Vision

  • Develop ML pipelines (forecasting, recommendation, classification, regression)
  • Build production-grade computer vision solutions (document AI, image analysis)
  • Perform feature engineering, model optimization, and benchmarking

MLOps & Production Engineering

  • Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
  • Build scalable APIs, microservices, and data pipelines
  • Monitor models, detect drift, and implement A/B testing frameworks

Knowledge Solutions

  • Architect knowledge graphs and semantic search systems
  • Implement hybrid retrieval (vector + keyword search)
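Hybrid retrieval can be sketched as a weighted blend of two scores per document. Below, bag-of-words cosine similarity stands in for embedding similarity and simple term overlap stands in for keyword search; real systems substitute dense embeddings and BM25, but the blending logic is the same. Documents, weights, and the query are invented.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, docs: dict, alpha: float = 0.5):
    """Rank docs by alpha * 'vector' score + (1 - alpha) * keyword score."""
    q_terms = query.lower().split()
    q_vec = Counter(q_terms)
    scored = []
    for doc_id, text in docs.items():
        terms = text.lower().split()
        vec_score = cosine(q_vec, Counter(terms))          # dense-style signal
        kw_score = len(set(q_terms) & set(terms)) / len(set(q_terms))  # sparse signal
        scored.append((alpha * vec_score + (1 - alpha) * kw_score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

docs = {
    "d1": "claims processing workflow for life insurance policies",
    "d2": "quarterly revenue report and financial summary",
}
print(hybrid_search("life insurance claims", docs))  # ['d1', 'd2']
```

The `alpha` weight is the main tuning knob: higher values favor semantic matches, lower values favor exact terminology, which matters in domains with precise vocabulary.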

Client Collaboration

  • Present technical solutions to enterprise clients
  • Collaborate with architects, data engineers, and business teams

Required Skills & Experience

  • 3–6 years of hands-on ML Engineering experience
  • Strong Python and software engineering fundamentals
  • Experience shipping production ML systems on cloud (GCP preferred)
  • Experience across GenAI, Traditional ML, Computer Vision
  • MLOps experience and RAG-based systems

Preferred

  • GCP Professional ML Engineer certification
  • Knowledge graphs / semantic search experience
  • Experience in regulated industries (Healthcare / BFSI)
  • Open-source or technical publications
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Up to ₹45L / yr (varies)
MLOps
Python
Databricks
Microsoft Azure
Amazon Web Services (AWS)

We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.

This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.


Responsibilities

  • Design, develop, and implement machine learning models and algorithms to solve complex business problems.
  • Collaborate with data scientists to transition models from research and development into production-ready systems.
  • Build and maintain scalable data pipelines for ML model training and inference using Databricks.
  • Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
  • Deploy and manage ML models in production environments on Azure, leveraging services such as:
      • Azure Machine Learning
      • Azure Kubernetes Service (AKS)
      • Azure Functions
  • Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
  • Ensure the reliability, performance, and scalability of ML systems in production.
  • Monitor model performance, detect model drift, and implement retraining strategies.
  • Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
  • Document model architecture, data flows, and operational procedures.
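Drift monitoring, as mentioned in the responsibilities above, reduces to comparing a production feature window against its training-time baseline. The sketch below uses a standardized mean shift as a crude stand-in for the PSI or KS tests used by real drift monitors; the data and the threshold of 3 are invented for illustration.

```python
import math
import statistics

def drift_score(baseline, current) -> float:
    """Standardized shift of the current mean relative to the baseline."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    if sigma == 0:
        return 0.0 if statistics.mean(current) == mu else math.inf
    return abs(statistics.mean(current) - mu) / sigma

# Hypothetical feature samples: training-time baseline vs. two
# production windows, one stable and one shifted.
baseline = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50]
stable = [0.50, 0.49, 0.51]
shifted = [0.80, 0.82, 0.79]

print(round(drift_score(baseline, stable), 2))   # 0.0
print(drift_score(baseline, shifted) > 3)        # True -> flag for retraining review
```

A production monitor would run such a check per feature on a schedule and route breaches into the retraining workflow described above.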

Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.

Experience

  • Minimum 3+ years of professional experience as an ML Engineer or in a similar role.

Required Skills

  • Strong proficiency in Python for data manipulation, machine learning, and scripting.
  • Hands-on experience with machine learning frameworks such as:
      • Scikit-learn
      • TensorFlow
      • PyTorch
      • Keras
  • Demonstrated experience with MLflow for:
      • Experiment tracking
      • Model management
      • Model deployment
  • Proven experience working with Microsoft Azure cloud services, specifically:
      • Azure Machine Learning
      • Azure Databricks
      • Related compute and storage services
  • Solid experience with Databricks for:
      • Data processing
      • ETL pipelines
      • ML model development
  • Strong understanding of MLOps principles and practices, including:
      • CI/CD for ML
      • Model versioning
      • Model monitoring
      • Model retraining
  • Experience with containerization and orchestration technologies, including:
      • Docker
      • Kubernetes (especially AKS)
  • Familiarity with SQL and data warehousing concepts.
  • Experience working with large datasets and distributed computing frameworks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice-to-Have Skills

  • Experience with other cloud platforms (AWS or GCP).
  • Knowledge of big data technologies such as Apache Spark.
  • Experience with Azure DevOps for CI/CD pipelines.
  • Familiarity with real-time inference patterns and streaming data.
  • Understanding of Responsible AI principles, including fairness, explainability, and privacy.

Certifications (Preferred)

  • Microsoft Certified: Azure AI Engineer Associate
  • Databricks Certified Machine Learning Associate (or higher) 
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 10 yrs
Up to ₹40L / yr (varies)
Microsoft Azure
Databricks
Data Structures
Data Engineering

We are hiring an Associate Technical Architect with strong expertise in Azure-based data platforms to design scalable data lakes, data warehouses, and enterprise data pipelines, while working with global teams.


Key Responsibilities

  • Design and implement scalable data lake, data warehouse, and lakehouse architectures on Azure
  • Build resilient data pipelines using Azure services
  • Architect and optimize cloud-based data platforms
  • Improve large-scale data processing and query performance
  • Collaborate with engineering teams, QA, product managers, and stakeholders
  • Communicate technical roadmap, risks, and mitigation strategies


Must-Have Skills:


  • 6+ years of experience in Azure Data Engineering / Data Architecture

Azure Data Platform

  • Experience with Azure Data Factory
  • Hands-on with Azure Databricks and PySpark
  • Experience with Azure Data Lake Storage
  • Knowledge of Azure Synapse or Azure SQL for data warehousing

Programming & Data Skills

  • Strong programming skills in Python and PySpark
  • Advanced SQL with query optimization and performance tuning
  • Experience building ETL / ELT data pipelines

Data Architecture Knowledge

  • Understanding of MPP databases
  • Knowledge of partitioning, indexing, and performance optimization
  • Experience with data modeling (dimensional, normalized, lakehouse)

Cloud Fundamentals

  • Azure security, networking, scalability, and disaster recovery
  • Experience with on-premise to Azure migrations

Certification (Preferred)

  • Azure Data Engineer or Azure Solutions Architect certification

Good-to-Have Skills

  • Domain experience in FSI, Retail, or CPG
  • Exposure to data governance tools
  • Experience with BI tools such as Power BI or Tableau
  • Familiarity with Terraform, CI/CD pipelines, or Azure DevOps
  • Experience with NoSQL databases such as Cosmos DB or MongoDB

Soft Skills

  • Strong problem-solving and analytical thinking
  • Good communication and stakeholder management
  • Ability to translate technical concepts into business outcomes
  • Experience working with global or distributed teams
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 8 yrs
Up to ₹30L / yr (varies)
NodeJS (Node.js)
Python
Dialogflow
Rasa
Yellow.ai
+1 more

Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.

You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.


Key Responsibilities

  • Design, develop, test, debug, and maintain chatbot and virtual agent applications
  • Collaborate with business stakeholders to define and translate requirements into technical solutions
  • Analyze large volumes of conversational data to improve chatbot accuracy and performance
  • Develop automation workflows for data handling and refinement
  • Train and optimize chatbots using historical chat logs and user-generated content
  • Ensure solutions align with enterprise architecture and best practices
  • Document solutions, workflows, and technical designs clearly

Required Skills

  • Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
  • Experience with one or more AI/NLP platforms, such as:
      • Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
      • Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
  • Strong programming knowledge in Python, JavaScript, or Node.js
  • Experience training chatbots using historical conversations or large-scale text datasets
  • Practical knowledge of:
      • Formal syntax and semantics
      • Corpus analysis
      • Dialogue management
  • Strong written communication skills
  • Strong problem-solving ability and willingness to learn emerging technologies

Nice-to-Have Skills

  • Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
  • Experience building voice apps for Amazon Alexa or Google Home
  • Experience with Test-Driven Development (TDD) and Agile methodologies
  • Ability to design and implement end-to-end pipelines for AI-based conversational applications
  • Experience in text mining, hypothesis generation, and historical data analysis
  • Strong knowledge of regular expressions for data cleaning and preprocessing
  • Understanding of API integrations, SSO, and token-based authentication
  • Experience writing unit test cases as per project standards
  • Knowledge of HTTP, REST APIs, sockets, and web services
  • Ability to perform keyword and topic extraction from chat logs
  • Experience training and tuning topic modeling algorithms such as LDA and NMF
  • Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
  • Experience with NLP frameworks such as NLTK and spaCy
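As a small illustration of the regex-based cleaning mentioned above, the sketch below masks URLs and e-mail addresses and normalizes whitespace in a raw chat utterance before it would be fed to NLP training. The rule set, placeholders, and sample text are invented.

```python
import re

# Hypothetical cleanup pipeline for raw chat logs: ordered
# (pattern, replacement) pairs applied one after another.
CLEANUP_RULES = [
    (re.compile(r"https?://\S+"), "<URL>"),                # mask links
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # mask e-mail addresses
    (re.compile(r"\s+"), " "),                             # collapse whitespace
]

def clean_utterance(text: str) -> str:
    for pattern, replacement in CLEANUP_RULES:
        text = pattern.sub(replacement, text)
    return text.strip().lower()

raw = "Hi,  check   https://example.com/help or mail me at bob@example.com "
print(clean_utterance(raw))
# hi, check <url> or mail me at <email>
```

Masking PII-bearing tokens before training both protects user data and stops the model from memorizing one-off strings.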


Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Up to ₹30L / yr (varies)
Google Cloud Platform (GCP)
SQL
ETL
Data Warehousing
Data-flow analysis

We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.


Key Responsibilities

  • Collaborate with business users and stakeholders to understand business processes and data requirements
  • Design and implement dimensional data models, including fact and dimension tables
  • Identify, design, and implement data transformation and cleansing logic
  • Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
  • Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
  • Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
  • Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
  • Provide high-level design, research, and effort estimates for data integration initiatives
  • Provide production support for ETL processes to ensure data availability and SLA adherence
  • Analyze and resolve data pipeline and performance issues
  • Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
  • Translate business requirements into well-defined technical data specifications
  • Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
  • Define and document BI usage through use cases, prototypes, testing, and deployment
  • Support and enhance data governance and data quality processes
  • Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
  • Train and support business users, IT analysts, and developers
  • Lead and collaborate with teams spread across multiple locations
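The dimensional-modeling work described above (fact and dimension tables feeding aggregate queries) can be sketched end to end with in-memory SQLite; all table and column names here are invented for illustration.

```python
import sqlite3

# One dimension table plus one fact table keyed to it -- the minimal
# star-schema shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_name TEXT,
        region TEXT
    );
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount REAL
    );
""")
conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                 [(1, "Acme", "EMEA"), (2, "Globex", "APAC")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0)])

# The typical warehouse query shape: join fact to dimension, then aggregate
# along a dimension attribute.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer d USING (customer_key)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)  # [('APAC', 75.0), ('EMEA', 350.0)]
```

Keeping descriptive attributes in the dimension and measures in the fact table is what lets BI tools slice the same facts by any attribute without restructuring.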

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science or a related field, or equivalent work experience
  • 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
  • Strong expertise in data warehousing concepts, tools, and best practices
  • Excellent SQL skills
  • Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
  • Hands-on experience with Google Cloud Platform (GCP) services, including:
      • BigQuery
      • Cloud SQL
      • Cloud Composer (Airflow)
      • Dataflow
      • Dataproc
      • Cloud Functions
      • Google Cloud Storage (GCS)
  • Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
  • Strong experience integrating data using APIs, XML, JSON, and similar formats
  • In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
  • Solid understanding of SDLC, Agile, and Scrum methodologies
  • Strong problem-solving, multitasking, and organizational skills
  • Experience handling large-scale datasets and database design
  • Strong verbal and written communication skills
  • Experience leading teams across multiple locations

Good to Have

  • Experience with SSRS and SSIS
  • Exposure to AWS and/or Azure cloud platforms
  • Experience working with enterprise BI and analytics tools

Why Join Us

  • Opportunity to work on large-scale, enterprise data platforms
  • Exposure to modern cloud-native data engineering technologies
  • Collaborative environment with strong stakeholder interaction
  • Career growth and leadership opportunities
Quantiphi
Posted by Karishma Chakraborty

The recruiter has not been active on this job recently. You may apply but please expect a delayed response.

Remote, Mumbai, Bengaluru (Bangalore)
6 - 9 yrs
₹15L - ₹38L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+4 more

Company Profile

Quantiphi is an award-winning Applied AI and Big Data software and services company, driven by a deep desire to solve transformational problems at the heart of businesses. Our signature approach combines groundbreaking machine-learning research with disciplined cloud and data-engineering practices to create breakthrough impact at unprecedented speed.

Some company highlights:

  • Quantiphi has seen 2.5x growth YoY since its inception in 2013.
  • Winner of the "Machine Learning Partner of the Year" award from Google for two consecutive years - 2017 and 2018.
  • Winner of the "Social Impact Partner of the Year" award from Google for 2019.
  • Headquartered in Boston, with 700+ data science professionals across different offices.

 

For more details, visit our website (http://www.quantiphi.com/) or our LinkedIn page (https://www.linkedin.com/company/quantiphi/).

 

Job Description

 

Role: Associate Tech Architect / Tech Architect (ReactJS + Python + AWS)
Experience Level: 7-13 Years
Work location: Mumbai & Bangalore

 

We are looking for an experienced full-stack developer (ReactJS and Python) who can help create dynamic software applications for our clients. In this role, you will gather requirements from clients, write and test scalable code accordingly, and develop front-end and back-end components.

 

Technologies worked on:

ReactJS, Python, AWS

 

Requirement Description:

 

  • Full-stack developer with experience in ReactJS, Python, API Gateway, Fargate, and ECS
  • Experienced with tools such as Git, Maven, and JFrog
  • Solid understanding of object-oriented programming (OOP)
  • Experienced in unit testing and integration testing, with good experience in Agile-based development
  • Expertise in developing enterprise-level web applications and REST and SOAP APIs using microservices, with demonstrable production-scale experience
  • Strong design and programming skills using JSON, web services, XML, XSLT, and PL/SQL in Unix and Windows environments
  • Strong background working in Linux/UNIX environments, with solid shell-scripting experience
  • Working knowledge of SQL or NoSQL databases
  • Ability to understand architecture requirements and ensure effective design, development, validation, and support activities
  • Understanding of core AWS services, their uses, and basic AWS architecture best practices
  • Proficiency in developing, deploying, and debugging cloud-based applications on AWS
  • Ability to use AWS service APIs, the AWS CLI, and SDKs to write applications
  • Ability to identify key features of AWS services
  • Ability to identify bottlenecks and bugs, and to recommend solutions by weighing the advantages and disadvantages of custom development
  • Contribute to team meetings, troubleshooting development and production problems across multiple environments and operating platforms
  • Collaborate and communicate effectively within distributed project teams
  • Continuously discover, evaluate, and implement new technologies to maximize development efficiency
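To illustrate the API Gateway requirement above, here is a minimal sketch of a Lambda-style proxy handler in Python. The routes (`/health`, `/echo`) and payload shapes are illustrative assumptions, not part of the role description; a real service would add validation, logging, and error handling.

```python
import json


def handler(event, context=None):
    """Minimal Lambda-style handler for an API Gateway proxy event.

    Routes GET /health to a static response, echoes POST /echo bodies,
    and returns 404 for anything else.
    """
    method = event.get("httpMethod", "GET")
    path = event.get("path", "/")

    if method == "GET" and path == "/health":
        status, body = 200, {"status": "ok"}
    elif method == "POST" and path == "/echo":
        status, body = 200, {"received": json.loads(event.get("body") or "{}")}
    else:
        status, body = 404, {"error": "not found"}

    # API Gateway's proxy integration expects statusCode, headers,
    # and a string body in the returned dict.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Keeping the routing in a plain function like this makes the handler unit-testable without deploying to AWS, which fits the unit- and integration-testing expectations listed above.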

 


Similar companies


REConnect Energy

https://reconnectenergy.com
Founded: 2010
Type: Products & Services
Size: 100-1000
Stage: Raised funding

About the company

REConnect Energy is a leading digital energy platform startup focused on climate resilience solutions. Headquartered in Bangalore, India, with offices in London, Gurgaon, and Mumbai, the company has established itself as India's largest tech-enabled service provider in predictive analytics and demand-supply aggregation for the energy sector. REConnect Energy develops AI and Grid Automation software products for renewables and energy utilities, with a core focus on efficient asset and grid management, climate risk mitigation, and real-time asset visibility.


REConnect Energy offers a comprehensive range of services, including predictive analytics for electric utilities, renewable energy forecasting and grid integration, machine learning and AI for energy markets, and an OTC marketplace for clean energy. The company also specializes in environmental markets, renewable energy policies, and energy dispatch and aggregation. Positioned at the forefront of the energy transition, REConnect Energy addresses complex challenges in climate data and analytics, driving innovation in the renewable energy sector.

Jobs

3


NonStop io Technologies Pvt Ltd

https://nonstopio.com
Founded: 2015
Type: Products & Services
Size: 20-100
Stage: Profitable

About the company

NonStop io Technologies Pvt. Ltd., established in 2015, is a software product development company. We invest in our clients' vision, build the technology, and make sure the end product aligns with their business goals over the short and long term.

Jobs

17


AerTrip India Limited

https://aertrip.com
Founded: 2012
Type: Product
Size: 20-100
Stage: Bootstrapped

About the company

Founded in 2012, Aertrip aspires to become the number-one travel portal in the world by providing a convenient and enjoyable booking experience at the cheapest prices. We are looking to hire the best PHP developers, HTML/front-end developers, software analysts, and UI/UX designers to create a truly magical booking experience for our customers. Having studied over 90 travel websites and apps, we have designed an interface that offers the best and most convenient way to book travel, and we are now building a team of extraordinary talent to complete this task.

Jobs

3


OneFin

https://onefin.in
Founded: 2017
Type: Product
Size: 20-100
Stage: Profitable

About the company

OneFin is a technology credit platform - we are building the operating system for financial services. We have built modular, plug-and-play APIs to help our partners (NBFCs, financial institutions, fintechs, startups, etc.) underwrite and collect loans from end customers. In a highly credit-underserved country with rapidly increasing smartphone adoption, we enable any company to become a fintech through our suite of APIs and regulatory layer, and we help build customized financial products for "Middle India" and its 360 million customers - for consumption-based use cases, upskilling and education financing, medical financing, and more.

Jobs

6


Peliqan

https://peliqan.io
Founded: 2022
Type: Product
Size: 0-20
Stage: Bootstrapped

About the company

Data collaboration, reinvented. We are on a mission to reinvent how teams collaborate on data. Currently in stealth mode. Want to join our team? Let's have a coffee!

Jobs

2


Wama Technology

https://wamatechnology.com
Founded: 2015
Type: Services
Size: 20-100
Stage: Profitable

About the company

Wama Technology integrates state-of-the-art technology to promote corporate success, providing a comprehensive "one-stop solution" for all digital demands, spanning cloud services, artificial intelligence, machine learning, and mobile and web app development. Wama offers customized solutions that enable businesses to prosper in the digital era. Our team prioritizes innovation, user experience, and client satisfaction to deliver digital transformation, helping companies improve user engagement, automate processes, and deliver new ideas through the strategic use of technology.

Jobs

8


OpenIAM

https://openiam.com
Founded: 2008
Type: Product
Size: 20-100
Stage: Bootstrapped

About the company

OpenIAM is a pioneering Identity and Access Management (IAM) solutions provider that has been transforming enterprise security since 2008. Based in New York, this self-funded and profitable company has established itself as an innovator in the IAM space, being the first to introduce a converged architecture stack and fully containerized suite for cloud environments. With a global presence and partnerships with major systems integrators like Thales and Indra, OpenIAM serves mid to large enterprises across various sectors including financial services, healthcare, education, and manufacturing.

Jobs

1


Blitzy

https://blitzy.com
Founded: 2024
Type: Product
Size: 0-20
Stage: Raised funding

About the company

Blitzy is a Boston, MA based Generative AI Start-up with an established office in Pune, India. We are on a mission to automate custom software creation to unlock the next industrial revolution. We're backed by multiple tier 1 investors, have success as founders at the last start-up, and dozens of Generative AI patents to our names.


Our Culture

Our Co-Founder and CTO is a serial Gen AI inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years, where he was promoted 5 times before being transferred to the NVIDIA headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he attended Harvard for a dual Master's in Engineering and an MBA from HBS. Our other Co-Founder/CEO is a successful serial entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mindset, and believe in a low-ego, high-output approach.


Funding Journey

In September 2024, Blitzy secured $4.4M in seed funding from prominent investors including Link Ventures, Asymmetric Capital Partners, Flybridge, and four other strategic investors, demonstrating strong market confidence in their autonomous software development platform.


Our Values

  1. We move Blitzy Fast: Time is both our company's and our client's most precious asset. We move fast and fearlessly to innovate internally and deliver exceptional software externally to our clients.
  2. We have a Championship Mindset: We operate like a professional sports team. We win as a team by holding ourselves and each other to high standards, collaborating in-person, and remaining focused on the mission.
  3. We have a Passion for Invention: We are inventors at heart. We value starting with best practices and open source, but we are pushing the frontier of what is possible.
  4. We Work for the Customer: We focus on delivering outsized value to the customers we work with and expanding those relationships to deep, meaningful partnerships.


What We Ask of Candidates

Please ask yourself if you are ready for a challenge before applying. Even in optimal conditions, start-ups are hard and are always a lot of work. What you do week to week will change. If this feels exciting rather than concerning, that's a good sign.

Jobs

2


Applix

https://applix.ai
Founded: 2022
Type: Product
Size: 20-100
Stage: Bootstrapped

About the company

Applix is building an AI-native Manufacturing Operating System (mOS) designed to drive Triple Zero performance - Zero Defects, Zero Delays, Zero Waste.


The platform unifies scheduling, root cause analysis, digital work instructions, and supply chain visibility into one intelligent system that helps factories operate in real time, not in hindsight.


Headquartered in Austin with presence in Chicago and Hyderabad, Applix partners with global manufacturers to modernize shop-floor execution through applied AI.


About the Team

Applix brings together operators, engineers, and AI specialists with deep manufacturing and supply chain expertise. The team works closely with enterprise customers, deploying practical, factory-ready solutions that deliver measurable operational impact from day one.


Milestones

  • Founded in 2022
  • Built the industry’s first AI-native Manufacturing Operating System (mOS)
  • Partnering with leading global manufacturers
  • Growing industry presence with 4,000+ LinkedIn followers

Jobs

4


Verse

https://versehq.ai
Founded: 2026
Type: Product
Size: 0-20
Stage: Bootstrapped

About the company

Jobs

1
