Python Jobs in Bangalore (Bengaluru)

50+ Python Jobs in Bangalore (Bengaluru)

Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Best in industry
Python
SQL
ETL
Google Cloud Platform (GCP)
Microsoft Azure

We are seeking a skilled Data Engineer to join the AI Platform Capabilities team supporting the UDP Uplift program.

In this role, you will design, build, and test standardized data and AI platform capabilities across a multi-cloud environment (Azure & GCP).

You will collaborate closely with AI use case teams to develop:

  • Scalable data pipelines
  • Reusable data products
  • Foundational data infrastructure

Your work will support advanced AI solutions such as:

  • GenAI
  • RAG (Retrieval-Augmented Generation)
  • Document Intelligence

Key Responsibilities

  • Design and develop scalable ETL/ELT pipelines for AI workloads
  • Build and optimize data pipelines for structured & unstructured data
  • Enable context processing & vector store integrations
  • Support streaming data workflows and batch processing
  • Ensure adherence to enterprise data models, governance, and security standards
  • Collaborate with DataOps, MLOps, Security, and business teams (LBUs)
  • Contribute to data lifecycle management for AI platforms
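
The pipeline work described above can be pictured as a small extract-transform-load sketch. All dataset names and fields here are hypothetical illustrations, not part of the job specification:

```python
# Minimal ETL sketch: extract rows, normalize them, load into a target.
# Source, field names, and the in-memory "warehouse" are illustrative only.

def extract(source: list) -> list:
    """Pull raw records from an in-memory source (stand-in for a real store)."""
    return [row for row in source if row is not None]

def transform(rows: list) -> list:
    """Normalize field names and drop records missing a required key."""
    out = []
    for row in rows:
        if "id" not in row:
            continue  # enforce a minimal schema before loading
        out.append({"id": row["id"], "amount": float(row.get("amount", 0))})
    return out

def load(rows: list, target: list) -> int:
    """Append transformed rows to the target; return the number loaded."""
    target.extend(rows)
    return len(rows)

source = [{"id": 1, "amount": "10.5"}, {"amount": "3"}, {"id": 2}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
```

A production pipeline would swap the in-memory lists for cloud storage and a warehouse table, but the staged shape stays the same.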

Required Skills

  • 5–7 years of hands-on experience in Data Engineering
  • Strong expertise in Python and advanced SQL
  • Experience with GCP and/or Azure cloud-native data services
  • Hands-on experience with PySpark / Spark SQL
  • Experience building data pipelines for ML/AI workloads
  • Understanding of CI/CD, Git, and Agile methodologies
  • Knowledge of data quality, governance, and security practices
  • Strong collaboration and stakeholder management skills

Nice-to-Have Skills

  • Experience with Vector Databases / Vector Stores (for RAG pipelines)
  • Familiarity with MLOps / GenAIOps concepts (feature stores, model registries, prompt management)
  • Exposure to Knowledge Graphs / Context Stores / Document Intelligence workflows
  • Experience with DBT (Data Build Tool)
  • Knowledge of Infrastructure-as-Code (Terraform)
  • Experience in multi-cloud deployments (Azure + GCP)
  • Familiarity with event-driven systems (Kafka, Pub/Sub) & API integrations
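
For the vector-store items above, retrieval in a RAG pipeline typically reduces to nearest-neighbour search over embeddings. A toy cosine-similarity sketch (vectors and document ids are hand-made for illustration; a real system would use a vector database and learned embeddings):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query: list, store: dict, k: int = 1) -> list:
    """Return ids of the k stored vectors most similar to the query."""
    ranked = sorted(store, key=lambda doc_id: cosine(query, store[doc_id]),
                    reverse=True)
    return ranked[:k]

# Hypothetical 3-dimensional "embeddings" keyed by document id.
store = {
    "policy-doc": [1.0, 0.0, 0.1],
    "claims-doc": [0.0, 1.0, 0.2],
}
best = top_k([0.9, 0.1, 0.0], store, k=1)
```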

Ideal Candidate Profile

  • Strong data engineering foundation with AI/ML exposure
  • Experience working in multi-cloud environments
  • Ability to build production-grade, scalable data systems
  • Comfortable working in cross-functional, fast-paced environments
Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 7 yrs
Upto ₹30L / yr (Varies)
Python
Software Testing (QA)
Test Automation (QA)
Large Language Models (LLM)
API

We are looking for a QA Engineer with hands-on experience in testing Generative AI systems, LLMs, and RAG pipelines. This role goes beyond traditional QA and focuses on evaluating non-deterministic AI outputs, testing agentic workflows, and ensuring AI safety, accuracy, and reliability in enterprise-grade AI services.


You will work on API-driven AI services such as Intelligent Document Processing and AI Gateways, ensuring they meet enterprise standards before deployment.


Key Responsibilities

  • Test and validate Generative AI applications, LLMs, and RAG-based systems
  • Evaluate AI outputs for accuracy, groundedness, coherence, and hallucination
  • Design and execute test strategies for multi-step agentic workflows
  • Perform API and integration testing for AI services
  • Build automated test pipelines using Python
  • Create synthetic datasets for testing AI systems
  • Conduct adversarial testing (prompt injection, safety, guardrails)
  • Integrate AI testing into CI/CD pipelines

Must-Have Skills

  • 5–7 years of experience in QA / Test Automation
  • Hands-on experience testing Generative AI / LLM-based applications
  • Strong programming skills in Python
  • Experience with RAG pipelines
  • Knowledge of LLM evaluation frameworks (RAGAS, TruLens, LangSmith or similar)
  • Strong experience in API testing (Postman, REST Assured, etc.)
  • Experience testing multi-agent workflows / agentic systems
  • Understanding of hallucination, prompt injection, and AI safety concepts
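
One simple way to approximate the groundedness checks mentioned above is token overlap between an answer and its retrieved context. Real evaluation frameworks (RAGAS, TruLens, LangSmith) are far more sophisticated, so treat this as an illustrative baseline only:

```python
def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.
    A crude proxy: low scores suggest a possible hallucination."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

# Hypothetical retrieved context and model answers.
context = "the policy covers fire damage up to 50000"
grounded = groundedness("policy covers fire damage", context)
hallucinated = groundedness("policy covers alien invasion", context)
```

In an automated test pipeline, assertions on scores like these can gate a release the same way unit tests gate a build.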

Good-to-Have Skills

  • Experience with GCP (Vertex AI) or Azure OpenAI
  • SQL / NoSQL knowledge for data validation
  • Experience in BFSI / Insurance domain
  • Performance testing of APIs and AI systems

Additional Information

  • Candidates without hands-on experience in testing Generative AI / LLM systems will not be considered
  • Immediate to 45 days notice period preferred 
Bengaluru (Bangalore)
2 - 5 yrs
₹20.4L - ₹24L / yr
Python
API
SQL
Systems design
Software deployment

Location: Bangalore

Experience: 2–5 years

Type: Full-time | On-site

Open Roles: 2

Start: Immediate

Why this role exists

Most systems work at a low scale.

Very few survive real production load, complex workflows, and enterprise edge cases.

We are building a platform that must:

  • Scale from 500K → 20M+ interactions/month
  • Handle complex insurance workflows reliably
  • Become easier to deploy as it grows, not harder

This role exists to build the backend foundation that makes this possible.

What you’ll do

You will not just write services.

You will design and own core platform systems.

1. Scale the platform without breaking architecture

  • Scale from 50K → 2M+ interactions/month
  • Ensure:
      • High availability
      • Low latency
      • Fault tolerance
  • Avoid large rewrites — build systems that evolve cleanly

2. Build the workflow automation (WA) engine

  • Design a flexible system with:
      • States
      • Stages
      • Cohorts
      • Dynamic workflows
  • Ensure workflows:
      • Handle edge cases reliably
      • Can be configured easily
  • Move from hardcoded flows → a configurable execution engine
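
The move from hardcoded flows to a configurable engine usually starts with a transition table keyed by state and event, so new workflows are launched by editing data rather than code. A minimal sketch (the states and events are invented for illustration, not taken from the platform):

```python
# Configurable state machine: transitions live in data, not in code.
TRANSITIONS = {
    ("lead", "kyc_done"): "policy_issued",
    ("policy_issued", "claim_filed"): "claim_open",
    ("claim_open", "claim_settled"): "closed",
}

def advance(state: str, event: str) -> str:
    """Apply an event; unknown (state, event) pairs are rejected as edge cases."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state!r} on {event!r}")
    return TRANSITIONS[key]

state = "lead"
for event in ["kyc_done", "claim_filed", "claim_settled"]:
    state = advance(state, event)
```

A production engine would add persistence, cohort conditions, and retries, but the data-driven transition table is the core idea.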

3. Build the insurance-specific data layer

  • Design data models for:
      • Policy states
      • Claim workflows
      • Consent tracking
  • Ensure the system works across:
      • Multiple insurers
      • Multiple use cases
  • Build a platform-first data layer, not use-case-specific hacks

4. Make deployment and setup simple

  • Ensure workflows and data models are:
      • Easy to configure
      • Easy to launch
  • Reduce friction for:
      • Product teams
      • Deployment teams

5. Create a compounding data advantage

  • Build a data layer that:
      • Improves with every deployment
      • Captures structured signals
  • Ensure data becomes a long-term edge, not just storage

6. Own production reliability

  • Participate in an on-call rotation across 3 engineers
  • Ensure:
      • Incidents are handled quickly
      • Root causes are fixed permanently
  • Build systems where reliability is shared, not individual

What success looks like

  • Platform scales to 2M+ interactions/month smoothly
  • Workflow engine supports complex, dynamic use cases
  • Data layer enables fast deployment across accounts
  • Edge cases are handled without constant firefighting
  • System becomes easier to use as it grows
  • Production issues are rare and predictable

Who you are

  • You have 2–5 years of backend engineering experience
  • You have built:
      • Scalable systems
      • Distributed services
  • You think in:
      • Systems
      • Data models
      • Trade-offs
  • You are comfortable owning:
      • Architecture
      • Production systems

What will make you stand out

  • Experience building:
      • Workflow engines
      • State machines
      • Data-heavy platforms
  • Strong understanding of:
      • System design
      • Distributed systems
      • Failure handling
  • Experience working in high-scale production environments

Why join

  • You will build the core backend of an AI platform
  • Your work directly impacts:
      • Scale
      • Reliability
      • Product capability
  • You will design systems that move from use-case-specific → platform-level infrastructure

What this role is not

  • Not just API development
  • Not limited to feature-level work
  • Not disconnected from production realities

What this role is

  • A system architect
  • A builder of scalable platforms
  • A driver of long-term technical advantage

One question to self-evaluate

Can you design backend systems that scale, handle edge cases, and become easier to use as they grow?


Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Selenium
Java
Python
Test Automation (QA)
Mobile App Testing (QA)

Role Overview


As a Senior QA Engineer (Automation), you will drive product quality across all stages of development and deployment. You’ll take complete ownership of defining QA strategy, implementing robust automation frameworks, and ensuring every release meets our high standards of reliability, performance, and user delight.


This role is ideal for someone who thrives in a fast-paced startup environment, loves solving problems, and is passionate about building scalable and flawless user experiences.


Key Responsibilities

Define & Execute QA Strategy:

Develop and implement test strategies covering functional, regression, integration, and exploratory testing.


Automation Leadership:

Build and maintain scalable automation frameworks integrated into CI/CD pipelines to improve speed, reliability, and test coverage.


Collaborate Early:

Partner closely with Product and Engineering teams to ensure testable requirements and early QA involvement in the development cycle.


Release Readiness:

Own end-to-end release validation, including regression testing, defect triage, and final sign-off on product quality.


Quality Metrics & Reporting:

Define, track, and communicate key QA metrics (defect leakage, build health, test coverage) to drive data-backed improvements.


Performance & Security Testing:

Conduct basic performance and security validation to ensure system robustness.


Mentorship & Best Practices:

Guide junior QA engineers, promoting test design excellence, automation best practices, and continuous improvement.


Process Optimization:

Continuously enhance QA processes through retrospectives, automation expansion, and shift-left testing principles.


Documentation:

Maintain comprehensive documentation of test cases, strategies, bug reports, and quality incident postmortems.


What We’re Looking For

  • 5–10 years of QA experience in product-based startups, ideally in B2C environments.
  • Proven expertise in test automation (e.g., Selenium, Appium, Cypress, Playwright, etc.).
  • Strong understanding of CI/CD pipelines, API testing, and test design principles.
  • Hands-on experience with manual and exploratory testing.
  • Ability to handle multiple projects independently and drive them to completion.
  • High sense of ownership, accountability, and attention to detail.
  • Excellent communication and collaboration skills.
  • Willingness to work from the office (HSR Layout, Bangalore).


Why Join Us

  • Opportunity to impact millions of users in India’s devotional and spiritual space.
  • Work with a talented, passionate, and mission-driven team.
  • High ownership role with end-to-end accountability.
  • Fast-paced, collaborative, and growth-oriented culture.


Build seamless, trusted experiences that bring faith and technology together.



Optimo Capital

Posted by Ajinkya Pokharkar
Bengaluru (Bangalore)
2 - 4 yrs
₹5.5L - ₹12L / yr
Python
React.js
JavaScript
RESTful APIs
PostgreSQL

About us:

Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).

Our mission is to serve the underserved MSME businesses in India with their credit needs. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap by employing a phygital model (physical branches + digital decision-making).

As a technology and data-first company, tech and data enthusiasts play a crucial role in building the infrastructure at Optimo, and help the company thrive.


What we offer:

Join our dynamic startup team as a Full Stack Developer and play a crucial role in web application & API developments, customer journeys, tech integrations, building robust credit risk and underwriting decision engines, cloud infrastructure, and more.

This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in technology, software, system architecture, and other design aspects bring out the best in you and help us build the best for the company.

This environment will not only offer you a steep learning curve but also allow you to experience the direct impact of your technological contributions. In addition, we offer industry-standard compensation.


What we look for:

We are looking for individuals with strong proficiency in Python, React, and Django. Any experience in a startup, front-end/back-end development, tech-integrations, or open-source contributions will be highly valued.

We focus not only on your skills but also on your attitude and your hunger to learn, grow, lead, and thrive—both as an individual and as part of a team. We encourage taking on challenges, learning new technologies, understanding, building, and implementing them within a short period of time. Your willingness to put in the extra effort to build the best systems will be highly appreciated.


Skills:

Excellent programming proficiency, with the ability to write clean, robust, production-level code. Experience in designing, developing, and maintaining web apps and rule engines is required. At least one year of experience as a developer in any engineering or software-based role is required.


1) Frontend Development

  • JavaScript: Strong proficiency in JavaScript, including ES6+ features
  • React: Experience building complex user interfaces using React and its ecosystem (e.g., Redux, Context API)
  • HTML/CSS: Solid understanding of HTML5 and CSS3 for creating responsive and accessible web pages


2) Backend Development

  • Python: Proficiency in Python for server-side development
  • Django: Working knowledge of Django and Django Rest Framework
  • Flask (or FastAPI): Experience building RESTful APIs using Flask or FastAPI is a plus


3) REST APIs: A strong understanding of APIs is required, along with prior experience in API development or integration. Writing REST APIs from scratch is highly desirable.


4) Databases: A basic understanding of both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases is required. Basic knowledge of database management, optimization, and query design is expected.


5) Git: Proficiency in Git is essential, with experience in branching, merging, pull requests, and conflict resolution. Experience in collaborative projects using Git is highly valued.


6) Good to have: 

  • Basic understanding of data pipelines/ETLs, dashboarding, and AWS is beneficial but not required.
  • Experience building WhatsApp chat/flow journeys, working with maps, and creating map data layers (e.g., Google Maps API, Mapbox) is highly valued (not mandatory)


What you'll be working on:

  1. Design and build systems focused on creating straight-through processes for lending (specifically property loans), from customer onboarding to disbursement, with an emphasis on accurate and efficient credit and risk assessment.
  2. Take projects from ideation to production, including web applications, rule engines, third-party API integrations, and other technology developments.
  3. Take initiative and ownership of engineering projects, ensuring a seamless user experience.
  4. Manage and coordinate the cloud infrastructure and application setup, including source code repositories, CI/CD pipelines, servers, and deployments.


Other Requirements:

  1. Availability for full-time work in Bangalore. Advantage for immediate joiners.
  2. Strong passion for technology and problem-solving.
  3. Ability to translate requirements into intuitive interfaces is highly appreciated.
  4. At least 1 year of industry experience in a technical role specifically as a developer is a must.
  5. Self-motivated and capable of working both independently and collaboratively.



If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.


Metron Security Private Limited
Posted by Chanchal Kale
Pune, Bengaluru (Bangalore)
2.5 - 6 yrs
₹3L - ₹10L / yr
Node.js
Go (Golang)
Python
Data Structures
CI/CD

Job Summary:


We are looking for a highly motivated and skilled Software Engineer to join our team.

This role requires a strong understanding of the software development lifecycle, proficiency in coding, and excellent communication skills.

The ideal candidate will be responsible for production monitoring, resolving minor technical issues, collecting client information, providing effective client interactions, and supporting our development team in resolving challenges.



Key Responsibilities:


Client Interaction: Serve as the primary point of contact for client queries, provide excellent communication, and ensure timely issue resolution.

Issue Resolution: Troubleshoot and resolve minor issues related to software applications in a timely manner.

Information Collection: Gather detailed technical information from clients, understand the problem context, and relay the information to the development leads for further action.

Collaboration: Work closely with development leads and cross-functional teams to provide timely support and resolution for customer issues.

Documentation: Document client issues, actions taken, and resolutions for future reference and continuous improvement.

Software Development Lifecycle: Be involved in maintaining, supporting, and optimizing software through its lifecycle, including bug fixes and enhancements.

Automating Redundant Support Tasks (good to have): Automate redundant, repetitive support tasks.

Required Skills and Qualifications:



Mandatory Skills:


Expertise in at least one object-oriented programming language (Python, Java, C#, C++) or a modern JavaScript stack (React.js, Node.js).

Good knowledge of data structures and their correct usage.

Open to learn any new software development skill if needed for the project.

Alignment and utilization of the core enterprise technology stacks and integration capabilities throughout the transition states.

Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.

Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.

Good understanding of the implications of design and technology choices.

Experience architecting & estimating deep technical custom solutions & integrations.



Added advantage:


You have developed software using web technologies.

You have handled a project from start to end.

You have worked on an Agile development project and have experience of writing and estimating User Stories.

Communication Skills: Excellent verbal and written communication skills, with the ability to clearly explain technical issues to non-technical clients.

Client-Facing Experience: Strong ability to interact with clients, gather necessary information, and ensure a high level of customer satisfaction.

Problem-Solving: Quick-thinking and proactive in resolving minor issues, with a focus on providing excellent user experience.

Team Collaboration: Ability to collaborate with development leads, engineering teams, and other stakeholders to escalate complex issues or gather additional technical support when required.



Preferred Skills:


Familiarity with Cloud Platforms and Cyber Security tools: Knowledge of cloud computing platforms and services (AWS, Azure, Google Cloud) and Cortex XSOAR, SIEM, SOAR, XDR tools is a plus.

Automation and Scripting: Experience with automating processes or writing scripts to support issue resolution is an advantage.



TalentXO
Bengaluru (Bangalore)
4 - 8 yrs
₹27L - ₹30L / yr
Camunda Developer
Python
Backend Development
Microservices
REST API

Role & Responsibilities

We are looking for a hands-on Camunda Developer with strong experience in workflow orchestration and backend development. The ideal candidate should be able to design, build, and optimize end-to-end business processes using Camunda (preferably Camunda 8) and work closely with engineering and business teams to implement scalable and resilient workflows.

Key Responsibilities:

  • Translate business requirements into BPMN workflows using Camunda (preferably Camunda 8)
  • Design and implement end-to-end process orchestration across systems
  • Build and manage service integrations (REST APIs, event-driven systems)
  • Develop and maintain Zeebe workers / microservices (Python)
  • Collaborate with stakeholders to refine workflows and handle edge cases
  • Implement error handling, retries, and compensation mechanisms
  • Analyse and improve workflows for scalability, reliability, and performance
  • Ensure data consistency and idempotent process execution
  • Work with cross-functional teams including data and analytics for process observability
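
Idempotent process execution, mentioned above, is commonly enforced with a processed-key store so that a retried worker task cannot apply its side effect twice. A minimal in-memory sketch (job keys and the payment example are hypothetical; a real Zeebe worker would persist the key in an external store):

```python
# Idempotency sketch: remember which job keys have already been handled,
# so broker redeliveries become no-ops instead of double charges.
processed = set()
balance = {"total": 0}

def handle_payment(job_key: str, amount: int) -> bool:
    """Apply a payment once; retries with the same job_key are skipped."""
    if job_key in processed:
        return False  # duplicate delivery: skip the side effect
    processed.add(job_key)
    balance["total"] += amount
    return True

first = handle_payment("job-42", 100)
retry = handle_payment("job-42", 100)  # e.g. broker redelivers after a timeout
```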

Ideal Candidate

  • Strong Senior Camunda Developer / Workflow Orchestration Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of hands-on experience in backend development and workflow systems, demonstrable through production-grade work on business process automation or backend service development.
  • Mandatory (Experience 2) – Must have strong hands-on experience with Camunda and BPMN 2.0, including designing, building, and deploying end-to-end business process workflows in production.
  • Mandatory (Experience 3) – Must have hands-on experience with Zeebe workers and the Camunda 8 stack, built and maintained as part of real orchestration systems.
  • Mandatory (Experience 4) – Must have strong production-level coding skills in Python, used for building and maintaining Zeebe workers and microservices.
  • Mandatory (Experience 5) – Must have experience designing and working within microservices architecture and distributed systems, with clear understanding of service decomposition, inter-service communication, and distributed system failure modes.
  • Mandatory (Experience 6) – Must have hands-on experience building and consuming REST APIs and working with event-driven systems (message brokers, pub/sub, event streams).
  • Mandatory (Skills) – Must have strong debugging and problem-solving skills in production workflow environments, with specific examples of resolving complex issues such as stuck processes, race conditions, or data inconsistency bugs.
  • Preferred (Experience 1) – Exposure to cloud platforms (AWS / GCP / Azure) and experience with data platforms (e.g., Snowflake).
  • Preferred (Experience 2) – Understanding of finance-related workflows (billing, reconciliation, etc.).


Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 8 yrs
Upto ₹30L / yr (Varies)
Node.js
Python
Dialogflow
Rasa
yellow.ai

Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.

You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.


Key Responsibilities

  • Design, develop, test, debug, and maintain chatbot and virtual agent applications
  • Collaborate with business stakeholders to define and translate requirements into technical solutions
  • Analyze large volumes of conversational data to improve chatbot accuracy and performance
  • Develop automation workflows for data handling and refinement
  • Train and optimize chatbots using historical chat logs and user-generated content
  • Ensure solutions align with enterprise architecture and best practices
  • Document solutions, workflows, and technical designs clearly

Required Skills

  • Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
  • Experience with one or more AI/NLP platforms such as:
      • Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
      • Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
  • Strong programming knowledge in Python, JavaScript, or Node.js
  • Experience training chatbots using historical conversations or large-scale text datasets
  • Practical knowledge of:
      • Formal syntax and semantics
      • Corpus analysis
      • Dialogue management
  • Strong written communication skills
  • Strong problem-solving ability and willingness to learn emerging technologies

Nice-to-Have Skills

  • Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
  • Experience building voice apps for Amazon Alexa or Google Home
  • Experience with Test-Driven Development (TDD) and Agile methodologies
  • Ability to design and implement end-to-end pipelines for AI-based conversational applications
  • Experience in text mining, hypothesis generation, and historical data analysis
  • Strong knowledge of regular expressions for data cleaning and preprocessing
  • Understanding of API integrations, SSO, and token-based authentication
  • Experience writing unit test cases as per project standards
  • Knowledge of HTTP, REST APIs, sockets, and web services
  • Ability to perform keyword and topic extraction from chat logs
  • Experience training and tuning topic modeling algorithms such as LDA and NMF
  • Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
  • Experience with NLP frameworks such as NLTK and spaCy
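
The regex-based cleaning mentioned above often boils down to a few normalization passes over raw chat logs before training or topic extraction. An illustrative sketch (the masking patterns are chosen for the example, not prescribed by the role):

```python
import re

def clean_chat_line(text: str) -> str:
    """Normalize a raw chat log line before training or topic extraction."""
    text = re.sub(r"https?://\S+", "<url>", text)   # mask links
    text = re.sub(r"\b\d{10}\b", "<phone>", text)   # mask 10-digit numbers
    text = re.sub(r"\s+", " ", text).strip()        # collapse whitespace
    return text.lower()

cleaned = clean_chat_line("Call me on 9876543210   or see https://example.com/help ")
```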


Staffnixcom
Bengaluru (Bangalore)
4 - 8 yrs
₹30L - ₹40L / yr
Python
Backend testing

Strong Senior Camunda Developer / Workflow Orchestration Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of hands-on experience in backend development and workflow systems, demonstrable through production-grade work on business process automation or backend service development.

Mandatory (Experience 2) – Must have strong hands-on experience with Camunda and BPMN 2.0, including designing, building, and deploying end-to-end business process workflows in production.

Mandatory (Experience 3) – Must have hands-on experience with Zeebe workers and the Camunda 8 stack, built and maintained as part of real orchestration systems.

Mandatory (Experience 4) – Must have strong production-level coding skills in Python, used for building and maintaining Zeebe workers and microservices.

Mandatory (Experience 5) – Must have experience designing and working within microservices architecture and distributed systems, with clear understanding of service decomposition, inter-service communication, and distributed system failure modes.

Mandatory (Experience 6) – Must have hands-on experience building and consuming REST APIs and working with event-driven systems (message brokers, pub/sub, event streams).

Mandatory (Skills) – Must have strong debugging and problem-solving skills in production workflow environments, with specific examples of resolving complex issues such as stuck processes, race conditions, or data inconsistency bugs.

Preferred

Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹10L / yr
Databricks
PySpark
Apache Spark
ETL
CI/CD

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV are mandatory


Job Description:

* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).

* Develop scalable, high-performance data solutions using Spark distributed processing.

* Lead engineering initiatives focused on automation, performance tuning, and platform modernization.

* Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.

* Collaborate with cross-functional teams to translate business needs into technical solutions.

* Ensure data quality, governance, and security across all processes.

* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.

* Participate in code reviews and develop reusable engineering frameworks.

* Knowledge of utilizing AI tools to improve productivity and support daily engineering activities.

* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation.

Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.

* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).

* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).

* Strong proficiency in Python for data processing, automation, and framework development.

* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.

* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.

* Strong experience with CI/CD and Git-based development workflows.

* Proficiency in data modeling and ETL/ELT pipeline design.

* Experience with automation frameworks and scheduling tools.

* Solid understanding of distributed systems and big data concepts.

Bootlabs Technologies Private Limited

Posted by Aakanksha Soni
Mumbai, Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Amazon Web Services (AWS)
Python
ECS
AWS IAM
Amazon S3

Job Title: AWS DevOps Engineer (MLOps)

We are looking for a highly skilled AWS + MLOps Engineer to design, build, and maintain scalable machine learning infrastructure and pipelines on AWS. The ideal candidate will have strong expertise in DevOps practices, cloud architecture, and MLOps frameworks, along with solid Python programming skills.

Job Description:

We are looking for an experienced AWS DevOps Engineer to join our team. You will be responsible for building and optimising CI/CD pipelines, managing AWS infrastructure, and automating tasks using AWS services.

Key Responsibilities:

  • CI/CD Pipelines: Develop CI/CD pipelines with AWS CodePipeline, build ECR images, and update services on ECS.
  • Automation: Create Python Lambda functions for automation and AWS Batch jobs for GPU processing.
  • Infrastructure Management: Manage AWS infrastructure using Terraform (IAM roles, RDS, Lambda, etc.) and deploy microservices on EKS with ALB Ingress.
  • Data Processing: Work with AWS Step Functions and EMR for data workflows; troubleshoot Spark jobs.
  • Microservices: Deploy ATLAS on ECS, and create AWS Glue crawlers for data integration.
  • Strong Experience with MLOps is an added advantage.

Required Skills:

  • Experience with AWS services (ECS, ECR, Lambda, Step Functions, EMR, Glue, etc.).
  • Proficient in CI/CD, Terraform, and Python scripting.
  • Experience deploying EKS clusters and using AWS ALB for routing.
  • Strong troubleshooting skills with EMR and Spark.
  • Understanding of or experience with AWS EMR, SageMaker, and Databricks would be an added advantage.

Preferred:

  • AWS Certification (DevOps, Solutions Architect, etc.).
  • Experience with microservices and GPU-intensive processes.
Read more
Global MNC serving 40+ Fortune 500 Companies


Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹26L / yr
Generative AI
Retrieval Augmented Generation (RAG)
Machine Learning (ML)
LangGraph
LangChain
+11 more

Want to work on exciting GenAI projects for Fortune 500 companies across multiple sectors? Then read on.


About Company:

CSG is a multinational company with a presence in 20 countries and 1600+ engineers. The company works with more than 40 Fortune 500 customers, including Sony, Samsung, ABB, Thyssenkrupp, Toyota, Mitsubishi, and many more.


Job Description:

We are looking for a talented Generative AI Developer to join our dynamic AI/ML team. This position offers an exciting opportunity to leverage cutting-edge Generative AI (GenAI) technologies to solve real-world problems. You will be responsible for developing and optimizing GenAI-based applications, implementing advanced techniques such as Retrieval-Augmented Generation (RAG), Retrieval-Interleaved Generation (RIG), agentic frameworks, and vector databases. This is a collaborative role where you will work directly with customers and cross-functional teams to design, implement, and optimize AI-driven solutions. Exposure to cloud-native AI platforms such as Amazon Bedrock and Microsoft Azure OpenAI is highly desirable.


Key Responsibilities

Generative AI Application Development:

Design, develop, and deploy GenAI-driven applications to address complex industrial challenges.

Implement Retrieval-Augmented Generation (RAG) and Agentic frameworks


Data Management & Optimization:

Design and optimize document chunking strategies tailored to specific datasets and use cases.

Build, manage, and optimize data embeddings for high-performance similarity searches across vector databases.


Collaboration & Integration:

Work closely with data engineers and scientists to integrate AI solutions into existing pipelines.

Collaborate with cross-functional teams to ensure seamless AI implementation.


Cloud & AI Platform Utilization:

Explore and implement best practices for utilizing cloud-native AI platforms, such as Amazon Bedrock and Azure OpenAI, to enhance solution delivery.

Continuous Learning & Innovation:

Stay updated with the latest trends and emerging technologies in the GenAI and AI/ML fields, ensuring our solutions remain cutting-edge.


Requirements:

The ideal candidate will have strong experience in Generative AI technologies, particularly in the areas of RAG, document chunking, and vector database management. They will be able to quickly adapt to evolving AI frameworks and leverage cloud-native platforms to create efficient, scalable solutions. You will be working in a fast-paced and collaborative environment, where innovation and the ability to learn and grow are key to success.

- 3 to 5 years of overall experience in software development, with 3 years focused on AI/ML.

- Minimum 2 years of experience specifically working with Generative AI (GenAI) technologies.

- Working knowledge of Python, PySpark, and SQL is necessary.

- Proven ability to work in a collaborative, fast-paced, and innovative environment.


Technical Skills:

- Generative AI Frameworks & Technologies:

- Expertise in Generative AI frameworks, including prompt engineering, fine-tuning, and few-shot learning.

- Familiarity with frameworks such as T5 (Text-to-Text Transfer Transformer), LangChain, LangGraph, and open-source stacks such as Ollama, Mistral, and DeepSeek.

- Strong knowledge of Retrieval-Augmented Generation (RAG) for combining LLMs with external data retrieval systems.


Data Management:

- Experience in designing chunking strategies for different datasets.

- Expertise in data embedding techniques and experience with vector databases such as Pinecone and ChromaDB.
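A chunking strategy of the kind described above can be as simple as fixed-size character windows with overlap, the pre-processing step that typically precedes embedding documents into a vector database. The sizes below are illustrative; real values depend on the embedding model and dataset.

```python
# Minimal sketch of fixed-size chunking with overlap, used before embedding
# documents into a vector store. Sizes are illustrative only.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # each window starts this far after the last
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

The overlap ensures that a sentence falling on a chunk boundary still appears intact in at least one chunk, which improves retrieval quality in RAG pipelines.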

- Programming & AI/ML Libraries:

- Strong programming skills in Python.

- Experience with AI/ML libraries such as TensorFlow, PyTorch, and Hugging Face Transformers.


Cloud Platforms & Integration:

- Familiarity with cloud services for AI/ML workloads (AWS, Azure).

- Experience with API integration for AI services and building scalable applications.

- Certifications (Optional but Desirable):

- Certification in AI/ML (e.g., TensorFlow, AWS Certified Machine Learning Specialty).

- Certification or coursework in Generative AI or related technologies.

Read more
Product based company


Agency job
Bengaluru (Bangalore)
4 - 9 yrs
₹12L - ₹13L / yr
.NET
ASP.NET
ASP.NET MVC
Microservices
FastAPI
+6 more

Technical Lead – Full Stack 

Work Location (WFO):

Nagar, Bengaluru, Karnataka

Interview Process:

L1 Interview – Face-to-Face at Office

Experience Required:

4-6 years (minimum 1 year in a Technical Leadership role)

Budget:

Up to 13 LPA

Role Overview:

The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.

Key Responsibilities:

  • Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
  • Lead full-stack development using .NET and modern open-source technologies
  • Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
  • Design and implement AI Agents, SSO, and unified UI experiences
  • Manage sprint planning, backlogs, and collaborate with Product Owners
  • Implement CI/CD pipelines using Jenkins, GitHub Actions
  • Drive containerization and orchestration using Docker & Kubernetes
  • Ensure secure deployments and cloud infrastructure management
  • Establish engineering best practices, code reviews, and architecture governance
  • Mentor teams on Clean Architecture, SOLID principles, and DevOps practices

Required Skills:

  • ReactJS, FastAPI, Python, REST/GraphQL
  • ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
  • Strong experience in Microservices Architecture
  • DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
  • Cloud Platforms: AWS / Azure / GCP
  • AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
  • Security: RBAC, API security, secrets management

Qualifications:

  • BE / BTech in Computer Science


Read more
Travel Tech - IPO company


Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
12 - 16 yrs
₹80L - ₹130L / yr
Distributed Systems
Search systems
Pricing & Fare Engine
Booking & Ticketing
Airline Integrations
+47 more

Director of Engineering — Flights Platform

AI-First Travel Commerce · High-Scale Distributed Systems · Marketplace Infrastructure


🌏 The Problem Space

A flight search looks trivially simple. It is anything but.


Every query you fire triggers a choreography of distributed systems operating in real-time — integrating with a dozen airline GDS/NDC providers, computing dynamic fares across inventory buckets and fare rules, ranking thousands of itineraries by relevance and business intent, and returning a ranked, priced, bookable result set — all in under 100ms.


→ Millions of search queries per minute

→ <100ms end-to-end SLA with external API dependencies

→ High-value transactions — a bug here means a missed booking, not a failed render

→ Pricing errors erode trust faster than any other failure mode


We are rebuilding the Flights platform as a real-time commerce engine for Bharat — AI-native from day zero, built to power both B2C consumer journeys and high-stakes B2B enterprise corridors.


This is a once-in-a-decade opportunity to build national-scale flight infrastructure from first principles.

🧠 What You Will Own

You will own the full Flights platform — systems, architecture, and the teams that build them.


Core System Domains:

• Search Systems — high-throughput, low-latency query pipelines returning ranked, bookable options

• Pricing & Fare Engine — dynamic pricing logic, fare rules, promotional overlays, and real-time validation

• Booking & Ticketing — transaction-critical flows requiring strict consistency, idempotency, and zero data loss

• Airline Integrations — managing unreliable external GDS/NDC APIs with retries, circuit-breakers, and reconciliation

• Post-Booking Flows — cancellations, modifications, refunds — correctness at the margin is non-negotiable


Platform Scope:

• High-scale APIs serving consumer apps, B2B enterprise clients, and third-party partners

• Event-driven state machines managing booking workflows across async boundaries

• Observability and reliability infrastructure across all mission-critical flows


Team Scope:

• Lead 15–30+ engineers across multiple product and platform teams

• Manage Engineering Managers and Principal/Staff engineers

• Own hiring, org design, and technical direction


⚙️ Core Engineering Challenges

This role is fundamentally about making the right trade-offs under uncertainty — at scale.


Latency vs. Accuracy — when do you serve a cached fare vs. call a live airline API?

Availability vs. Consistency — graceful degradation at booking time vs. strict price validation

Cost vs. Performance — when is an external API call worth it vs. a cache hit?

Scalability vs. Simplicity — the best system is the one your team can reason about under incident


🤖 AI-First Engineering

AI is not an afterthought. It is load-bearing architecture.

• LLM-powered pricing intelligence — dynamic fare prediction and demand signals

• RAG pipelines for fare rules, refund policy, and support automation

• Agentic booking resolution workflows — autonomous exception handling at scale

• MCP-based orchestration layers for multi-provider integration


⚖️ Key Responsibilities

Architecture & Distributed Systems

• Design and evolve sub-100ms distributed query systems serving millions of concurrent searches

• Build fault-tolerant booking pipelines with strong consistency and durability guarantees

• Drive Kafka-based event architectures for booking state management


Reliability & Observability

• Own 99.99%+ availability for booking and pricing systems

• Build deep observability — metrics, distributed tracing, structured logging, SLOs/SLAs

• Lead post-incident reviews and drive systemic reliability improvements


Business Partnership

• Partner with Product, Revenue, and Partnerships to translate commercial goals into architecture

• Influence platform roadmap, supplier strategy, and long-term technical investment


🛠️ Technology Stack

Backend: Java · Kotlin · Go · Python

Architecture: Microservices · Event-Driven (Kafka) · gRPC

Data: Redis · Aerospike · DynamoDB · Elasticsearch

Cloud: AWS (EKS, EC2, S3)

Observability: Prometheus · Grafana · OpenTelemetry


👤 Who You Are

• 12–16 years in backend/distributed systems; 5+ years in an engineering leadership role, having led teams of 15–50 engineers

• Built and scaled large B2C + B2B platforms — Travel Tech, FinTech, or high-scale Consumer

• Deep expertise in real-time systems, marketplace dynamics, and external API integration

• Tier-I institute background strongly preferred (IIT / IIIT / NIT / IISC / BITS / VIT / SRM — CSE/ISE)


🚀 Why This Matters

Build national-scale infrastructure for 1.4 billion people

Sit at the intersection of AI · distributed systems · marketplace economics

Define the future of travel commerce in India — from architecture to product



Read more
Thingularity


Agency job
via Thomasmount Consulting by Shirin Shahana
Bengaluru (Bangalore)
4 - 8 yrs
₹18L - ₹20L / yr
Python
SQL
ETL

Job Summary

We are seeking a skilled Data Engineer with 4+ years of experience in building scalable data pipelines and working with modern data platforms. The ideal candidate should have strong expertise in Python, SQL, and cloud-based data solutions, with hands-on experience in ETL/ELT processes and data warehousing.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Python
  • Develop and optimize ETL/ELT workflows for data ingestion and transformation
  • Work with structured and unstructured data from multiple sources
  • Build and manage data warehouses/data lakes
  • Perform data validation, cleansing, and quality checks
  • Optimize SQL queries and improve data processing performance
  • Collaborate with data analysts, data scientists, and business teams
  • Implement data governance, security, and best practices
  • Monitor pipelines and troubleshoot production issues

Required Skills

  • Strong programming experience in Python (Pandas, NumPy, PySpark preferred)
  • Excellent SQL skills (joins, window functions, performance tuning)
  • Experience with ETL tools like Informatica, Talend, or DBT
  • Hands-on experience with cloud platforms (Azure / AWS / GCP)
  • Experience in data warehousing solutions like Snowflake, Redshift, BigQuery
  • Knowledge of workflow orchestration tools like Apache Airflow
  • Familiarity with version control tools like Git
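As a quick illustration of the window functions mentioned in the SQL requirement above, a per-customer running total can be computed with `SUM() OVER`. This sketch runs against an in-memory SQLite database; the `orders` table and its columns are made up for the example.

```python
# Window-function example: running total per customer, computed with
# SUM() OVER (PARTITION BY ...) against an in-memory SQLite database.
import sqlite3


def running_totals(rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return con.execute(
        """
        SELECT customer,
               amount,
               SUM(amount) OVER (
                   PARTITION BY customer
                   ORDER BY rowid
                   ROWS UNBOUNDED PRECEDING
               ) AS running_total
        FROM orders
        """
    ).fetchall()
```

The same pattern (with `ROW_NUMBER`, `LAG`, `RANK`, etc.) carries over directly to warehouse engines such as Snowflake, Redshift, and BigQuery.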

Preferred Skills

  • Experience with Big Data technologies (Spark, Hadoop)
  • Knowledge of streaming tools like Kafka
  • Exposure to CI/CD pipelines and DevOps practices
  • Experience in data modeling (Star/Snowflake schema)
  • Understanding of APIs and data integration


Read more
Bengaluru (Bangalore)
4 - 10 yrs
₹1L - ₹10L / yr
.NET
SSO
ASP.NET
ASP.NET MVC
MySQL
+16 more

Dear Candidates,


We have an urgent requirement for a Technical Lead – Full Stack role based in Bangalore. Please find the details below:


Work Location (WFO):

Nagar, Bengaluru, Karnataka


Interview Process:

L1 Interview – Face-to-Face at Office


Experience Required:

4-6 years (minimum 1 year in a Technical Leadership role)


Role Overview:

The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.

Key Responsibilities:

  • Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
  • Lead full-stack development using .NET and modern open-source technologies
  • Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
  • Design and implement AI Agents, SSO, and unified UI experiences
  • Manage sprint planning, backlogs, and collaborate with Product Owners
  • Implement CI/CD pipelines using Jenkins, GitHub Actions
  • Drive containerization and orchestration using Docker & Kubernetes
  • Ensure secure deployments and cloud infrastructure management
  • Establish engineering best practices, code reviews, and architecture governance
  • Mentor teams on Clean Architecture, SOLID principles, and DevOps practices

Required Skills:

  • ReactJS, FastAPI, Python, REST/GraphQL
  • ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
  • Strong experience in Microservices Architecture
  • DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
  • Cloud Platforms: AWS / Azure / GCP
  • AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
  • Security: RBAC, API security, secrets management

Qualifications:

  • BE / BTech in Computer Science
Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹10L / yr
Python
Django
PostgreSQL
RESTful APIs

Job Summary

We are looking for a skilled Python Developer with 3 years of experience to join our team in Bangalore. The ideal candidate should have strong expertise in Python, Django, and PostgreSQL, along with a good understanding of backend development. Knowledge of Java will be an added advantage.


Key Responsibilities

Develop, test, and maintain scalable backend applications using Python and Django

Design and manage databases using PostgreSQL

Write clean, efficient, and reusable code

Collaborate with cross-functional teams to define, design, and ship new features

Debug and resolve technical issues and optimize application performance

Participate in code reviews and ensure best coding practices


Required Skills

Strong experience in Python

Hands-on experience with Django framework

Good knowledge of PostgreSQL database

Understanding of REST APIs and web services

Familiarity with version control systems (e.g., Git)


Good to Have

Basic knowledge of Java

Experience with cloud platforms or deployment processes

Understanding of front-end technologies is a plus


Qualifications

Bachelor’s degree in Computer Science, Engineering, or related field


Additional Requirements

Immediate joiners or candidates with short notice period preferred

Strong problem-solving and analytical skills

Good communication and teamwork abilities

Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
3 - 4 yrs
₹6L - ₹10L / yr
Python
Django
PostgreSQL

Job Summary

We are looking for a skilled Python Developer with 3 years of experience to join our team in Bangalore. The ideal candidate should have strong expertise in Python, Django, and PostgreSQL, along with a good understanding of backend development. Knowledge of Java will be an added advantage.


Key Responsibilities

Develop, test, and maintain scalable backend applications using Python and Django

Design and manage databases using PostgreSQL

Write clean, efficient, and reusable code

Collaborate with cross-functional teams to define, design, and ship new features

Debug and resolve technical issues and optimize application performance

Participate in code reviews and ensure best coding practices


Required Skills

Strong experience in Python

Hands-on experience with Django framework

Good knowledge of PostgreSQL database

Understanding of REST APIs and web services

Familiarity with version control systems (e.g., Git)


Good to Have

Basic knowledge of Java

Experience with cloud platforms or deployment processes

Understanding of front-end technologies is a plus


Qualifications

Bachelor’s degree in Computer Science, Engineering, or related field


Additional Requirements

Immediate joiners or candidates with short notice period preferred

Strong problem-solving and analytical skills

Good communication and teamwork abilities

Read more
INI8 Labs
Shwetha K
Posted by Shwetha K
HSR layout, Bengaluru (Bangalore)
6 - 10 yrs
₹18L - ₹30L / yr
Python
Django
MongoDB
Amazon Web Services (AWS)
Go Programming (Golang)
+1 more

Full-Stack Developer (Backend-Focused)

We are seeking a seasoned Full-Stack Developer with strong expertise in backend engineering using Python and Golang. In this role, you will take ownership of backend systems while contributing to the development of modern, responsive frontend interfaces. The focus will be on building secure, scalable, and high-performance applications, with emphasis on API development, database engineering, and cloud deployment.

Key Responsibilities

  • Develop and enhance backend services using Python frameworks such as Django or FastAPI
  • Design, build, and maintain RESTful APIs and microservices
  • Work extensively with relational and NoSQL databases, including PostgreSQL, MySQL, and MongoDB
  • Collaborate with frontend developers to integrate user-facing elements with backend logic
  • Implement efficient, secure, and scalable application architectures
  • Troubleshoot and resolve software defects across different environments
  • Optimize performance and reliability of backend services
  • Write clean, maintainable, and well-tested code following best practices
  • Contribute to DevOps activities, including CI/CD pipelines and containerization

Required Skills & Qualifications

  • 6+ years of experience in full-stack or backend-focused development
  • Strong proficiency in Python with hands-on experience in frameworks like Django or FastAPI
  • Solid understanding of SQL and NoSQL databases, including data modeling and query optimization
  • Familiarity with modern frontend technologies such as React, Vue, or Angular
  • Experience with Docker, Kubernetes, and at least one cloud platform (AWS, Azure, or GCP) is preferred
  • Strong understanding of system design, distributed systems, and microservices architecture
  • Experience with Git and CI/CD automation pipelines
  • Excellent problem-solving skills and ability to work collaboratively


Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹30L / yr
Databricks
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Supervised learning

What You’ll Be Doing:

  • Design and develop advanced AI/ML models to solve complex business problems
  • Work closely with cross-functional teams including data engineers and domain experts
  • Perform exploratory data analysis, data cleaning, and model development
  • Translate business challenges into data-driven solutions and actionable insights
  • Drive innovation in advanced analytics and AI/ML capabilities
  • Communicate model insights effectively to both technical and non-technical stakeholders

What We’re Looking For:

  • 5+ years of experience in AI/ML model development
  • Strong foundation in mathematics, probability, and statistics
  • Proficiency in Python and exposure to Azure Machine Learning / Databricks
  • Experience with supervised & unsupervised learning techniques
  • Domain exposure to Energy / Oil & Gas value chain (preferred)
  • Strong problem-solving, stakeholder management, and communication skills


Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Mumbai, Bengaluru (Bangalore)
5 - 10 yrs
Upto ₹45L / yr (Varies)
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)
+1 more

This role is responsible for architecting and implementing the Agentic capabilities of the PHI ecosystem. The engineer will lead the development of multi-agent systems, enabling seamless interoperability between AI agents, internal tools, and external services.

The position requires a strong focus on AI safety, secure agent orchestration, and tool-connected AI systems capable of executing complex workflows within the health insurance domain.


1. Agent Orchestration

  • Build and manage autonomous AI agents using Agent Development Kit (ADK) and Vertex AI Agent Engine.
  • Design and implement multi-agent workflows capable of handling complex tasks.

2. Interoperability

  • Implement the Model Context Protocol (MCP) to enable connectivity between:
  • AI agents
  • Internal PHI tools
  • External services and APIs.

3. Multimodal Development

  • Build real-time, bidirectional audio applications using the Gemini Live API.
  • Integrate image generation models and support multimodal AI capabilities.

4. Safety Engineering

  • Implement AI safety layers to protect sensitive healthcare data.
  • Use Model Armor and Cloud DLP API to:
  • Sanitize prompts
  • Prevent exposure of PII/PHI data
  • Enforce secure AI interactions.

5. Agent-to-Agent (A2A) Communication

  • Configure remote agent connectivity using the A2A SDK.
  • Enable cross-agent collaboration and workflow orchestration.

Must-Have Skills

  • Advanced proficiency with Agent Development Kit (ADK).
  • Strong experience with Vertex AI Agent Engine.
  • Hands-on experience with Model Context Protocol (MCP).
  • Experience implementing Agent-to-Agent (A2A) workflows using the A2A SDK.
  • Expertise in Google Gen AI SDK for Python.
  • Experience building multimodal AI applications.
  • Proven experience implementing AI safety layers, including:
  • Model Armor
  • Cloud DLP API

Good-to-Have Skills (Foundation)

Data & Analytics

  • BigQuery optimization techniques, including:
  • Partitioning
  • Clustering
  • Denormalization for performance and cost optimization.

Streaming & Real-Time Pipelines

  • Experience building real-time data pipelines using:
  • Google Pub/Sub
  • BigQuery streaming pipelines
Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai
3 - 5 yrs
Upto ₹33L / yr (Varies)
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)
+1 more

We are seeking a Senior Machine Learning Engineer to support the development and deployment of advanced AI capabilities within the PHI ecosystem.

This role focuses on the execution of Generative AI tasks, including model integration and agent deployment. The candidate will be responsible for building RAG-based workflows and ensuring AI interactions remain grounded and accurate using Google Cloud AI tools.


Key Responsibilities

1. GenAI Integration

  • Develop and maintain integrations with Gemini 1.5 Pro and Flash models
  • Use the Google Gen AI SDK for Python to build and manage model integrations

2. Agent Deployment

  • Assist in deploying AI agents to Vertex AI Agent Engine
  • Work with the Agent Development Kit (ADK) for agent lifecycle management

3. RAG & Embeddings

  • Generate and manage text and multimodal embeddings
  • Support semantic search and Retrieval-Augmented Generation (RAG) pipelines

4. Testing & Quality

  • Run evaluation scripts to verify model output quality
  • Ensure models follow grounding and response accuracy guidelines

Must-Have Skills

  • Strong Python programming
  • Experience working with REST APIs
  • Hands-on experience with Vertex AI Studio
  • Experience working with Gemini APIs
  • Understanding of Agentic AI concepts
  • Familiarity with ADK CLI
  • Experience or understanding of RAG architecture
  • Knowledge of embedding generation

Good-to-Have Skills (Foundation):

BigQuery

  • Basic SQL knowledge
  • Experience with data loading
  • Ability to debug and troubleshoot queries

Data Streaming

  • Familiarity with Google Pub/Sub
  • Understanding of synthetic data generation

Visualization

  • Basic reporting and dashboards using Looker Studio
Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 10 yrs
Upto ₹40L / yr (Varies)
Python
RESTful APIs
Microservices

As a Backend Engineer, you will be a core member of the Platform Implementation Team, responsible for building the robust, scalable, and secure backend infrastructure for a multi-cloud enterprise Data & AI platform.


You will design and develop high-performance microservices, RESTful APIs, and event-driven architectures that serve as the backbone for enterprise-wide applications.

Working closely with Platform Engineers, Data Modelers, and UI teams, you will ensure seamless data flow between core business systems (CRM, ERP) and the platform, enabling the rollout of critical business services across multiple global Local Business Units (LBUs).



Backend Development

  • Design and develop scalable backend services and microservices
  • Build and maintain RESTful APIs for enterprise applications
  • Define and maintain API contracts using OpenAPI/Swagger

Platform & System Integration

  • Enable seamless integration between enterprise systems (CRM, ERP) and the platform
  • Support data flow across multiple global business units

Event-Driven Architecture

  • Implement asynchronous processing and event-driven systems
  • Work with message brokers and streaming platforms
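The asynchronous, event-driven style described above can be illustrated with a toy in-process broker. This is a sketch only; real deployments would use GCP Pub/Sub, Kafka, or RabbitMQ rather than anything hand-rolled.

```python
# Toy in-process publish/subscribe broker illustrating event-driven design.
# Real systems use a managed broker (GCP Pub/Sub, Kafka, RabbitMQ).
from collections import defaultdict


class Broker:
    def __init__(self):
        # topic name -> list of handler callables
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every handler subscribed to this topic.
        for handler in self._subscribers[topic]:
            handler(message)
```

The key property carried over from real brokers is decoupling: the publisher knows only the topic name, never the downstream consumers.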

Cross-Functional Collaboration

  • Collaborate with platform engineers, data modelers, and frontend teams
  • Contribute to architecture discussions and backend design decisions

Must-Have Skills

Experience

  • 5–7 years of hands-on experience in backend software engineering
  • Experience building enterprise-grade backend systems

Core Programming

Strong proficiency in at least one backend language:

  • Python
  • Node.js
  • Java

Strong understanding of:

  • Object-oriented programming (OOP)
  • Functional programming principles

API & Microservices

  • Extensive experience building RESTful APIs
  • Experience designing microservices architectures
  • Ability to define API contracts using OpenAPI / Swagger

Cloud Infrastructure

Hands-on experience with cloud platforms:

  • Google Cloud Platform (GCP)
  • Microsoft Azure

Examples of services:

  • Cloud Functions
  • Cloud Run
  • Azure App Services

Database Management

Experience with both Relational and NoSQL databases

Relational:

  • PostgreSQL
  • Cloud SQL

NoSQL:

  • Schema design
  • Complex querying
  • Performance optimization

Event-Driven Architecture

Experience with asynchronous processing and message brokers:

  • GCP Pub/Sub
  • Apache Kafka
  • RabbitMQ

Security & Authentication

Strong understanding of:

  • OAuth 2.0
  • JWT authentication
  • Role-Based Access Control (RBAC)
  • Data encryption
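The JWT requirement above boils down to HMAC-signed, base64url-encoded claims. The stdlib-only sketch below shows the HS256-style signing and verification idea; production code should use a vetted library such as PyJWT rather than this hand-rolled version, and would also check expiry claims.

```python
# Stdlib-only sketch of HS256-style token signing/verification (the idea
# behind JWT auth). Use a vetted library such as PyJWT in production.
import base64
import hashlib
import hmac
import json


def _b64(data: bytes) -> str:
    # base64url without padding, as JWT uses
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign(claims: dict, secret: bytes) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing signatures.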

Software Engineering Best Practices

  • Writing clean, maintainable code
  • Version control using Git
  • Writing unit and integration tests
  • Familiarity with CI/CD pipelines
  • Containerization using Docker

Good-to-Have Skills

AI & LLM Integration

  • Experience integrating Generative AI models
  • Exposure to:
  • OpenAI
  • Vertex AI
  • LLM gateways
  • Retrieval-Augmented Generation (RAG)

Frontend Exposure

Basic familiarity with frontend frameworks such as:

  • React
  • Next.js
  • Angular

Understanding how backend APIs integrate with UI applications

Advanced Data Stores

Experience with:

  • Vector databases (Pinecone, Milvus)
  • Knowledge graphs

Domain Knowledge

  • Experience in Life Insurance or BFSI sector
  • Understanding of enterprise data governance and compliance standards
Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 10 yrs
Upto ₹38L / yr (Varies)
Python
Generative AI
Microservices
RESTful APIs
MongoDB
+3 more

We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.


The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.


Key Responsibilities:


Backend Development

  • Design and maintain high-performance backend services using Python and FastAPI
  • Implement advanced FastAPI features such as dependency injection, middleware, and async programming
  • Write comprehensive unit tests using pytest
  • Design and maintain Pydantic schemas

High-Concurrency Systems

  • Implement asynchronous code for high-volume request processing
  • Apply concurrency patterns and atomic operations to ensure efficient system performance
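The high-volume async processing described above typically looks like bounded fan-out: many requests in flight at once, with a semaphore capping concurrency. In this sketch, `handle_request` is a stand-in for a real I/O-bound call (database query, LLM API, etc.).

```python
# Sketch of bounded async fan-out: process many requests concurrently,
# with a semaphore capping how many are in flight at once.
import asyncio


async def handle_request(i: int, sem: asyncio.Semaphore) -> int:
    async with sem:              # at most max_concurrency requests in flight
        await asyncio.sleep(0)   # placeholder for real I/O
        return i * 2


async def process_batch(n: int, max_concurrency: int = 10) -> list[int]:
    sem = asyncio.Semaphore(max_concurrency)
    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(handle_request(i, sem) for i in range(n)))
```

The semaphore is what keeps a burst of traffic from exhausting downstream connection pools or rate limits.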

Data & Storage

  • Optimize MongoDB operations
  • Implement Redis caching strategies (TTL, performance tuning, caching patterns)
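The caching pattern referred to above is usually cache-aside with a TTL. The sketch below is written against a minimal client interface so it can be backed by Redis (`GET`/`SETEX`) or, as here for illustration, a dict-based stub; the stub is not a real Redis client.

```python
# Cache-aside with TTL, written against a minimal get/setex interface so it
# can be backed by Redis or by this in-memory stand-in.
import time


class InMemoryCache:
    """Dict-based stand-in for a Redis client; stores (expiry, value) pairs."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[0] < time.monotonic():
            return None  # missing or expired
        return entry[1]

    def setex(self, key, ttl, value):
        self._data[key] = (time.monotonic() + ttl, value)


def get_with_cache(cache, key, loader, ttl=60):
    value = cache.get(key)
    if value is None:           # miss: load from the source of truth
        value = loader(key)
        cache.setex(key, ttl, value)
    return value
```

On a hit the loader is never called; the TTL bounds how stale a cached value can get before the next load.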

Distributed Systems

  • Implement rate limiting, retry logic, failover mechanisms, and region routing
  • Build microservices and event-driven architectures
  • Work with EventHub, Blob Storage, and Databricks
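Retry logic of the kind listed above is commonly exponential backoff around a flaky downstream call. In this sketch the sleep function is injectable so the backoff schedule can be tested without waiting; real code would catch specific exception types, not bare `Exception`.

```python
# Retry with exponential backoff for flaky downstream calls. The sleep
# function is injectable so the schedule is testable without real delays.
import time


def with_retries(fn, attempts=3, base_delay=0.1, sleep=time.sleep):
    last_exc = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:   # real code: catch specific error types
            last_exc = exc
            if attempt < attempts - 1:
                sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
    raise last_exc
```

Production versions usually add jitter to the delay and a cap on the maximum backoff, so synchronized clients don't retry in lockstep.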

AI/ML Integration

  • Integrate OpenAI API, Gemini API, and Claude API
  • Manage LLM integrations using LiteLLM
  • Optimize AI service usage within the Azure ecosystem

Security

  • Implement JWT authentication
  • Manage API keys and encryption protocols
  • Implement PII masking and data security mechanisms
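For orientation, the mechanics behind HS256 JWT signing and verification can be shown with the standard library alone; a production system would normally use a vetted library such as PyJWT, and the secret and claims below are made up:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes):
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # Constant-time comparison guards against timing attacks.
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        return None
    claims = json.loads(_b64url_decode(payload))
    if claims.get("exp", float("inf")) < time.time():
        return None  # expired
    return claims
```

The token is just two base64url-encoded JSON segments plus an HMAC over them, which is why key management matters as much as the algorithm itself.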

Collaboration

  • Work with cross-functional teams on architecture and system design
  • Contribute to engineering best practices and technical improvements
  • Mentor junior developers where required

Must-Have Skills & Requirements

Experience

  • 7+ years of hands-on Python backend development
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Experience building high-traffic, scalable systems

Core Technical Skills

Python

  • Advanced knowledge of asynchronous programming, concurrency, and atomic operations

FastAPI

  • Expert-level experience with dependency injection, middleware, and async code

Testing

  • Strong experience with pytest and Pydantic schemas

Databases

  • Hands-on experience with MongoDB and Redis
  • Strong understanding of caching patterns, TTL, and performance optimization

Distributed Systems

  • Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing

Microservices

  • Experience building microservices and event-driven systems
  • Exposure to EventHub, Blob Storage, and Databricks

Cloud

  • Strong experience working in Azure environments

AI Integration

  • Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM

Security

  • Implementation experience with JWT authentication, API keys, encryption, and PII masking

Soft Skills

  • Strong problem-solving and debugging skills
  • Excellent communication and collaboration
  • Ability to manage multiple priorities
  • Detail-oriented approach to code quality
  • Experience mentoring junior developers

Good-to-Have Skills

Containerization

  • Docker, Kubernetes (preferably within Azure)

DevOps

  • CI/CD pipelines and automated deployment

Monitoring & Observability

  • Experience with Grafana, distributed tracing, custom metrics

Industry Experience

  • Experience in Insurance, Financial Services, or regulated industries

Advanced AI/ML

  • Vector databases
  • Similarity search optimization
  • LangChain / LangSmith

Data Processing

  • Real-time data processing and event streaming

Database Expertise

  • PostgreSQL with vector extensions
  • Advanced Redis clustering

Multi-Cloud

  • Experience with AWS or GCP alongside Azure

Performance Optimization

  • Advanced caching strategies
  • Backend performance tuning
Read more
VegaStack
Careers VegaStack
Posted by Careers VegaStack
Bengaluru (Bangalore)
0 - 0 yrs
₹10000 - ₹15000 / mo
Next.js
Python
Django
Tailwind CSS
TypeScript
+4 more

Who We Are

We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.

What We Value

  • Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
  • High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.

Who we seek

We are looking for a Fullstack Developer Intern to join our Engineering team. You’ll build and improve internal products. This is a hands-on internship focused on learning by shipping. Your ultimate goal will be to build highly responsive and innovative AI-based software solutions that meet our business needs.

We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.

🌏 Job Location: Bengaluru (Work From Office)

What You Will Be Doing

  • Build user-facing features using Next.js and TypeScript.
  • Convert designs into responsive UI using Tailwind CSS and reusable components.
  • Work with APIs to integrate frontend with backend services.
  • Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
  • Fix bugs, write clean code, and improve performance.
  • Collaborate in a PR-based workflow on GitHub.
  • Write and maintain documentation for the features you ship.
  • Learn and apply best practices: component structure, state management, error handling, accessibility basics.

What We’re Looking For

  • Basic to intermediate experience with JavaScript and Next.js.
  • Familiarity with TypeScript basics.
  • Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
  • Understanding of how APIs work and how to consume them from the frontend.
  • Strong Git knowledge.
  • Strong learning mindset, ownership, and attention to detail.

Benefits

  • Work directly with founders and the leadership team.
  • Drive projects that create real business impact, not busywork.
  • Gain practical skills that traditional education misses.
  • Experience rapid growth as you tackle meaningful challenges.
  • Fuel your career journey with continuous learning and advancement paths.
  • Thrive in a workplace where collaboration powers innovation daily.


Read more
Deqode


1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Python
Large Language Models (LLM)
FastAPI
Windows Azure
CI/CD

👉 Job Title: Senior Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate Joiners

💡 Work Mode - 5 Days work from Office

(Candidates serving notice period are preferred)


Role Summary

We are seeking a Senior Backend Developer with strong expertise in Python and FastAPI to build scalable, high-performance backend systems integrated with LLM technologies on Azure. The role involves designing distributed systems, optimizing data pipelines, and ensuring secure, enterprise-grade applications.


Key Responsibilities

  • Develop backend services using Python & FastAPI (async, middleware)
  • Build high-concurrency, scalable systems and microservices
  • Work with Azure services and event-driven architectures
  • Optimize MongoDB & Redis for performance
  • Integrate LLM APIs (OpenAI, Gemini, Claude)
  • Implement security (JWT, encryption, API management)
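Integrating multiple LLM providers usually implies fallback handling; below is a minimal sketch of trying providers in order, with placeholder callables standing in for real OpenAI / Gemini / Claude SDK calls (every name here is an assumption):

```python
class AllProvidersFailed(Exception):
    """Raised when no provider in the chain returned a completion."""

def complete_with_fallback(prompt: str, providers: list) -> str:
    # providers: ordered list of (name, callable) pairs; first healthy one wins.
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # real code would narrow to provider errors
            errors.append((name, exc))
    raise AllProvidersFailed(errors)
```

Libraries such as LiteLLM package this kind of routing, but keeping the chain explicit makes latency and cost trade-offs per provider easy to reason about.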

Mandatory Skills (Top 3)

  1. Strong Python backend development with FastAPI
  2. Hands-on experience with Microsoft Azure cloud
  3. Experience in building scalable distributed/microservices systems


Good to Have

  • Docker, Kubernetes, CI/CD
  • LLM frameworks (LangChain, vector DBs)
  • Monitoring tools and real-time data processing


Read more
Deqode


1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Java
Python
Node.js
Windows Azure
Google Cloud Platform (GCP)
+3 more

👉 Job Title: Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate Joiners

💡 Work Mode - 5 Days work from Office

(Candidates serving notice period are preferred)


Role Summary

We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.


💫 Key Responsibilities

✅ Design and develop scalable microservices and RESTful APIs

✅ Build event-driven architectures for asynchronous processing

✅ Integrate backend systems with cloud platforms (GCP/Azure)

✅ Ensure secure, reliable, and optimized data handling

✅ Collaborate with cross-functional teams (UI, Data, Platform)

✅ Follow best practices in coding, testing, CI/CD, and containerization


💫 Mandatory Skills (Top 3)

✅ Strong backend programming experience (Python / Node.js / Java)

✅ Expertise in API development & Microservices architecture

✅ Hands-on experience with Cloud platforms (GCP or Azure)





Read more
TalentXO
tabbasum shaikh
Posted by tabbasum shaikh
Bengaluru (Bangalore)
4 - 7 yrs
₹34L - ₹40L / yr
Python
LLM
OpenAI
Gemini
RAG
+5 more

Role & Responsibilities

As a Senior GenAI Engineer you will own the AI layer of our product — building the features that make Zenskar intelligent. This is not a research role and not a prompt-engineering role. You will build production AI systems that enterprise clients depend on, which means reliability, observability, and rigorous evals matter as much as the AI capability itself. You own the full vertical — the model, the pipeline, and the UI.

  • Build and own CS Copilot — a real-time assistant for customer success teams, spanning STT pipelines, live transcription, and LLM-powered suggestions
  • Build LLM-powered document understanding features — extracting structured, reliable data from unstructured enterprise documents
  • Own AI feature UIs end-to-end — you build the interface, not just the model integration layer
  • Design and maintain an eval framework — define what 'working' means for each AI feature and catch regressions before users do
  • Drive model selection and integration decisions — choosing the right provider and approach for each use case, managing latency and cost
  • Own AI platform reliability — observability, fallback behaviour, and graceful degradation when models fail
  • Work closely with product, customer success, and the full-stack engineer — AI features only matter if they are usable and trusted by real users

THE IMPACT YOU'LL MAKE-

  • You will define what AI means at Zenskar — the features you ship will be the most visible and differentiated parts of the product
  • CS Copilot, if done well, changes how enterprise customer success teams operate every single day — this is a high-stakes, high-visibility surface
  • You will establish the engineering culture around AI reliability at Zenskar — evals, observability, and disciplined iteration
  • Your work will directly accelerate enterprise deals — AI features are increasingly a buying criterion for our clients
  • You will be the person who brings engineering rigour to a domain where most companies ship demos and call it a feature

Ideal Candidate

  • Strong Senior GenAI / AI Backend Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production
  • Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems
  • Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects
  • Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines
  • Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases
  • Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)
  • Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation
  • Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects
  • Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations
  • Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking
  • Mandatory (Company) – Product companies / startups, preferably Series A to Series D
  • Mandatory (Note) - Candidate's overall experience should not be more than 7 Yrs
  • Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks
  • Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
  • Preferred (Skill) – Experience with fine-tuning (LoRA / QLoRA) or open-source model deployment (vLLM / Ollama)
  • Preferred (Frontend) – Basic ability to build or contribute to frontend (React or similar)
  • Highly Preferred (Education) – Candidates from Tier-1 institutes (IITs, BITS, NITs, IIITs, top global universities)
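The retrieval step of the RAG pipelines referenced above reduces to ranking chunk embeddings by similarity to the query embedding; a toy sketch, with tiny hand-made vectors standing in for real embedding-model output:

```python
import math

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, chunks, k=2):
    # chunks: list of (text, embedding) pairs, e.g. rows from a vector store.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

In production the sort is replaced by an approximate-nearest-neighbour index, but the ranking criterion is the same.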


Read more
Zeuron.AI


1 candid answer
Kavitha Rajan
Posted by Kavitha Rajan
Bengaluru (Bangalore)
1 - 2 yrs
₹11L - ₹12L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Computer Vision
Flutter
Embedded C
+2 more

Job Title: Software/Hardware Engineer (IIT/NIT)

Location: Bangalore

Website: https://www.zeuron.ai

Experience: 1 Year

CTC: ₹12 LPA


About the Company

Zeuron.ai is a Bangalore-based deep-tech startup founded in 2019, focused on building brain-inspired computing and AI-driven healthcare solutions. The company combines neuroscience, AI, and gaming to create innovative digital therapeutics and neurotechnology platforms for improving brain health, rehabilitation, and overall well-being.

About the Role

We are looking for a highly motivated Software/Hardware Engineer from premier institutes (IIT/NIT) with strong fundamentals and a passion for building scalable and efficient systems. This role offers an opportunity to work on cutting-edge technology and solve real-world problems.

 

Key Responsibilities

Design, develop, and optimize software/hardware solutions

Work on system architecture, debugging, and performance improvements

Collaborate with cross-functional teams (product, design, operations)

Participate in code reviews, testing, and deployment processes

Contribute to innovation and continuous improvement initiatives

 

Requirements

B.Tech/M.Tech from IITs/NITs (Computer Science, Electronics, Electrical, or related fields)

1 year of experience (internships/project experience considered)

Strong programming skills (C/C++/Python/Java) or hardware fundamentals (embedded systems, VLSI, circuit design)

Good understanding of data structures, algorithms, and system design

Problem-solving mindset with strong analytical skills


Preferred Skills

Experience with embedded systems, IoT, or product development

Knowledge of cloud platforms or system-level programming

Proficiency in computer vision, Flutter, JavaScript, and AI/ML

Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹8L / yr
Databricks
ETL
PySpark
Apache Spark
CI/CD
+7 more

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV are mandatory


Job Description:


* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).

* Develop scalable, high performance data solutions using Spark distributed processing.

* Lead engineering initiatives focused on automation, performance tuning, and platform modernization.

* Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.

* Collaborate with cross-functional teams to translate business needs into technical solutions.

* Ensure data quality, governance, and security across all processes.

* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.

* Participate in code reviews and develop reusable engineering frameworks.

* Use AI tools to improve productivity and support daily engineering activities.

* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation


Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.

* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).

* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).

* Strong proficiency in Python for data processing, automation, and framework development.

* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.

* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.

* Strong experience with CI/CD and Git-based development workflows.

* Proficiency in data modeling and ETL/ELT pipeline design.

* Experience with automation frameworks and scheduling tools.

* Solid understanding of distributed systems and big data concepts
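The analytical SQL functions listed above (window functions, ranking) can be tried against in-memory SQLite, whose window syntax is close to Spark SQL; the table and data below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('north', 100), ('north', 300), ('south', 200), ('south', 50);
""")
# Rank each sale within its region by amount - a typical analytic query.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
```

The `PARTITION BY` clause is what distinguishes a window function from a plain `GROUP BY`: every input row survives, annotated with its rank within its partition.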

Read more
Hashone Career
Madhavan I
Posted by Madhavan I
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹28L / yr
SQL
skill iconPython
AtScale

Summary:

Data Engineer/Analytics Engineer with experience in semantic layer modeling using AtScale, building scalable data pipelines, and delivering high-performance analytics solutions on cloud platforms.




 Responsibilities

• Build and maintain ETL/ELT pipelines for large-scale data

• Develop semantic models, cubes, and metrics in AtScale

• Optimize query performance and BI dashboards

• Integrate data platforms (Snowflake, Databricks, BigQuery)

• Collaborate with analysts and business teams




 Skills

• SQL, Python/Scala

• Data modeling (star schema, OLAP)

• AtScale (semantic layer)

• Spark, dbt, Airflow

• BI tools (Tableau, Power BI, Looker)

• AWS / GCP / Azure



 Experience

• 3–8+ years in data/analytics engineering

• Experience with enterprise data platforms and BI systems

Read more
Quantiphi


3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Up to ₹45L / yr (varies)
MLOps
Python
Databricks
Windows Azure
Amazon Web Services (AWS)

We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.

This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.


Responsibilities

  • Design, develop, and implement machine learning models and algorithms to solve complex business problems.
  • Collaborate with data scientists to transition models from research and development into production-ready systems.
  • Build and maintain scalable data pipelines for ML model training and inference using Databricks.
  • Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
  • Deploy and manage ML models in production environments on Azure, leveraging services such as:
  • Azure Machine Learning
  • Azure Kubernetes Service (AKS)
  • Azure Functions
  • Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
  • Ensure the reliability, performance, and scalability of ML systems in production.
  • Monitor model performance, detect model drift, and implement retraining strategies.
  • Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
  • Document model architecture, data flows, and operational procedures.
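One conventional way to implement the drift monitoring described above is the Population Stability Index over binned score distributions; the 0.2 threshold and the bin proportions below are common rules of thumb, not prescriptions from this posting:

```python
import math

def psi(expected: list, actual: list) -> float:
    # expected/actual: proportions per bin (each summing to ~1.0),
    # e.g. training-time vs. production score histograms.
    eps = 1e-6  # avoid log(0) for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

def drift_detected(expected, actual, threshold: float = 0.2) -> bool:
    # Rule of thumb: PSI > 0.2 suggests a meaningful distribution shift.
    return psi(expected, actual) > threshold
```

A scheduled job computing this over recent inference logs is a simple trigger for the retraining strategies the responsibilities mention.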

Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.

Experience

  • Minimum 3+ years of professional experience as an ML Engineer or in a similar role.

Required Skills

  • Strong proficiency in Python for data manipulation, machine learning, and scripting.
  • Hands-on experience with machine learning frameworks, such as:
  • Scikit-learn
  • TensorFlow
  • PyTorch
  • Keras
  • Demonstrated experience with MLflow for:
  • Experiment tracking
  • Model management
  • Model deployment
  • Proven experience working with Microsoft Azure cloud services, specifically:
  • Azure Machine Learning
  • Azure Databricks
  • Related compute and storage services
  • Solid experience with Databricks for:
  • Data processing
  • ETL pipelines
  • ML model development
  • Strong understanding of MLOps principles and practices, including:
  • CI/CD for ML
  • Model versioning
  • Model monitoring
  • Model retraining
  • Experience with containerization and orchestration technologies, including:
  • Docker
  • Kubernetes (especially AKS)
  • Familiarity with SQL and data warehousing concepts.
  • Experience working with large datasets and distributed computing frameworks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice-to-Have Skills

  • Experience with other cloud platforms (AWS or GCP).
  • Knowledge of big data technologies such as Apache Spark.
  • Experience with Azure DevOps for CI/CD pipelines.
  • Familiarity with real-time inference patterns and streaming data.
  • Understanding of Responsible AI principles, including fairness, explainability, and privacy.

Certifications (Preferred)

  • Microsoft Certified: Azure AI Engineer Associate
  • Databricks Certified Machine Learning Associate (or higher) 
Read more
Verse
Ravi K
Posted by Ravi K
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹20L / yr
Python
FastAPI
PostgreSQL
Neo4J
LangGraph

Founding Engineer (Bangalore)


The problem:

Business enterprises overpay vendors, on every batch of invoices, every month, because the data that would catch it lives in different systems. We are building an AI agent that processes invoices end-to-end, reasons across all the relevant sources, flags genuine discrepancies, and acts, without a human having to investigate each one.


What you will own

Everything engineering. Schema design to deployment to the 2am fix when something breaks in production. There is no tech lead above you. There is no platform team. There is the architecture, you, and the founders. Concretely, this means building:

  • A multi-stage agentic pipeline that takes a vendor invoice and produces a structured decision - fully autonomous for clear cases, escalating to human review for genuinely ambiguous ones. We use LangGraph, but if you've built equivalent systems with Temporal, Prefect, or custom state machines with LLM orchestration, that works
  • An LLM-powered extraction layer that handles real invoices - scanned PDFs, stamped documents, inconsistent layouts - and returns structured output
  • A graph data model that connects invoices to various sources and can traverse those relationships to detect discrepancies
  • ERP connectors, GST validation logic, and a write-back layer that closes the loop


What we need

  • Strong Python. Async FastAPI, clean service boundaries, tests that actually catch bugs. You have shipped Python backends that handled real production load
  • Solid Postgres. Complex queries, schema design, migrations without downtime, row-level security for multi-tenant data. pgvector is a plus - if not, you pick it up fast
  • LLM API experience in production. You have called an LLM API for something that real users depended on. You know about structured output, retry logic, cost management, prompt versioning. A side project counts if it was genuinely deployed
  • Comfort with graph data models. You understand when a graph is the right structure and when it is not. You do not need deep Neo4j production experience - you need to understand graph relationships conceptually and be willing to learn Cypher. It is a 2-day ramp for the right person
  • Working knowledge of deployment. Deployed and operated production workloads on GCP. Cloud Run, Cloud SQL, Cloud Storage, Redis — you're comfortable across the stack. If you've done it on AWS, the translation isn't hard, but GCP is where we are
  • You own things. Not "I contributed to" - you designed it, shipped it, and fixed it when it broke. That pattern needs to be visible in your history
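As a conceptual illustration of the graph traversal mentioned above, a BFS over an adjacency list can surface invoices that never connect to a purchase order; the graph shape and the rule here are assumptions for the sketch, not the product's actual data model:

```python
from collections import deque

def reachable(graph: dict, start: str) -> set:
    # BFS over an adjacency-list graph, e.g. invoice -> PO -> contract.
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def missing_po(graph: dict, invoice: str) -> bool:
    # Flag an invoice whose relationship closure contains no purchase order.
    return not any(n.startswith("po:") for n in reachable(graph, invoice))
```

In a real graph store the same question becomes a short Cypher pattern match; the point is that discrepancy detection is a reachability question, not a join.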


Good to have, not mandatory

  • Built an agentic pipeline with multiple stages
  • Any fintech, P2P domain experience - even tangential
  • Worked at a startup with under 20 people
  • Has a GitHub, blog, or writeup that shows how you think about a hard technical problem


What you get

  • The hardest engineering problem you would have worked on. This is not CRUD with an LLM bolted on
  • Real ownership. First engineering hire. Your architectural decisions will be in this product five years from now
  • Equity that matters. ESOP - Open to discussion. We are pre-seed - this is a bet, not a guarantee. We will not pretend otherwise
  • No meetings tax. You work directly with the founders. The product is specified clearly. You know what you are building and why


Honest about stage: We do not have a production ready infra yet. We have a complete architecture specification and a working prototype. If you need the stability of an established engineering org, this is not the right moment. If you want to build something real from zero and own a meaningful piece of it, it is.


The founders

One of us has spent 20 years building revenue and operational engines at companies where there was no playbook - part of the pilot team that established the world's largest search company's direct sales operations in India, managed global operations for a global mobile advertising platform, scaled a B2C platform to become one of India’s leading edtech platforms and most recently worked on building an enterprise Agentic Voice AI platform. The other has spent 15 years taking AI from demo to production in domains where failure is expensive - voice, lending, and conversational systems across a Series D conversational AI company, a major telco, a Big 4, and a leading NBFC.


Two IIT/IIM alumni who have both watched AI work in enterprise, and know exactly what it takes to get it there. We are not building this product because it sounds interesting. We are building it because we have both sat across the table from CFOs who know they are losing margin and have no tool capable of doing anything about it.

Read more
Improving
Rohini Jadhav
Posted by Rohini Jadhav
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹35L / yr
Python
Kubernetes
Jenkins
CI/CD
Docker
+1 more

What are we looking for?

  • You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  • You are able to manage multi-region clusters for disaster recovery.
  • You have a good understanding of AWS stack.
  • You have experience of production level in Kubernetes. 
  • You are comfortable coding/programming and can do so whenever required. 
  • You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
  • You love automating things, sometimes even things that seem impossible to automate: one of our engineers used Ansible to set up the Ubuntu workstation and runs a playbook every time something has to be installed.
  • You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics. Because you know that words are easy to talk about but there is a fair amount of work to build such a system in practice.
  • You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
  • You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  • You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.

What you will be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions are continuously evolving in space but fundamentally you will be solving problems with simplest and scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open source projects, understand their capabilities, limitations and apply the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.
Read more
Bengaluru (Bangalore)
2 - 4 yrs
₹21L - ₹28L / yr
Artificial Intelligence (AI)
Python

Strong Junior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 1.5+ years of full-time experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)

Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)

Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.

Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)

Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems

Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)

Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)

Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem

Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)

Mandatory (Education) – Tier-1 institutes (IITs, BITS, IIITs); can be waived for candidates from top-notch product companies

Mandatory (Exclusion) – Avoid candidates who are only Prompt Engineers, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.

Read more
Talent Pro
Bengaluru (Bangalore)
4 - 7 yrs
₹37L - ₹48L / yr
Artificial Intelligence (AI)
Python

Strong Senior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production

Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems

Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects

Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines

Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases

Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)

Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation

Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects

Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations

Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking

Mandatory (Company) – Product companies / startups, preferably Series A to Series D

Mandatory (Note) – The candidate's overall experience should not exceed 7 years

Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks

Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
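For context on the RAG requirement above (chunking, embeddings, retrieval pipelines), a minimal sketch follows; the hashed bag-of-words `embed` is a toy stand-in for a real embedding model, not a production choice:

```python
import math
import re
import zlib

def chunk(text, size=40, overlap=10):
    """Split text into overlapping word windows (typical RAG pre-processing)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text, dim=256):
    """Toy stand-in for a real embedding model: hashed bag-of-words, L2-normalised."""
    vec = [0.0] * dim
    for tok in re.findall(r"\w+", text.lower()):
        vec[zlib.crc32(tok.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, chunks, k=2):
    """Return the top-k chunks by cosine similarity to the query."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(c))), c) for c in chunks]
    return [c for _, c in sorted(scored, key=lambda s: -s[0])[:k]]
```

In production the list scan would be replaced by a vector store and `embed` by a model endpoint; the chunk → embed → retrieve structure stays the same.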

AI-powered content creation and automation platform


Agency job
via Uplers by Shrishti Singh
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹28L / yr
Python
NodeJS (Node.js)
TypeScript
Artificial Intelligence (AI)
Generative AI
+2 more

Software Engineer

Onsite - HSR Bangalore

6 Days work from Office (Flexible working hours)


Product is a PowerPoint AI assistant used by consulting companies and Fortune 500 teams. A typical professional spends 1 to 3 hours creating one slide. With the product, they create a v1 of their entire deck in 10 minutes and make changes like “turn this table into a chart” in seconds, directly within PowerPoint.

In the next 2 years, our goal is to forever change the way business presentations are made.


Who are we?

  • small, strong team of 5
  • founders are CS graduates from IIT Kharagpur with a specialisation in AI
  • work 6 days a week from our office in HSR Layout in Bangalore
  • funded by Y Combinator and other amazing investors
  • used by consulting companies and Fortune 500 teams


Your responsibilities (in order)

  • Design, implement, test, and deploy full features
  • Design and implement a robust infrastructure to enable rapid development and automated testing
  • Look at usage data to iterate on features


What we’re looking for

  • Undergraduate or master's in Computer Science or equivalent degree
  • 2+ years of backend or DevOps software engineering experience
  • Experience with TypeScript (JavaScript) or Python


You’ll be a good fit if

  • You want to work on a product that can change the way a very large number of people work
  • The chaos of high growth and things breaking is exciting to you
  • You are a workaholic, looking to upskill faster than most people think is possible. This role is not a good fit for you if you’re looking to prioritise work-life balance.
  • You prefer working in-person with other smart people who are excited and passionate about what they’re building
  • You love solving very hard problems at a rapid pace. We discuss timelines in days or weeks, so you’ll constantly be expected to ship really high-quality work.



Perks

  • Comprehensive health insurance for you and dependents
  • Workstation enhancements
  • Subscriptions to AI tools such as Cursor, ChatGPT, etc.

(If there's anything else we can do to make your work more enjoyable, just ask)


If you are interested in proceeding, we would be happy to move your profile to the next stage of the evaluation process.

Kindly share the following details to help us take this forward :


  • Current CTC (Fixed + Variable):
  • Expected CTC:
  • Notice Period (If currently serving, please mention your Last Working Day)
  • Details of any active offers in hand (if applicable)
  • Expected/Available Date of Joining (if applicable)
  • Attach Updated CV:
  • Attach Github Link / Leet code link or other:
  • Current Location:
  • Preferred Location:
  • Reason for job change:
  • Reason for relocation (if applicable):
  • Are you comfortable with 6 days WFO (flexible working hours)? (Yes / No):

Oil and Gas Industry (petroleum refinery)


Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹25L / yr
Python
MLOps
Machine Learning (ML)
API
CI/CD
+5 more

🔹 Role: Python Engineer – Python & MLOps

📍 Location: Bellandur, Bangalore

🕐 Work Timings: 01:30 PM – 10:30 PM

🏢 Work Mode: Monday (WFH), Tuesday–Friday (WFO)

📅 Experience: 8-12 Years (Ideal: 8-10 Years)

🔹 Role Overview

This role focuses on building and maintaining a production-grade AI/ML platform. You will work on scalable Python systems, MLOps pipelines, APIs, and CI/CD workflows in an enterprise environment.

🔹 Key Responsibilities

✔ Develop production-grade Python applications using OOP principles

✔ Build and enhance MLOps pipelines (training, validation, deployment)

✔ Design and optimize REST APIs with OpenAPI/Swagger

✔ Implement async programming for high-performance systems

✔ Work on CI/CD pipelines (Azure Pipelines / GitHub Actions)

✔ Ensure clean, testable, and maintainable code (PyTest, TDD)

🔹 Required Skills

✔ Strong Python (OOP, modular design)

✔ MLOps & CI/CD pipeline experience

✔ REST API development

✔ Async programming (async/await, concurrency)

✔ Pandas / Polars & Scikit-learn

✔ JSON Schema–driven development

✔ Testing using PyTest
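The async-programming requirement above can be illustrated with a minimal `asyncio` sketch; `fetch_score` is a hypothetical stand-in for an I/O-bound call such as a model-serving API:

```python
import asyncio

async def fetch_score(model_id: str) -> dict:
    # Hypothetical I/O-bound call; sleep simulates network latency
    # without blocking the event loop.
    await asyncio.sleep(0.01)
    return {"model": model_id, "score": 0.9}

async def score_all(model_ids):
    # gather() runs the calls concurrently, so total wall time is
    # roughly one call's latency rather than the sum of all of them.
    return await asyncio.gather(*(fetch_score(m) for m in model_ids))

results = asyncio.run(score_all(["m1", "m2", "m3"]))
```

`asyncio.gather` preserves input order in its results, which keeps downstream processing deterministic even though the calls overlap in time.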

🔹 Nice to Have

➕ Azure ML SDK

➕ Pydantic

➕ Azure Cosmos DB

➕ Experience with large enterprise platforms

enParadigm

at enParadigm

2 candid answers
3 products
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Upto ₹16L / yr (Varies)
Java
Python
NodeJS (Node.js)
Go Programming (Golang)
PHP
+4 more

We are looking for a Full Stack Developer to build scalable software solutions and contribute across the entire software development lifecycle—from conception to deployment.

You will work closely with cross-functional teams and should be comfortable with both front-end and back-end technologies, modern frameworks, and third-party libraries. If you enjoy building visually appealing, functional applications and thrive in Agile environments, we’d love to connect.


Current Technologies Used

  • Backend: FastAPI (active), PHP (legacy), Java (legacy)
  • Frontend: Svelte, TypeScript, JavaScript

Experience with Python and PHP is a plus, but not mandatory.


Role Responsibilities

  • Collaborate with development teams and product managers to ideate software solutions
  • Design client-side and server-side architecture
  • Build visually appealing front-end applications
  • Develop and manage efficient databases and applications
  • Write effective and scalable APIs
  • Test software for responsiveness and performance
  • Troubleshoot, debug, and upgrade systems
  • Implement security and data-protection measures
  • Build mobile-responsive features and applications
  • Create and maintain technical documentation

Candidate Requirements:


Education

  • B.Tech / BE in Computer Science, Statistics, or a relevant field

Experience

  • 2–4 years as a Full Stack Developer or in a similar role

Location

  • Bangalore (Hybrid)

Skill Set – Role Based

  • Experience building web applications
  • Familiarity with common technology stacks
  • Knowledge of front-end languages and libraries: HTML, CSS, JavaScript, XML, jQuery
  • Knowledge of back-end languages and frameworks: Java, Python, PHP; Angular, React, Svelte, Node.js
  • Familiarity with databases (PostgreSQL, MySQL, MongoDB), web servers (Apache), and UI/UX principles

Skill Set – Behavioural

  • Excellent communication and teamwork skills
  • Strong attention to detail
  • Good organizational skills
  • Analytical mindset


enParadigm

at enParadigm

2 candid answers
3 products
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Upto ₹10L / yr (Varies)
Java
Python
Selenium WebDriver
Cypress
Playwright

Job Description:


Test Design & Execution

Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.


Automation Development

Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.


Defect Management

Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.


API & Backend Testing

Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.


Collaboration

Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.


CI/CD Integration

Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.


Required Skills & Experience

  • Minimum 2+ years of experience in Software Quality Assurance or Automation Testing.
  • Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
  • Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
  • Strong experience in functional, regression, integration, and UI testing.
  • Solid understanding of SQL for data validation and backend testing.
  • Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.
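The API-testing and SQL-validation skills listed above can be sketched as a small automated check; the payload, schema, and table here are hypothetical stand-ins for a real endpoint's response and the production database:

```python
import sqlite3

def validate_order_payload(payload: dict) -> bool:
    """Schema + value rules a QA engineer might automate for an API response."""
    required = {"order_id", "status", "amount"}
    return (required <= payload.keys()
            and payload["amount"] >= 0
            and payload["status"] in {"NEW", "PAID", "CANCELLED"})

def backend_matches(payload: dict, conn) -> bool:
    """Backend validation: the API response must agree with the database row."""
    row = conn.execute(
        "SELECT status, amount FROM orders WHERE id = ?", (payload["order_id"],)
    ).fetchone()
    return row == (payload["status"], payload["amount"])

# In a real suite these would be pytest test functions hitting a live endpoint;
# here an in-memory SQLite database stands in for the backend store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT, amount REAL)")
conn.execute("INSERT INTO orders VALUES ('o-1', 'PAID', 49.5)")
payload = {"order_id": "o-1", "status": "PAID", "amount": 49.5}
```

The same two-layer pattern (response validation plus database cross-check) is what tools like Postman assertions and SQL backend tests automate at scale.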


Desirable Skills

  • Experience in mobile application testing (Android/iOS).
  • Exposure to performance testing tools such as JMeter.
  • Experience working with cloud platforms like AWS or Azure.
Mid Size Product Engineering Services Company


Agency job
via Vidpro Consultancy Services by Vidyadhar Reddy
Remote, Bengaluru (Bangalore), Chennai, Hyderabad
20 - 26 yrs
₹65L - ₹120L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
Javascript
+18 more

This role will report to the Chief Technology Officer


You Will Be Responsible For


* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.

* Leading a team in building a high-performing and scalable SaaS product.

* Conducting code reviews to maintain code quality and follow best practices

* DevOps practice development on promoting automation, including asset creation, enterprise strategy definition, and training teams

* Developing and building microservices leveraging cloud services

* Working on application security aspects

* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.

* Creating a culture of innovation that enables the continued growth of individuals and the company

* Working closely with Product and Business teams to build winning solutions

* Leading talent management, including hiring, developing, and retaining a world-class team


Ideal Profile


* You possess a degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience building a SaaS / Fintech platform.

* Proficiency in MERN / Java / Full Stack.

* Led a team in optimizing the performance and scalability of a product

* You have extensive experience with DevOps environment and CI/CD practices and can train teams.

* You're a hands-on leader, visionary, and problem solver with a passion for excellence.

* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.


What's on Offer?


* Exciting opportunity to drive the Engineering efforts of a reputed organisation

* Work alongside & learn from best-in-class talent

* Competitive compensation + ESOPs

Mercari, Inc

at Mercari, Inc

2 candid answers
1 video
Ashwin S
Posted by Ashwin S
Bengaluru (Bangalore)
6 - 9 yrs
Best in industry
Machine Learning (ML)
PyTorch
TensorFlow
NumPy
Python
+2 more

Introduction

About Us:


Mercari is a Japan-based C2C marketplace company founded in 2013 with the mission to “Create value in a global marketplace where anyone can buy & sell.” From being the first tech unicorn from Japan before its IPO in 2018, we have come a long way towards becoming a global player, and we continue to work diligently on our transformation journey with a strong focus on our mission.

Since its inception, Mercari Group has worked to grow its services, investing in both our people and technology. Over time Mercari has expanded from being the top player in the C2C marketplace in Japan to new geographies like the U.S. We have also successfully launched new businesses such as Merpay, which is a mobile payment service platform with a vision to create a society where anyone can realize their dreams through a new ecosystem centered not only on payment service but also on credit. Today, Mercari Group is made up of multiple subsidiary businesses including logistics, B2C platform, blockchain, and sports team management.


For our services to be utilized by people worldwide; however, there is still a mountain of work ahead of us. This endeavor naturally requires the capability of the best talent and minds, and that is exactly the reason for us to launch the India Center of Excellence. With your help, we will continue to take on the world stage and strive to grow into a successful global tech company.


Our Culture:

To achieve our mission at Mercari, our organization and each of our employees share the same values and perspectives. Our individual guidelines for action are defined by our four values: Go Bold, All for One, Be a Pro and Move Fast. Our organization is also shaped by our four foundations: Sustainability, Diversity & Inclusion, Trust & Openness, and Well-being for Performance. Regardless of how big Mercari gets, the culture will remain essential to achieving our mission and something we want to preserve throughout our organization. We invite you to read the Mercari Culture Doc which summarizes the behaviors and mindset shared by Mercari and its employees. We continue to build an environment where all of our members of diverse backgrounds are accepted and recognized, and where they can thrive while holding dear to Mercari’s culture.


Work Responsibilities

  • Machine learning engineers in the Recommendation domain develop the functions and services of the marketplace app Mercari by building and maintaining machine learning systems, such as recommender systems, while leveraging the necessary infrastructure and company-wide platform tools.
  • Mercari actively applies advanced machine learning technology to provide a more convenient, safer, and more enjoyable marketplace. Machine learning engineers use the cloud and Kubernetes to operate and improve these systems.


Bold Challenges

  • We are looking for people who are interested in our services, mission, and values, and want to work where engineers can go bold, use the latest technology, make autonomous decisions, and take on challenges at a rapid pace.
  • Develop and optimize machine learning algorithms and models to enhance recommendation system to improve discovery experience of users
  • Collaborate with cross-functional teams and product stakeholders to gather requirements, design solutions, and implement features that improve user engagement
  • Conduct data analysis and experimentation with large-scale data sets to identify patterns, trends, and insights that drive the refinement of recommendation algorithms
  • Utilize machine learning frameworks and libraries to deploy scalable and efficient recommendation solutions.
  • Monitor system performance and conduct A/B testing to evaluate the effectiveness of features.
  • Continuously research and stay updated on advancements in AI/machine learning techniques and recommend innovative approaches to enhance recommendation capabilities.
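As a toy illustration of the recommendation work described above (hypothetical interaction data, not Mercari's actual system), a minimal item-to-item collaborative-filtering sketch:

```python
import math

# Toy user -> purchased-item interactions (hypothetical data)
interactions = {
    "u1": {"camera", "lens", "tripod"},
    "u2": {"camera", "lens"},
    "u3": {"camera", "phone"},
    "u4": {"phone", "case"},
}

def item_cosine(a: str, b: str) -> float:
    """Cosine similarity between two items over the binary user-item matrix."""
    users_a = {u for u, items in interactions.items() if a in items}
    users_b = {u for u, items in interactions.items() if b in items}
    if not users_a or not users_b:
        return 0.0
    return len(users_a & users_b) / math.sqrt(len(users_a) * len(users_b))

def recommend(user: str, k: int = 2):
    """Rank items the user has not interacted with by summed similarity."""
    seen = interactions[user]
    catalog = set().union(*interactions.values()) - seen
    scores = {c: sum(item_cosine(c, s) for s in seen) for c in catalog}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Production recommenders replace this exhaustive pairwise scan with learned embeddings and approximate nearest-neighbour retrieval, and validate changes through A/B tests as the responsibilities above describe.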


Minimum Requirements:

  • 5–9 years of professional experience in end-to-end development of large-scale ML systems in production
  • Demonstrated experience delivering end-to-end machine learning solutions, from experimentation to model deployment, including backend engineering and MLOps, in large-scale production systems.
  • Experience using common machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, NumPy, pandas)
  • Deep understanding of machine learning and software engineering fundamentals
  • Basic knowledge and skills related to monitoring system, logging, and common operations in production environment
  • Communication skills to carry out projects in collaboration with multiple teams and stakeholders


Preferred skills:

  • Experience developing Recommender systems utilizing large-scale data sets
  • Basic knowledge of enterprise search systems and related stacks (e.g. ELK)
  • Functional development and bug fixing skills necessary to improve system performance and reliability
  • Experience with technology such as Docker and Kubernetes
  • Experience with cloud platforms (AWS, GCP, Microsoft Azure, etc.)
  • Microservice development and operation experience with Docker and Kubernetes
  • Utilizing deep learning models/LLMs in production
  • Experience in publications at top-tier peer-reviewed conferences or journals


Employment Status

Full-time

Office

Bangalore

Hybrid workstyle

  • We believe in high performance and professionalism. We work from office for 2 days/week and work from home 3 days/week
  • To build a strong & highly-engaged organization in India, we highly encourage everyone to work from our Bangalore office, especially during the initial office setup phase
  • We will continue to review and update the policy to address future organizational needs

Work Hours

  • Full flextime (no core time)

*Flexible to choose working hours other than team common meetings

Media


Owned Media

  • Mercari Engineering Portal
  • AI at Mercari portal
  • Mercan - Introduces the people that make Mercari
  • Mercari US Blog

Related Articles

  • Development Platforms and Platformers: On Rising to the Global Standard Ken Wakasa, Mercari CTO | mercan
  • “I'm Not a Talented Engineer” Insists the Member-Turned-Manager Revamping Our Internal CS Tool | mercan
  • Personalize to globalize: How Mercari is reshaping their app, their company, and the world | mercan
  • The Providers of the Safe and Secure Mercari Experience: The TnS Team, Introduced by Its Members! | mercan
Searce Inc

at Searce Inc

3 recruiters
Srishti Dani
Posted by Srishti Dani
Mumbai, Pune, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data migration
Datawarehousing
ETL
SQL
Google Cloud Platform (GCP)
+7 more

Lead Data Engineer


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

What you will wake up to solve.

  • Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
  • Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
  • Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
  • Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
  • Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
  • Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
  • Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
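The ETL-vs-ELT decision mentioned above can be grounded with a tiny batch ELT sketch; stdlib SQLite stands in for a warehouse such as BigQuery, Snowflake, or Redshift, and the table names are hypothetical:

```python
import sqlite3

def run_batch_elt(raw_rows):
    """Load raw rows first (E+L), then transform with SQL in-warehouse (T): the ELT pattern."""
    wh = sqlite3.connect(":memory:")  # stand-in for BigQuery / Snowflake / Redshift
    wh.execute("CREATE TABLE raw_events (user_id TEXT, amount_cents INTEGER)")
    wh.executemany("INSERT INTO raw_events VALUES (?, ?)", raw_rows)
    # Transform inside the warehouse: aggregate raw events into a reporting table
    wh.execute(
        "CREATE TABLE user_spend AS "
        "SELECT user_id, SUM(amount_cents) / 100.0 AS total "
        "FROM raw_events GROUP BY user_id"
    )
    return dict(wh.execute("SELECT user_id, total FROM user_spend"))

spend = run_batch_elt([("a", 150), ("a", 50), ("b", 300)])
```

In ELT the transformation runs inside the warehouse after loading, which is why modern MPP engines make it attractive; classic ETL would instead transform the rows in the pipeline code before loading.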


Welcome to Searce


The AI-Native tech consultancy that's rewriting the rules.

Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads. 


Functional Skills 

the solver personas.

  • The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
  • The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
  • The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
  • The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
  • The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.


Experience & Relevance 

  • Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
  • Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
  • AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
  • Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
  • Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.


Searce Inc

at Searce Inc

3 recruiters
Jatin Gereja
Posted by Jatin Gereja
Bengaluru (Bangalore), Mumbai, Pune
10 - 18 yrs
Best in industry
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Enterprise Data Warehouse (EDW)
Data modeling
Big Data
+9 more

Director - Data engineering


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

what you will wake up to solve.

1. Delivery & Tactical Rigor

  • Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
  • Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
  • Execution & Technical Resolution
  • Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
  • Quality Enforcement
  • Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.

2. Strategic Growth & Practice Scaling

  • Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
  • Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
  • Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.

3. Leadership & Unit Management

  • Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
  • Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
  • Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.


Welcome to Searce

The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.

We don’t do traditional.

As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.


Functional Skills

1. Delivery Management & Operational Excellence

  • Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
  • Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
  • SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
  • Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.

2. Architectural Implementation & Technical Oversight

  • Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
  • Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
  • Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
  • DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.

3. Unit Management & Commercial Execution

  • Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
  • Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
  • Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
  • Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.

Tech Superpowers

  • Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
  • End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
  • Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
  • Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
  • AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
  • Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
  • Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
  • Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. A business-first, data-second, outcome-focused technology leader.

Experience & Relevance

  • Executive Experience: 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager- or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
  • Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
  • Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
  • Operational Leadership: Proven expertise in managing and scaling large professional services organizations, demonstrated ability to optimize utilization, resource allocation, and operational expense.
  • Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
  • Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.

Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native, engineering-led modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.

Read more
Background is in Oil&Gas

Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹30L / yr
Apache Spark
Databricks
Delta Lake
CI/CD
Python
+5 more

Role: Sr. Azure Data Engineer

Experience: 8–10 Years

Work Timings: 1:30 PM – 10:30 PM IST

Location: Bellandur, Bengaluru (Work from Office)

Company: Chevron

Employment Type: 6–12 month contract

 

Role Overview

We are seeking an experienced Senior Data Engineer to design and deliver scalable cloud data solutions on Azure. The ideal candidate will have strong expertise in Databricks, PySpark, and modern data architectures, with exposure to energy domain standards like OSDU.

Key Responsibilities

  • Architect and design robust Azure-based data solutions using Databricks, ADLS, and PaaS services
  • Define and implement scalable data Lakehouse architectures aligned with OSDU standards
  • Build and manage end-to-end data pipelines for batch and real-time processing using PySpark
  • Establish data governance frameworks including metadata, lineage, security, and access control
  • Implement DevOps best practices (CI/CD, Azure Pipelines, GitHub, automated deployments)
  • Collaborate with stakeholders to translate business needs into technical solutions
  • Develop and maintain architecture documentation, solution patterns, and standards
  • Provide technical leadership and mentorship to engineering teams
  • Optimize solutions for performance, cost, reliability, and security
  • Ensure alignment with enterprise architecture and compliance standards
  • Drive adoption of modular and reusable cloud data components
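On Databricks, the batch and real-time pipelines above would typically be written in PySpark against Delta tables; the overall extract → transform → load shape can be illustrated with a self-contained plain-Python sketch. The field names and unit conversion are hypothetical, chosen only to suggest an energy-domain record:

```python
def extract(source):
    """Pull raw records; in practice this would read from ADLS or a Delta table."""
    return list(source)

def transform(records):
    """Normalize and filter — logic a PySpark job would express as DataFrame ops."""
    out = []
    for r in records:
        if r.get("well_id") is None:  # drop incomplete records
            continue
        out.append({
            "well_id": r["well_id"],
            # convert feet to metres (hypothetical normalization step)
            "depth_m": round(float(r["depth_ft"]) * 0.3048, 2),
        })
    return out

def load(records, sink):
    """Append to the target; in practice a Delta Lake append or MERGE."""
    sink.extend(records)
    return len(records)

raw = [{"well_id": "W-1", "depth_ft": "1000"},
       {"well_id": None, "depth_ft": "500"}]
sink = []
written = load(transform(extract(raw)), sink)
print(written, sink[0]["depth_m"])  # 1 304.8
```

The same three-stage shape scales from this toy to a Databricks job; only the execution engine and storage layer change.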

Required Skills & Qualifications

Core Technical Skills

  • Azure Databricks, Apache Spark (PySpark), Delta Lake, Unity Catalog
  • Azure Data Lake Storage (ADLS), Azure Data Factory, Synapse Analytics
  • Strong experience in Python-based data engineering
  • Data pipeline development (batch + real-time)

Architecture & Advanced Skills

  • Data Lakehouse architecture and distributed systems
  • Microservices, APIs, and integration frameworks
  • OSDU (Open Subsurface Data Universe) or similar energy data models

DevOps & Tools

  • CI/CD tools: Azure Pipelines, GitHub Actions
  • Infrastructure as Code: Terraform or similar

Other Skills

  • Data governance, security, compliance, and cost optimization
  • Strong analytical and problem-solving skills
  • Excellent communication and stakeholder management


Read more
Srijan Technologies

Posted by Devendra Singh
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹15L - ₹26L / yr
Python
React.js
Generative AI (GenAI)

About Us:

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.


We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.


Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.

 

Experience Range: 4-8 Years

Role: Full Stack Developer


Duties: 

As a Full Stack Engineer, you will work in small teams in a highly collaborative way, use the latest technologies and enjoy seeing the direct impact of your work. Our highly skilled system architects and development managers configure software packages and build custom applications, creating the foundation for rapid and cost-effective implementation of systems that maximize value from day one. Our development teams are small and flexible, employing agile methodologies to quickly provide our consultants with the solutions they need. We combine the latest open source technologies with traditional enterprise software products. 

 

The Role: 

 

We create both rapid prototypes, usually in 2 to 3 weeks, and full-scale applications, typically within 2 to 3 months, by working collaboratively and iteratively through design and development to deliver fully functioning web-based and mobile applications that meet business goals. Our Front-End Developers contribute to the architecture across the technology stack, from database to native apps. 


Skills: 

5–9 years of experience, with a proven record of hands-on software development in at least one of the following languages: Java, C#, C/C++, Python, JavaScript, or Ruby, plus modern frontend proficiency in React and TypeScript. Demonstrated ownership of delivering end-to-end solutions (from design through production support), with strong proactivity in identifying opportunities, anticipating risks, and driving improvements without waiting for direction. 

Significant experience designing, implementing, and operating Web Services and APIs (REST, SOAP, RPC, RMI) including API monitoring/observability and performance tuning. Solid understanding of network communication protocols (HTTP, TCP/IP, UDP, SMTP, DNS) and distributed system behaviors. 

Capable of applying best coding practices, design patterns, and evaluating tradeoffs in complex, microservices-based architectures. Well versed in cloud computing (AWS), automated testing, CI/CD, and DevOps tooling; comfortable owning reliability, scalability, and operational excellence. Bonus: hands-on knowledge of Terraform (infrastructure as code). 

Experience with relational data stores (MySQL, SQL Server, Oracle) and non-relational technologies, with strong proficiency in MongoDB (schema design, indexing, performance optimization), plus exposure to Elasticsearch, Cassandra, and related ecosystems. Strong professional experience with frameworks such as Node.js, AngularJS, Spring, Guice, and expertise building mobile, responsive/adaptive applications. 

First-hand understanding of Agile development methodologies, with a commitment to engineering excellence (e.g., DRY, TDD, CI) and pragmatic delivery. 


Non-Technical: First and foremost, passionate about technology, especially AI and emerging/disruptive technologies, and excited about translating innovation into real product impact. Strong command of English (verbal and written), excellent interpersonal skills, and a highly collaborative mindset, able to partner effectively across engineering, product, design, and stakeholders. Sound problem-solving ability to quickly process complex information and communicate it clearly and simply. Demonstrated leadership/mentorship, accountability, and a self-starter attitude suited to environments that foster entrepreneurial thinking. 


 What We Offer 

  •  Professional Development and Mentorship.
  •  Hybrid work mode with a remote-friendly workplace (6 times in a row Great Place To Work Certified).
  •  Health and Family Insurance.
  •  40+ Leaves per year along with maternity & paternity leaves.
  •  Wellness, meditation and Counselling sessions.


Read more
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore)
4 - 10 yrs
₹10L - ₹30L / yr
Python
SQL
Spark
Amazon Web Services (AWS)
Amazon S3
+13 more

Job Title : AWS Data Engineer

Experience : 4+ Years

Location : Bengaluru (HSR – Hybrid, 3 Days WFO)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.


🔥 Mandatory Skills :

Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security


🚀 Key Responsibilities :

  • Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
  • Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
  • Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
  • Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
  • Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
  • Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
  • Collaborate with data analysts and data scientists to deliver actionable insights
  • Work in an Agile environment to deliver high-quality data solutions
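The orchestration responsibility above (Airflow, Step Functions, or AWS-native tools) boils down to executing tasks in dependency order. The core idea can be sketched with the standard library's `graphlib`; the task names below are invented for illustration:

```python
from graphlib import TopologicalSorter

# Each key runs after the tasks it depends on — the same dependency model
# Airflow expresses with the >> operator or set_upstream().
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_redshift": {"transform"},
    "refresh_athena": {"transform"},
}

# static_order() yields a valid execution order respecting every edge.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

A real orchestrator adds scheduling, retries, and backfills on top, but the dependency resolution is exactly this topological sort.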

✅ Mandatory Skills :

  • Strong Python (including AWS SDKs), SQL, Spark
  • Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
  • Experience with DBT and ETL/ELT pipeline development
  • Workflow orchestration using Airflow / Step Functions
  • Knowledge of data lake formats (Parquet, ORC, Iceberg)
  • Exposure to DevOps practices (Terraform, CI/CD)
  • Strong understanding of data governance and security best practices
  • 4–7 years in Data Engineering, including 3+ years on AWS

➕ Good to Have :

  • Understanding of Data Mesh architecture
  • Experience with platforms like Data.World
  • Exposure to Hadoop / HDFS ecosystems

🤝 What We’re Looking For :

  • Strong problem-solving and analytical skills
  • Ability to work in a collaborative, cross-functional environment
  • Good communication and stakeholder management skills
  • Self-driven and adaptable to fast-paced environments

📝 Interview Process :

  1. Online Assessment
  2. Technical Interview
  3. Fitment Round
  4. Client Round
Read more
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore)
5 - 12 yrs
₹10L - ₹32L / yr
Python
Azure OpenAI
Databricks
Artificial Intelligence (AI)
Machine Learning (ML)
+6 more

Job Title : Azure Data Scientist (AI/ML)

Experience : 5 to 10 Years

Location : Bengaluru

Work Mode : Hybrid (4 Days WFO, Tue to Fri – Non-Negotiable)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a highly skilled Azure Data Scientist with strong expertise in AI/ML, Python, and cloud-based data platforms. The role involves building scalable ML solutions, working on GenAI & RAG use cases, and delivering business impact through data-driven insights.


🔥 Mandatory Skills :

Python, Azure Machine Learning, Databricks, AI/ML model development (5+ yrs), Statistics & Probability, EDA & Data Modeling, Machine Learning algorithms, GenAI/RAG experience


✅ Key Responsibilities :

  • Design, develop, and deploy AI/ML models to solve complex business problems
  • Perform Exploratory Data Analysis (EDA) for data cleaning, discovery, and insights
  • Build and optimize ML pipelines using Azure Machine Learning & Databricks
  • Work on GenAI applications, RAG implementations, and advanced analytics solutions
  • Collaborate with data engineers, business stakeholders, and domain experts
  • Translate complex data into actionable business insights
  • Manage model lifecycle (development, validation, deployment, monitoring)
  • Communicate model outputs and insights to technical & non-technical stakeholders
  • Drive innovation and contribute to AI/ML best practices and strategy
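The RAG responsibility above hinges on a retrieval step that ranks documents by vector similarity to a query. A stdlib-only toy using term-frequency vectors and cosine similarity shows the mechanics; a production system would use learned embeddings and a vector store instead, and the documents below are invented:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "quarterly revenue grew across all regions",
    "the pipeline failed due to a schema change",
    "schema evolution broke the ingestion pipeline",
]
top = retrieve("why did the pipeline fail schema", docs, k=1)
print(top[0])
```

In a full RAG loop, the retrieved passages are then injected into the LLM prompt as grounding context; only the similarity function and index change at scale.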

🧠 Required Skills (Must Have) :

  • Strong experience in Python (ML/AI development)
  • Hands-on with Azure Machine Learning & Databricks
  • Deep understanding of Mathematics, Probability, and Statistics
  • Expertise in Machine Learning & Data Science methodologies
  • Experience in EDA, data visualization, and model development
  • Exposure to GenAI, RAG, and ML application development
  • Minimum 5+ years of experience in AI/ML model development
  • Strong problem-solving and analytical skills

➕ Good to Have :

  • Experience with MLOps practices
  • Domain knowledge in Energy / Oil & Gas value chain
  • Experience in data visualization tools
  • Team collaboration or mentoring experience

🤝 What We’re Looking For :

  • Strong communication & stakeholder management skills
  • Ability to work in a cross-functional, global team environment
  • Self-driven, adaptable, and innovation-focused mindset

📝 Interview Process :

  1. Geektrust Assessment (Assemble)
  2. Technical Interview
  3. Fitment Round
  4. Client Round
Read more
A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
8 - 10 yrs
₹14L - ₹20L / yr
Python
React.js
Amazon Web Services (AWS)
Architecture
Leadership
+1 more

Responsibilities:

  • Lead architecture, technical decisions, and ensure code quality, scalability, and performance
  • Develop backend systems using Python & SQL; build APIs and optimize databases
  • Work with frontend (React/Angular) and API-driven architectures
  • Integrate AI/ML models and support analytics/LLM-based solutions
  • Manage cloud deployments (Azure/AWS) and implement CI/CD practices
  • Ensure system reliability, monitoring, and production readiness
  • Mentor team members, conduct reviews, and collaborate with cross-functional teams
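The "Python & SQL, build APIs and optimize databases" responsibility above can be illustrated with the standard library's `sqlite3`: parametrized queries (never string interpolation of user input) plus an index on the filtered column. The schema is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user TEXT, kind TEXT)")
conn.executemany(
    "INSERT INTO events (user, kind) VALUES (?, ?)",
    [("alice", "login"), ("bob", "login"), ("alice", "purchase")],
)

# An index on the filtered column turns a full table scan into an index lookup.
conn.execute("CREATE INDEX idx_events_user ON events (user)")

# Parametrized query: the driver binds the value safely, preventing SQL injection.
rows = conn.execute(
    "SELECT kind FROM events WHERE user = ? ORDER BY id", ("alice",)
).fetchall()
print(rows)  # [('login',), ('purchase',)]
```

The same two habits — bound parameters and indexes matched to query predicates — carry over unchanged to PostgreSQL, MySQL, or any cloud warehouse.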
Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
6 - 8 yrs
₹12L - ₹22L / yr
Java
Python
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Agile/Scrum
+4 more

Key Responsibilities:

  • Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
  • Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
  • Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
  • Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
  • Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
  • Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
  • Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
  • Contribute to the development of technical documentation and training materials.

Required Skillset:

  • Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
  • Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
  • Experience in designing and developing scalable, high-performance, and secure software solutions.
  • Strong understanding of software development methodologies, including Agile and Waterfall.
  • Excellent communication, interpersonal, and problem-solving skills.
  • Ability to work effectively in a fast-paced, dynamic environment.
  • Bachelor's or Master's degree in Computer Science or a related field.
  • Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Read more