Python Jobs in Bangalore (Bengaluru)


Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Germany-Headquartered Fast-Growing IT Consulting Company


Agency job
Bengaluru (Bangalore)
4 - 6 yrs
₹20L - ₹25L / yr
Python
Embedded C++
C
Go Programming (Golang)
Bash
+7 more

Location: Bengaluru, Kadugodi

Experience: 4-6 years

About company:

The client is a Germany-headquartered IT consulting and service organization. With over 25 years of expertise and a global presence, we are committed to customer excellence and focused on addressing niche areas of product engineering, process consulting, and software development in the automotive, railways, production automation, data management, and business IT domains.


Key Responsibilities:

  • Develop or enhance features to meet industry standards, safety regulations, and project specifications.
  • Collaborate with business stakeholders to understand business requirements.
  • Work closely with hardware engineers, QA, and the Scrum Master to integrate software solutions into embedded systems.
  • Identify and resolve technical issues within embedded systems, making critical decisions on system architecture and software design.
  • Improve processes and system performance, optimize code, and innovate in software design.
  • Work closely with vendors to design and implement edge AI solutions.


Requirements:

  • Must have a B.Tech/B.E., preferably in the ECE stream
  • Must have proficiency in Python, C/C++, and Go, plus Bash scripting
  • Must have strong fundamentals of the embedded development life cycle
  • Must have strong knowledge of Embedded Linux, Unix/Linux commands, RTOS, and SQL
  • Sound knowledge of the CAN/J1939 protocol, sensor data processing, and telemetry
  • Experience with tools like JIRA and Agile/Scrum methodology
  • Excellent communication skills and ability to collaborate with cross-functional teams
  • Ability to work on multiple projects and prioritize work effectively
  • Ability to work independently and as a team member
  • Strong analytical and problem-solving skills
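To make the CAN/J1939 requirement concrete, here is a minimal sketch of extracting a Parameter Group Number (PGN) from a 29-bit extended CAN identifier. It illustrates only the standard J1939 ID layout and is not production code.

```python
def j1939_pgn(can_id: int) -> int:
    """Extract the Parameter Group Number from a 29-bit J1939 CAN ID.

    Bit layout (MSB to LSB): priority (3), EDP (1), DP (1),
    PDU Format PF (8), PDU Specific PS (8), Source Address SA (8).
    """
    pf = (can_id >> 16) & 0xFF
    pgn = (can_id >> 8) & 0x3FFFF  # EDP + DP + PF + PS
    if pf < 240:        # PDU1 frame: PS is a destination address,
        pgn &= 0x3FF00  # so it is not part of the PGN
    return pgn

# 0x0CF00400 is the well-known EEC1 frame (engine speed): PGN 61444 (0xF004)
print(j1939_pgn(0x0CF00400))  # 61444
```

Telemetry pipelines typically apply this kind of decoding before mapping individual signals out of the 8-byte payload.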


Nice to Have:

  • Understanding of ADAS, Driver Monitoring Systems
  • Experience with embedded video coupled with edge AI


Byteridge
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Noida, Gurugram, Mumbai
5 - 7 yrs
₹16L - ₹25L / yr
Python
Amazon Web Services (AWS)
CUDA
GPU computing
Amazon EC2
+6 more

You will be at the forefront of Byteridge's AI infrastructure capabilities, helping customers unlock the full potential of foundation models through expert-level deployment on GPU infrastructure.

This highly technical role requires deep expertise in machine learning infrastructure, GPU optimization, and production ML systems, combined with the ability to translate complex technical concepts into customer success.

What You'll Do

Model Deployment & Optimization

• Lead end-to-end deployments of large language models on AWS infrastructure for strategic customers
• Design and implement training, fine-tuning, and inference pipelines using Amazon SageMaker AI
• Optimize model performance through GPU-level tuning, kernel optimization, and infrastructure configuration
• Deploy models on diverse GPU architectures including NVIDIA and AWS custom silicon (Trainium, Inferentia)

Infrastructure Architecture & Performance

• Architect scalable ML infrastructure using SageMaker AI Inference, HyperPod, and distributed training frameworks
• Implement CUDA-level optimizations and custom kernels for improved model performance
• Design storage and networking architectures optimized for high-throughput ML workloads
• Troubleshoot and resolve complex performance bottlenecks at the GPU driver and kernel level

Customer Engagement & Technical Leadership

• Partner with AWS AI Specialist Solution Architects and customer ML teams to understand model requirements and deployment constraints
• Provide technical guidance on model selection, fine-tuning strategies, and production best practices
• Conduct performance benchmarking and cost optimization analysis for ML workloads
• Share field insights with AWS product teams to influence infrastructure and service roadmaps


What We're Looking For

Core Qualifications

• Bachelor's degree in Computer Science, Engineering, or equivalent practical experience (Master's or PhD preferred)
• 5+ years of experience in machine learning infrastructure, model deployment, or GPU computing
• Strong programming skills in Python and experience with ML frameworks (PyTorch, TensorFlow, JAX)
• Deep understanding of LLM architectures, training methodologies, and inference optimization

Technical Expertise (High-Level Alignment)

• Hands-on experience training, fine-tuning, or deploying large language models in production
• Proficiency with GPU programming, CUDA, and kernel-level optimization techniques
• Experience with distributed training frameworks and multi-GPU/multi-node orchestration
• Strong knowledge of AWS core services: EC2 (GPU instances), S3, EFS, VPC, and networking


Preferred Experience

• Direct experience with Amazon SageMaker AI (Training, Inference, HyperPod) or equivalent ML platforms
• Understanding of GPU architectures (NVIDIA A100, H100) and AWS custom silicon (Trainium, Inferentia)

• Experience with model compression techniques (quantization, pruning, distillation)
• Knowledge of MLOps practices, model monitoring, and production ML system design
• Background in high-performance computing, distributed systems, or systems programming

Essential Attributes

• Ability to dive deep into technical problems and debug complex infrastructure issues
• Strong analytical skills with data-driven approach to optimization
• Excellent communication skills to explain complex technical concepts to diverse audiences
• Comfortable working in ambiguous, fast-paced environments with evolving requirements
• Ownership mindset with ability to drive projects from architecture to production


Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 10 yrs
Up to ₹38L / yr (varies)
Python
Generative AI
Microservices
RESTful APIs
MongoDB
+3 more

We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.


The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.


Key Responsibilities:


Backend Development

  • Design and maintain high-performance backend services using Python and FastAPI
  • Implement advanced FastAPI features such as dependency injection, middleware, and async programming
  • Write comprehensive unit tests using pytest
  • Design and maintain Pydantic schemas

High-Concurrency Systems

  • Implement asynchronous code for high-volume request processing
  • Apply concurrency patterns and atomic operations to ensure efficient system performance
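As a flavor of the asynchronous, high-volume processing described above, here is a minimal asyncio sketch that bounds in-flight work with a semaphore. `handle_request` is a hypothetical stand-in for a real I/O-bound call (database, LLM API).

```python
import asyncio

async def handle_request(i: int) -> int:
    await asyncio.sleep(0.01)  # simulated I/O latency
    return i * 2

async def process_all(n: int, max_concurrency: int = 100) -> list[int]:
    # Semaphore caps concurrent work so a burst of requests cannot
    # exhaust connections or memory.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(i: int) -> int:
        async with sem:
            return await handle_request(i)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(i) for i in range(n)))

results = asyncio.run(process_all(500))
print(len(results), results[:3])  # 500 [0, 2, 4]
```

In FastAPI the same pattern applies inside async endpoints, with the framework's event loop in place of `asyncio.run`.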

Data & Storage

  • Optimize MongoDB operations
  • Implement Redis caching strategies (TTL, performance tuning, caching patterns)
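The caching bullet can be illustrated with a small cache-aside sketch. A plain dict with expiry timestamps stands in for Redis so the example stays self-contained; with redis-py the same read-through logic would sit on top of `GET`/`SETEX`. All names here (`TTLCache`, `get_user`, `fake_db`) are illustrative.

```python
import time

class TTLCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:  # lazy expiry on read
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl: float):
        self._store[key] = (value, time.monotonic() + ttl)

def get_user(cache, user_id, load_from_db):
    cached = cache.get(user_id)
    if cached is not None:
        return cached             # cache hit
    user = load_from_db(user_id)  # cache miss: read through
    cache.set(user_id, user, ttl=30.0)
    return user

db_calls = {"n": 0}

def fake_db(user_id):
    db_calls["n"] += 1
    return {"id": user_id}

cache = TTLCache()
get_user(cache, "u1", fake_db)
get_user(cache, "u1", fake_db)  # second call served from cache
print(db_calls["n"])  # 1
```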

Distributed Systems

  • Implement rate limiting, retry logic, failover mechanisms, and region routing
  • Build microservices and event-driven architectures
  • Work with EventHub, Blob Storage, and Databricks
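Of the patterns listed, retry logic is the easiest to sketch. Below is a minimal retry-with-exponential-backoff-and-jitter helper under simplifying assumptions: only `ConnectionError` counts as transient, and there is no cap on total elapsed time.

```python
import random
import time

def retry(fn, attempts: int = 5, base_delay: float = 0.05):
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            # exponential backoff with full jitter avoids retry storms
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(retry(flaky))  # ok (succeeds on the third attempt)
```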

AI/ML Integration

  • Integrate OpenAI API, Gemini API, and Claude API
  • Manage LLM integrations using LiteLLM
  • Optimize AI service usage within the Azure ecosystem

Security

  • Implement JWT authentication
  • Manage API keys and encryption protocols
  • Implement PII masking and data security mechanisms
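To illustrate the JWT flow above, here is a minimal HS256 sign/verify sketch built only on the standard library; in a real service you would use a maintained library such as PyJWT rather than hand-rolling this, and this sketch skips header validation, key rotation, and clock skew.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    msg = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, msg, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    msg = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, msg, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time compare
        raise ValueError("bad signature")
    payload = json.loads(_b64url_decode(body))
    if payload.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return payload

secret = b"demo-secret"
token = sign({"sub": "user-1", "exp": time.time() + 60}, secret)
print(verify(token, secret)["sub"])  # user-1
```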

Collaboration

  • Work with cross-functional teams on architecture and system design
  • Contribute to engineering best practices and technical improvements
  • Mentor junior developers where required

Must-Have Skills & Requirements

Experience

  • 7+ years of hands-on Python backend development
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Experience building high-traffic, scalable systems

Core Technical Skills

Python

  • Advanced knowledge of asynchronous programming, concurrency, and atomic operations

FastAPI

  • Expert-level experience with dependency injection, middleware, and async code

Testing

  • Strong experience with pytest and Pydantic schemas

Databases

  • Hands-on experience with MongoDB and Redis
  • Strong understanding of caching patterns, TTL, and performance optimization

Distributed Systems

  • Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing

Microservices

  • Experience building microservices and event-driven systems
  • Exposure to EventHub, Blob Storage, and Databricks

Cloud

  • Strong experience working in Azure environments

AI Integration

  • Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM

Security

  • Implementation experience with JWT authentication, API keys, encryption, and PII masking

Soft Skills

  • Strong problem-solving and debugging skills
  • Excellent communication and collaboration
  • Ability to manage multiple priorities
  • Detail-oriented approach to code quality
  • Experience mentoring junior developers

Good-to-Have Skills

Containerization

  • Docker, Kubernetes (preferably within Azure)

DevOps

  • CI/CD pipelines and automated deployment

Monitoring & Observability

  • Experience with Grafana, distributed tracing, custom metrics

Industry Experience

  • Experience in Insurance, Financial Services, or regulated industries

Advanced AI/ML

  • Vector databases
  • Similarity search optimization
  • LangChain / LangSmith

Data Processing

  • Real-time data processing and event streaming

Database Expertise

  • PostgreSQL with vector extensions
  • Advanced Redis clustering

Multi-Cloud

  • Experience with AWS or GCP alongside Azure

Performance Optimization

  • Advanced caching strategies
  • Backend performance tuning
Tap Invest
Posted by Anusree TP
Bengaluru (Bangalore)
1 - 2 yrs
₹3L - ₹5L / yr
SQL
Python
pandas
Data Analytics
Business Analysis

As an Analyst at Tap Invest, you’ll turn data into decisions. You’ll work with teams across Product, Ops, Marketing, and Sales to uncover insights, solve real business problems, and drive strategy.

This role is for someone who is comfortable working with data independently and can support business teams with reliable analysis and reporting.

Key Responsibilities

● Gather, organize, and clean data from various sources including databases, spreadsheets, and external sources to ensure accuracy and completeness.
● Write SQL queries to pull, validate, and clean data from production databases.
● Build and maintain dashboards, and generate KPI reports. Track performance against targets and identify areas for optimization.
● Analyze user funnels and investment patterns to surface actionable insights.
● Prepare and present clear, concise reports and visualizations to communicate findings and recommendations to stakeholders across teams.
● Document data definitions, metrics, and assumptions clearly for consistency and reuse.

What We’re Looking For

● 1 to 2 years of experience in Data Analytics, Business Analytics, or a similar role.
● Comfortable writing in SQL and validating queries.
● Solid with Excel / Google Sheets (pivot tables, lookups, charts).
● Genuine curiosity about how businesses use data to make decisions.
● Experience with scripts for data automations.
● Prior projects involving production datasets.

Nice to Have

● Familiarity with pandas or any data manipulation library for advanced automations.
● Interest in capital markets, bonds, fixed income, or FinTech.
● Exposure to AI tools.

Bengaluru (Bangalore)
5 - 8 yrs
₹38L - ₹45L / yr
Node.js
Python
Field Engineer
Forward Deployed
Docker
+1 more

Role & Responsibilities

Own the Client’s Outcome:

  • Embed with enterprise customers – on-site and remotely – to understand their supply chain operations, data estate, and what success actually looks like for their business.
  • Scope and design technical solutions for messy, real-world logistics problems – with a clear line to measurable impact: cost per delivery, SLA performance, empty kilometres.
  • Own the full deployment lifecycle: architecture through go-live through steady-state. You’re accountable for the outcome, not just the code.

Build and Ship:

  • Design, build, and maintain backend services in Node.js or Python that power routing, planning, and execution at enterprise scale.
  • Build and own the integrations connecting Locus to client ERPs, TMS, WMS, and OMS platforms – these integrations are often the riskiest part of a deployment.
  • Write production code that runs under real load. If it isn’t in production, it hasn’t shipped.

Be the Technical Interface with the Client:

  • Run architecture reviews, lead integration workshops, and represent Locus in executive steering meetings. You need to be credible at every level of the client organisation.
  • Bring field learnings back into the product and platform teams. Some of Locus’s best features started as a client workaround.
  • Push back when a client request would compromise platform integrity – and propose a better alternative.

Show Up On-Site:

  • Travel to client sites – domestic and international, up to ~30% of the time – for kick-offs, integration sprints, go-lives, and post-live reviews.
  • Build the kind of relationship where the client’s ops lead calls you directly when something goes wrong at 2am, not a support ticket.
  • Be comfortable wherever the work is: a warehouse floor, a logistics control tower, a C-suite boardroom.

Make the Next Deployment Easier:

  • Document architecture decisions, integration patterns, and deployment playbooks – every engagement should make the next one faster.
  • Work closely with Product, Customer Success, and Platform Engineering. Share what you’re seeing in the field; don’t wait to be asked.
  • Mentor junior FDEs and raise the technical bar across the team.

Ideal Candidate

  • Strong Forward Deployed / Field Engineer
  • Mandatory (Experience 1): Must have 5+ years of backend engineering experience with hands-on coding in Node.js or Python, building production-grade systems
  • Mandatory (Experience 2): Must have minimum 2+ years in client-facing / deployment-heavy roles, where they worked directly with enterprise customers
  • Mandatory (Experience 3): Must have experience shipping and owning production systems end-to-end: From design → build → deployment → post-production support
  • Mandatory (Tech Skills 1 - Backend & Systems): Strong in: Node.js or Python (must-have), Building scalable backend services
  • Mandatory (Tech Skills 2 - Integrations): Must have experience with: Enterprise integrations (APIs, third-party systems), Systems like ERP / TMS / WMS / OMS
  • Mandatory (Tech Skills 3 - Data & Messaging): Hands-on with: Relational + NoSQL databases, Event streaming / queues (Kafka / RabbitMQ or similar)
  • Mandatory (Tech Skills 4 - Cloud & Deployment): Experience with: Cloud platforms (AWS / GCP / Azure), Docker + Kubernetes (or containerised deployments)
  • Mandatory (Company): Top Product companies / Startups / SaaS / platform companies


ThoughtsCrest Software

Posted by Ariba Khan
Bengaluru (Bangalore)
6 - 15 yrs
Best in industry
Agentic AI
Generative AI
Large Language Models (LLM)
Python
Machine Learning (ML)

About the Role

We are looking for a hands-on AI Agentic Lead to drive Agentic AI implementations on the Lyzr platform and lead in-house Agentic AI infusion into our products. This role is ideal for someone who combines strong technical depth with product thinking and has experience taking AI solutions from concept to deployment.


What We Are Looking For

  • 6 to 15 years of overall experience
  • At least 2 years of Agentic AI experience with product deployment exposure
  • Strong experience in designing, building, and deploying AI agents/workflows for real business use cases
  • Ability to lead architecture, development, deployment, and optimization of agentic solutions
  • Strong problem-solving, ownership, and stakeholder-handling skills
  • Must be willing to work from office (WFO) in Bengaluru


Key Responsibilities

  • Lead end-to-end delivery of Agentic AI solutions on the Lyzr platform
  • Drive Agentic AI adoption across in-house products
  • Design multi-agent workflows, orchestration patterns, tool usage, memory, guardrails, and evaluation approaches
  • Work closely with product, business, and engineering teams to identify high-impact AI use cases
  • Build scalable, production-ready solutions with focus on reliability, performance, and business value
  • Mentor the team and shape best practices for Agentic AI delivery


Preferred Skills

  • Hands-on experience with LLMs, AI agents, RAG, orchestration frameworks, prompt design, tool calling, and evaluation
  • Exposure to production deployments, monitoring, debugging, and optimization of AI systems
  • Experience integrating AI into enterprise products/platforms
  • ML background is a plus, but not mandatory


Why Join Us

  • Opportunity to work on live Agentic AI implementations
  • Play a key role in building next-generation AI capabilities for both client solutions and internal products
  • High ownership, strong growth opportunity, and direct impact on product direction
Wissen Technology

Posted by Shrutika SaileshKumar
Bengaluru (Bangalore)
6 - 8 yrs
Best in industry
Snowflake
Data Transformation Tool (DBT)
SQL
Snowflake schema
Python
+1 more

JD:

We are looking for a strong Data Engineer with hands-on experience in building pipelines using Snowflake and DBT.

Key Responsibilities:

  • Develop, maintain, and optimize data pipelines using DBT and SQL on Snowflake DB.
  • Collaborate with data analysts, QA and business teams to build scalable data models.
  • Implement data transformations, testing, and documentation within the DBT framework.
  • Work on Snowflake for data warehousing tasks, including data ingestion, query optimization, and performance tuning.
  • Use Python (preferred) for automation, scripting, and additional data processing as needed.

Required Skills:

  • 6+ years of experience in building data engineering pipelines.
  • Strong hands-on expertise with DBT and advanced SQL.
  • Experience working with modern columnar/MPP data warehouses, preferably Snowflake.
  • Knowledge of Python for data manipulation and workflow automation (preferred).
  • Good understanding of data modeling concepts, ETL/ELT processes, and best practices.
reodev
Posted by Richa Kukar
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹38L / yr
Go Programming (Golang)
Python

Backend Engineer at Reo.Dev : Job Description

[Disclaimer: This is a longish read. However, we felt you might be interested to read in detail, about what you could be doing for the next 5ish years 😊]

Job Function: Backend Engineer

Experience: 2 – 4 years [number of years of experience is not a filter]

Salary and Incentives: Open for discussion

Location: Bangalore, India [Hybrid work - Remote + Office]

👋 Meet Reo.Dev

  • Reo.Dev was founded in January 2023. So we are quite young 😊
  • Reo was started by Achintya, Gaurav and Piyush – All of them have successfully built companies before [more on the Founding team below]
  • We are building a Revenue Operating System for the Developer Focussed Companies (Think of us like a 6sense.com for Dev Focussed Companies).
  • What we are building is quite innovative. Currently, no other company offers the capabilities Reo.Dev is building.
  • We recently closed our Seed round with top early stage investors (not disclosed yet)
TIFIN FINTECH INDIA

Posted by Vrishali Mishra
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹30L / yr
Python
Go Programming (Golang)
Generative AI
Prompt engineering

About TIFIN

TIFIN is an AI-first fintech platform transforming wealth management through data science, machine learning, and intelligent automation. With strong global backing and a rapidly growing India hub, TIFIN is building scalable, next-gen financial products used by global institutions.


Role Overview

We are looking for a Senior Software Engineer with strong backend and AI integration experience to build scalable, high-performance systems. This role involves working closely with product, data science, and AI teams to develop intelligent platforms leveraging modern technologies and LLMs.


Key Responsibilities

  • Design, develop, and scale backend systems and APIs using Golang and Python
  • Build and integrate AI-driven features, including prompt-based workflows (Claude or similar LLMs)
  • Work with MongoDB and Elasticsearch for high-performance data handling and search capabilities
  • Optimize system performance, scalability, and reliability
  • Collaborate with cross-functional teams (Product, AI/ML, Data Engineering)
  • Contribute to architecture decisions and best engineering practices
  • Write clean, maintainable, and production-grade code


Required Skills & Experience

  • 3–5 years of experience in backend engineering
  • Strong proficiency in Golang and/or Python
  • Hands-on experience with MongoDB and Elasticsearch
  • Experience working with LLMs / AI tools (Claude, OpenAI, etc.) and prompt engineering
  • Good understanding of REST APIs, microservices architecture, and distributed systems
  • Strong problem-solving and debugging skills


Good to Have

  • Experience in fintech / SaaS platforms
  • Exposure to AI/ML pipelines or data platforms
  • Knowledge of cloud platforms (AWS/GCP/Azure)
  • Familiarity with CI/CD and DevOps practices



TalentXO
Bengaluru (Bangalore)
6 - 9 yrs
₹36L - ₹45L / yr
Python
TypeScript
NodeJS (Node.js)
React.js
fullstack profile
+2 more

Role & Responsibilities

We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.

The Ideal Candidate Will Be Able To:

  • Take ownership of delivering performant, scalable, high-quality cloud-based software, on both the frontend and backend.
  • Mentor team members to develop in line with product requirements.
  • Collaborate with the Senior Architect on design and technology choices for the product development roadmap.
  • Conduct code reviews.

Ideal Candidate

  • Strong Software Engineer fullstack profile using NodeJS / Python and React
  • Mandatory (Experience) - Must have 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
  • Mandatory (Core Skills 1): Must have strong experience in working on Typescript
  • Mandatory (Core Skills 2): Must have experience in message-based systems like Kafka, RabbitMQ, Redis
  • Mandatory (Core Skills 3): Databases - PostgreSQL & NoSQL databases like MongoDB
  • Mandatory (Company) - Product Companies Only
  • Mandatory (Education) - B.Tech or Dual degree (Btech and Mtech or Integrated Msc/MS) from Tier 1 Engineering Institutes. Candidates from other institutions will not be considered unless they come from top-tier product companies
  • Mandatory (Note) : This role is a hybrid role (2 days WFO)
  • Preferred (Experience): Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
  • Preferred (Mentoring): Experience in mentoring, coaching the team.


Wissen Technology

Posted by Shrutika SaileshKumar
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹28L / yr
SQL
Python
Informatica
Data Transformation Tool (DBT)

Job Description:

We are looking for a skilled Database Developer with strong hands-on experience in SQL + Informatica and programming knowledge in Java or Python. The ideal candidate will design, develop, and maintain robust ETL pipelines and database solutions while collaborating with cross-functional teams to support business data needs and analytics initiatives. 

Key Responsibilities: 

  • Design, develop, and optimize SQL queries, stored procedures, triggers, and views for high performance and scalability. 
  • Develop and maintain ETL workflows using Informatica PowerCenter (or Informatica Cloud). 
  • Integrate and automate data flows between systems using Java or Python for custom scripts and applications. 
  • Perform data analysis, validation, and troubleshooting to ensure data accuracy and consistency across systems. 
  • Work closely with business analysts, data engineers, and application teams to understand data requirements and translate them into efficient database solutions. 
  • Implement performance tuning, query optimization, and indexing strategies for large datasets. 
  • Maintain data security, compliance, and documentation of ETL and database processes. 

Required Skills & Experience: 

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • 5–8 years of hands-on experience as a SQL Developer or ETL Developer.
  • Strong proficiency in SQL (Oracle, SQL Server, or PostgreSQL).
  • Hands-on experience with Informatica PowerCenter / Informatica Cloud.
  • Programming experience in Java or Python (for automation, data integration, or API handling).
  • Good understanding of data warehousing concepts, ETL best practices, and performance tuning.
  • Experience working with version control systems (e.g., Git) and Agile/Scrum methodologies.

Good to Have: 

  • Exposure to cloud data platforms (AWS, Azure, or GCP). 
  • Familiarity with Unix/Linux scripting.
  • Experience in data modeling and data governance frameworks.

 

INI8 Labs
Shwetha K
Posted by Shwetha K
Bengaluru (Bangalore)
4.5 - 8 yrs
₹20L - ₹38L / yr
Python
Microservices
Test Automation (QA)
API
API testing

Job Title: Test Automation Engineer
Location: Bangalore
Experience: 6+ years
Immediate joiners are preferred.

We're building systems where correctness, performance, and reliability are non-negotiable. We need an engineer who treats testing as a first-class discipline - not a checklist activity.

About the role:
  ● This is not a test-case writing role. You'll own the entire approach to testing — architecture, tooling, and outcomes.
  ● You'll work across backend-heavy, distributed systems where failures are nuanced and the stakes are real.
  ● You'll have direct access to leadership, no layers, and genuine influence over engineering quality standards.

What we're looking for:
  ● Ownership mindset. You own outcomes, not just test cases. You identify gaps without being asked.
  ● Engineering depth. You design test systems, not just scripts. You think in architectures.
  ● Systems intuition. You understand how distributed systems fail at scale — not just on the happy path.
  ● Observability fluency. You're comfortable with logs, metrics, and tracing to debug failures in production-like environments.
  ● Self-direction. You figure things out and move. You don't wait for instructions.
  ● AI-augmented workflow. You use AI tools intelligently to accelerate your work — not as a substitute for thinking.

What you'll work on:
  ● End-to-end test automation for backend-heavy, distributed systems
  ● Building test frameworks for APIs, microservices, and event-driven architectures
  ● Load testing, failure scenario simulation, and edge-case validation
  ● Deep CI/CD pipeline integration — testing as a continuous engineering activity
  ● Kernel, firmware, and hardware-level validation (advantageous, not required)

Tech Stack & Expectations:
  ● Languages: Python / Go (or strong scripting expertise)
  ● Frameworks: PyTest, custom frameworks, or similar
  ● Infrastructure: Docker, Kubernetes
  ● Systems: Distributed systems, APIs, async/event-driven architectures
  ● Databases: PostgreSQL, Redis
  ● Messaging: Kafka, NATS, RabbitMQ
  ● Observability: Logging, metrics, tracing
  ● Bonus: Experience with kernel-level or firmware-level testing

What you will get:
  ● Full ownership over how testing is designed and implemented — your decisions stick
  ● Hard, interesting problems on systems where quality genuinely matters
  ● Direct access to leadership with no bureaucratic layers
  ● Early-stage influence — your work defines engineering quality standards here
  ● Compensation calibrated to your level of expertise
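As a flavor of the failure-path testing this role emphasizes, here is a minimal pytest-style sketch that checks idempotency under simulated broker redelivery. `FakeQueue` is a made-up stand-in for a real consumer, and plain asserts keep it runnable outside pytest too.

```python
class FakeQueue:
    """Made-up stand-in for a message consumer with dedup state."""

    def __init__(self):
        self.processed = set()

    def handle(self, message_id: str) -> bool:
        """Process a message at most once; redeliveries are no-ops."""
        if message_id in self.processed:
            return False  # duplicate delivery, skipped
        self.processed.add(message_id)
        return True

def test_redelivery_is_idempotent():
    q = FakeQueue()
    assert q.handle("m1") is True
    assert q.handle("m1") is False   # simulated broker redelivery
    assert q.processed == {"m1"}

test_redelivery_is_idempotent()  # runs standalone; pytest would collect it
```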

Leading provider of Capital Market solutions in India


Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
4 - 7 yrs
₹12L - ₹18L / yr
Python
Go Programming (Golang)
Docker
Kubernetes
Object Oriented Programming (OOPs)
+2 more

Core Responsibilities:

  • Design & Development: Architect and implement scalable backend services and APIs using Python or Golang, ensuring high performance, resilience, and extensibility.
  • System Ownership: Take end-to-end ownership of critical modules, from design and development to deployment and support.
  • Technical Leadership: Conduct design and code reviews, enforce best practices, and mentor junior engineers to raise the team’s technical bar.
  • Collaboration: Work closely with product managers, architects, and other engineers to translate business requirements into technical solutions.
  • Performance & Reliability: Troubleshoot complex issues in production systems, identify root causes, and design sustainable long-term solutions.
  • Innovation: Evaluate new technologies, contribute to proof-of-concepts, and recommend tools that can improve developer productivity.
  • Process Improvement: Drive initiatives to improve coding standards, CI/CD pipelines, and automated testing practices.
  • Knowledge Sharing: Document designs, create technical guides, and share insights with the broader engineering team.


Experience and Expertise:

  • 4–7 years of backend development experience with Python or Golang.
  • Strong expertise in designing, developing, and scaling microservices and distributed systems.
  • Solid understanding of concurrency, multi-threading, and performance optimization.
  • Proficiency with databases (SQL/NoSQL), caching systems (Redis, Memcached), and messaging systems (Kafka, RabbitMQ, etc.).
  • Hands-on experience with Linux development, Docker, and Kubernetes.
  • Familiarity with cloud platforms (AWS/GCP/Azure) and related services.
  • Strong debugging, profiling, and optimization skills for production-grade systems.
  • Experience with AI-powered development tools is a strong plus; familiarity with concepts like 'agentic coding' for workflow automation or 'context engineering' for leveraging LLMs in system design is highly desirable.


Skills:

  • Strong problem-solving ability, with experience handling complex technical challenges.
  • Ability to lead technical initiatives and mentor junior engineers.
  • Excellent communication skills to collaborate with cross-functional teams and articulate trade-offs.
  • Self-motivated, proactive, and able to operate independently while aligning with team goals.
  • Passionate about engineering culture, quality, and developer productivity.


Read more
Leading provider of Capital Market solutions in India


Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹12L / yr
skill iconPython
skill iconGo Programming (Golang)
Linux/Unix
skill iconDocker
skill iconKubernetes
+3 more

Core Responsibilities:

  • Design, develop, and maintain backend services and APIs using Python or Golang.
  • Write high-quality, testable, and maintainable code with a focus on performance and scalability.
  • Implement automated tests and contribute to CI/CD pipelines.
  • Collaborate with product, QA, and DevOps teams for end-to-end feature delivery.
  • Troubleshoot production issues and provide timely resolutions.
  • Participate in design and architecture discussions to improve system efficiency.
  • Contribute to improving development processes, coding standards, and best practices.


Experience and Expertise:

  • 2–4 years of experience in backend development with Python or Golang.
  • Solid understanding of RESTful APIs, microservices, and distributed systems.
  • Strong knowledge of data structures, algorithms, and OOP principles.
  • Hands-on experience with relational and/or NoSQL databases.
  • Familiarity with Linux development, Docker, and basic cloud concepts (AWS/GCP/Azure).
  • Proficiency with Git and version control workflows.
  • Familiarity with AI-powered development tools or exposure to projects involving large language models (LLMs) is a plus.


Skills:

  • Strong analytical and debugging skills with the ability to solve complex problems.
  • Good communication and collaboration skills across teams.
  • Ability to work independently with minimal supervision while being a strong team player.
  • Growth mindset – eagerness to learn new technologies and improve continuously.


Read more
Leading provider of Capital Market solutions in India


Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
1 - 2 yrs
₹2L - ₹7L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconDocker
skill iconKubernetes
Linux/Unix
+3 more

Core Responsibilities:

  • Design, develop, and maintain backend services using Python or Golang.
  • Write clean, efficient, and well-documented code following best practices.
  • Build and consume RESTful APIs and microservices.
  • Collaborate with QA, DevOps, and product teams for smooth feature delivery.
  • Participate in peer code reviews and technical discussions.
  • Debug and fix issues, ensuring system stability and performance.
  • Continuously learn and apply new technologies and tools in backend development.


Experience and Expertise:

  • 0–2 years of software development experience (internships or projects acceptable).
  • Proficiency in at least one backend programming language (Python or Golang).
  • Strong understanding of object-oriented programming and software fundamentals.
  • Knowledge of data structures, algorithms, and database concepts.
  • Familiarity with Linux-based development environments.
  • Exposure to Git and version control workflows.


Skills:

  • Strong analytical and problem-solving ability.
  • Willingness to learn, adapt, and take ownership.
  • Effective communication and teamwork skills.
  • Curiosity for emerging technologies, including AI-driven development, backend technologies, distributed systems, and modern engineering practices.
Read more
Bengaluru (Bangalore), Pune, Chennai
1 - 3 yrs
₹3L - ₹4L / yr
skill iconPython
Shell Scripting
IP Networking
Application Deployment

Application Deployment Engineers / Deployment Engineer – Video Analytics / CCTV Solutions / Application Implementation Engineer


Company Name

Paralaxiom Technologies Private Limited

Company Website

https://www.vast.vision/

https://www.linkedin.com/company/paralaxiom


Company details

Paralaxiom Technologies applies deep learning algorithms to build video analytics-based security and compliance applications. The company offers OCR products and image classification tools enhanced by machine learning algorithms and robust statistical analysis. We are among the earliest practitioners of AI software and have world-class credentials in these technologies. Our products include Paralaxiom VAST (Video Analytics and Surveillance Toolkit) and Paralaxiom AMPLE (the Paralaxiom natural language processing platform).


In today's world, premises of all kinds (manufacturing plants, hospitals, offices, hotels, cities, airports, shops, and warehouses) are covered by CCTV cameras. Continuous monitoring through a dedicated command center or e-surveillance is proving to be both ineffective and costly to manage.

We have pioneered the use of AI/ML technologies to monitor live CCTV feeds headlessly, generating highly accurate alerts, alarms, and insights and delivering them directly to the right stakeholders for quick, proactive action.

We have worked closely with hundreds of customers from diverse industries, AI Hardware Partners, CCTV OEMs, VMSs, System Integrators & Consultants to bring to the world VAST, an enterprise-ready Video Surveillance as a Service (VSaaS) solution.


Location: Pune / Bangalore / Chennai

Mode of Working: Work From Office 

Days of Working: 5 Days a week


Responsibilities

Position Overview:

Paralaxiom is a video analytics and machine vision company whose VAST product line is a path-breaking product for safety, security, and operations built on CCTV infrastructure.

We are looking for application engineers for this product line.

Experience: 1-2 years

Key Responsibilities:

1) Gathering information from customers on their needs and understanding how VAST software matches their requirements

2) Designing the solution and installing the software

3) Ensuring the VAST software continues to work properly through maintenance and testing

4) Documenting all aspects of the application for future upgrades and maintenance

5) Troubleshooting the software

6) Training the end users

7) Excellent knowledge of Python and shell scripting

8) Working knowledge of IP networking and troubleshooting
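Deployment work of this kind usually starts with confirming that IP cameras are reachable on the network. The sketch below is a hypothetical, stdlib-only Python helper (not part of the VAST toolkit; host addresses and function names are made up) that probes hosts on the default RTSP port:

```python
import socket

def check_camera_port(host: str, port: int = 554, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (554 is the default RTSP port)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable host, or timeout.
        return False

def scan_cameras(hosts, port: int = 554) -> dict:
    """Map each host to its reachability status."""
    return {host: check_camera_port(host, port) for host in hosts}
```

A real installation check would go further (e.g. an RTSP DESCRIBE request or NVR channel status), but a reachability sweep like this quickly separates network problems from application problems.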

We need someone with 1-2 years of experience with the following skillset:

Great communication skills

Debugging and analytical skills

Knowledge of hardware and software integration will be a plus

Knowledge of Camera NVR will be a plus

Hands-on system and functional testing will be a plus


Interview process: 3 Video + Final Discussion - F2F


Read more
Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Best in industry
skill iconPython
SQL
ETL
Google Cloud Platform (GCP)
Windows Azure
+1 more

We are seeking a skilled Data Engineer to join the AI Platform Capabilities team supporting the UDP Uplift program.

In this role, you will design, build, and test standardized data and AI platform capabilities across a multi-cloud environment (Azure & GCP).

You will collaborate closely with AI use case teams to develop:

  • Scalable data pipelines
  • Reusable data products
  • Foundational data infrastructure

Your work will support advanced AI solutions such as:

  • GenAI
  • RAG (Retrieval-Augmented Generation)
  • Document Intelligence

Key Responsibilities

  • Design and develop scalable ETL/ELT pipelines for AI workloads
  • Build and optimize data pipelines for structured & unstructured data
  • Enable context processing & vector store integrations
  • Support streaming data workflows and batch processing
  • Ensure adherence to enterprise data models, governance, and security standards
  • Collaborate with DataOps, MLOps, Security, and business teams (LBUs)
  • Contribute to data lifecycle management for AI platforms
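As a sketch of one small building block behind such pipelines, the hypothetical helper below splits a document into overlapping windows, the kind of context processing a RAG pipeline performs before embedding chunks into a vector store (names and sizes are illustrative, not taken from any specific platform):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding/indexing.

    Overlap preserves context that would otherwise be cut at chunk boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already covers the tail of the text
    return chunks
```

Production systems usually chunk on token or sentence boundaries rather than raw characters, but the windowing logic is the same.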

Required Skills

  • 5–7 years of hands-on experience in Data Engineering
  • Strong expertise in Python and advanced SQL
  • Experience with GCP and/or Azure cloud-native data services
  • Hands-on experience with PySpark / Spark SQL
  • Experience building data pipelines for ML/AI workloads
  • Understanding of CI/CD, Git, and Agile methodologies
  • Knowledge of data quality, governance, and security practices
  • Strong collaboration and stakeholder management skills

Nice-to-Have Skills

  • Experience with Vector Databases / Vector Stores (for RAG pipelines)
  • Familiarity with MLOps / GenAIOps concepts (feature stores, model registries, prompt management)
  • Exposure to Knowledge Graphs / Context Stores / Document Intelligence workflows
  • Experience with DBT (Data Build Tool)
  • Knowledge of Infrastructure-as-Code (Terraform)
  • Experience in multi-cloud deployments (Azure + GCP)
  • Familiarity with event-driven systems (Kafka, Pub/Sub) & API integrations

Ideal Candidate Profile

  • Strong data engineering foundation with AI/ML exposure
  • Experience working in multi-cloud environments
  • Ability to build production-grade, scalable data systems
  • Comfortable working in cross-functional, fast-paced environments
Read more
Bengaluru (Bangalore)
2 - 5 yrs
₹20.4L - ₹24L / yr
skill iconPython
API
SQL
Systems design
Software deployment

Location: Bangalore

Experience: 2–5 years

Type: Full-time | On-site

Open Roles: 2

Start: Immediate

Why this role exists

Most systems work at a low scale.

Very few survive real production load, complex workflows, and enterprise edge cases.

We are building a platform that must:

  • Scale from 500K → 20M+ interactions/month
  • Handle complex insurance workflows reliably
  • Become easier to deploy as it grows, not harder

This role exists to build the backend foundation that makes this possible.

What you’ll do

You will not just write services.

You will design and own core platform systems.

1. Scale the platform without breaking architecture

  • Scale from 50K → 2M+ interactions/month
  • Ensure:
      • High availability
      • Low latency
      • Fault tolerance
  • Avoid large rewrites — build systems that evolve cleanly

2. Build the workflow automation (WA) engine

  • Design a flexible system with:
      • States
      • Stages
      • Cohorts
      • Dynamic workflows
  • Ensure workflows:
      • Handle edge cases reliably
      • Can be configured easily
  • Move from:
      • Hardcoded flows → configurable execution engine
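The states/stages/dynamic-workflow requirement above amounts to replacing hardcoded flows with a data-driven state machine. A minimal sketch, with hypothetical insurance states and events (none of these names come from the actual platform):

```python
# Hypothetical transition table: (current_state, event) -> next_state.
# Changing a workflow means editing this configuration, not the code.
WORKFLOW = {
    ("new", "kyc_done"): "verified",
    ("verified", "policy_issued"): "active",
    ("active", "claim_filed"): "in_claim",
    ("in_claim", "claim_settled"): "active",
    ("active", "policy_expired"): "closed",
}

class WorkflowEngine:
    """Drives an entity through configured states instead of hardcoded branches."""

    def __init__(self, transitions, initial: str = "new"):
        self.transitions = transitions
        self.state = initial

    def handle(self, event: str) -> str:
        key = (self.state, event)
        if key not in self.transitions:
            # Edge case: reject events that are invalid in the current state.
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state
```

Cohort-specific behaviour then becomes a matter of loading a different transition table per cohort rather than forking the code.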

3. Build the insurance-specific data layer

  • Design data models for:
      • Policy states
      • Claim workflows
      • Consent tracking
  • Ensure the system works across:
      • Multiple insurers
      • Multiple use cases
  • Build a platform-first data layer, not use-case-specific hacks

4. Make deployment and setup simple

  • Ensure workflows and data models are:
      • Easy to configure
      • Easy to launch
  • Reduce friction for:
      • Product teams
      • Deployment teams

5. Create a compounding data advantage

  • Build a data layer that:
      • Improves with every deployment
      • Captures structured signals
  • Ensure data becomes a long-term edge, not just storage

6. Own production reliability

  • Participate in on-call rotation across 3 engineers
  • Ensure:
      • Incidents are handled quickly
      • Root causes are fixed permanently
  • Build systems where reliability is shared, not individual

What success looks like

  • Platform scales to 2M+ interactions/month smoothly
  • Workflow engine supports complex, dynamic use cases
  • Data layer enables fast deployment across accounts
  • Edge cases are handled without constant firefighting
  • System becomes easier to use as it grows
  • Production issues are rare and predictable

Who you are

  • You have 2-5 years of backend engineering experience
  • You have built:
      • Scalable systems
      • Distributed services
  • You think in:
      • Systems
      • Data models
      • Trade-offs
  • You are comfortable owning:
      • Architecture
      • Production systems

What will make you stand out

  • Experience building:
      • Workflow engines
      • State machines
      • Data-heavy platforms
  • Strong understanding of:
      • System design
      • Distributed systems
      • Failure handling
  • Experience working in high-scale production environments

Why join

  • You will build the core backend of an AI platform
  • Your work directly impacts:
      • Scale
      • Reliability
      • Product capability
  • You will design systems that move from use-case-specific to platform-level infrastructure

What this role is not

  • Not just API development
  • Not limited to feature-level work
  • Not disconnected from production realities

What this role is

  • A system architect
  • A builder of scalable platforms
  • A driver of long-term technical advantage

One question to self-evaluate

Can you design backend systems that scale, handle edge cases, and become easier to use as they grow?


Read more
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Selenium
skill iconJava
skill iconPython
Test Automation (QA)
Mobile App Testing (QA)
+2 more

Role Overview


As a Senior QA Engineer (Automation), you will drive product quality across all stages of development and deployment. You’ll take complete ownership of defining QA strategy, implementing robust automation frameworks, and ensuring every release meets our high standards of reliability, performance, and user delight.


This role is ideal for someone who thrives in a fast-paced startup environment, loves solving problems, and is passionate about building scalable and flawless user experiences.


Key Responsibilities

Define & Execute QA Strategy:

Develop and implement test strategies covering functional, regression, integration, and exploratory testing.


Automation Leadership:

Build and maintain scalable automation frameworks integrated into CI/CD pipelines to improve speed, reliability, and test coverage.


Collaborate Early:

Partner closely with Product and Engineering teams to ensure testable requirements and early QA involvement in the development cycle.


Release Readiness:

Own end-to-end release validation, including regression testing, defect triage, and final sign-off on product quality.


Quality Metrics & Reporting:

Define, track, and communicate key QA metrics (defect leakage, build health, test coverage) to drive data-backed improvements.


Performance & Security Testing:

Conduct basic performance and security validation to ensure system robustness.


Mentorship & Best Practices:

Guide junior QA engineers, promoting test design excellence, automation best practices, and continuous improvement.


Process Optimization:

Continuously enhance QA processes through retrospectives, automation expansion, and shift-left testing principles.


Documentation:

Maintain comprehensive documentation of test cases, strategies, bug reports, and quality incident postmortems.


What We’re Looking For

  • 5 - 10 years of QA experience in product-based startups, ideally in B2C environments.
  • Proven expertise in test automation (e.g., Selenium, Appium, Cypress, Playwright, etc.).
  • Strong understanding of CI/CD pipelines, API testing, and test design principles.
  • Hands-on experience with manual and exploratory testing.
  • Ability to handle multiple projects independently and drive them to completion.
  • High sense of ownership, accountability, and attention to detail.
  • Excellent communication and collaboration skills.
  • Willingness to work from the office (HSR Layout, Bangalore).


Why Join Us

  • Opportunity to impact millions of users in India’s devotional and spiritual space.
  • Work with a talented, passionate, and mission-driven team.
  • High ownership role with end-to-end accountability.
  • Fast-paced, collaborative, and growth-oriented culture.


Build seamless, trusted experiences that bring faith and technology together.


Read more
Optimo Capital

Posted by Ajinkya Pokharkar
Bengaluru (Bangalore)
2 - 4 yrs
₹5.5L - ₹12L / yr
skill iconPython
skill iconReact.js
skill iconJavascript
RESTful APIs
skill iconPostgreSQL
+7 more

About us:

Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).

Our mission is to serve the underserved MSME businesses in India with their credit needs. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap by employing a phygital model (physical branches + digital decision-making).

As a technology and data-first company, tech and data enthusiasts play a crucial role in building the infrastructure at Optimo, and help the company thrive.


What we offer:

Join our dynamic startup team as a Full Stack Developer and play a crucial role in web application & API developments, customer journeys, tech integrations, building robust credit risk and underwriting decision engines, cloud infrastructure, and more.

This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in technology, software, system architecture, and other design aspects bring out the best in you and help us build the best for the company.

This environment will not only offer you a steep learning curve but also allow you to experience the direct impact of your technological contributions. In addition, we offer industry-standard compensation.


What we look for:

We are looking for individuals with strong proficiency in Python, React, and Django. Any experience in a startup, front-end/back-end development, tech-integrations, or open-source contributions will be highly valued.

We focus not only on your skills but also on your attitude and your hunger to learn, grow, lead, and thrive—both as an individual and as part of a team. We encourage taking on challenges, learning new technologies, understanding, building, and implementing them within a short period of time. Your willingness to put in the extra effort to build the best systems will be highly appreciated.


Skills:

Excellent coding proficiency, with the ability to write clean, robust, production-level code. Experience in designing, developing, and maintaining web apps and rule engines is required. At least one year of experience as a developer in any engineering or software-based role is required.


1) Frontend Development

  • JavaScript: Strong proficiency in JavaScript, including ES6+ features
  • React: Experience building complex user interfaces using React and its ecosystem (e.g., Redux, Context API)
  • HTML/CSS: Solid understanding of HTML5 and CSS3 for creating responsive and accessible web pages


2) Backend Development

  • Python: Proficiency in Python for server-side development
  • Django: Working knowledge in Django, Django Rest Framework
  • Flask (or FastAPI): Experience building RESTful APIs using Flask or FastAPI is a plus


3) REST APIs: A strong understanding of APIs is required, along with prior experience in API development or integration. Writing REST APIs from scratch is highly desirable.


4) Databases: A basic understanding of both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases is required. Basic knowledge of database management, optimization, and query design is expected.


5) Git: Proficiency in Git is essential, with experience in branching, merging, pull requests, and conflict resolution. Experience in collaborative projects using Git is highly valued.


6) Good to have: 

  • Basic understanding of data pipelines/ETLs, dashboarding, and AWS is beneficial but not required.
  • Experience in building WhatsApp chat/flow journeys, working with maps, and creating data layers (e.g., Google Maps API, Mapbox) is highly valued (not mandatory).


What you'll be working on:

  1. Design and build systems focused on creating straight-through processes for lending (specifically property loans), from customer onboarding to disbursement, with an emphasis on accurate and efficient credit and risk assessment.
  2. Take projects from ideation to production, including web applications, rule engines, third-party API integrations, and other technology developments.
  3. Take initiative and ownership of engineering projects, ensuring a seamless user experience.
  4. Manage and coordinate the cloud infrastructure and application setup, including source code repositories, CI/CD pipelines, servers, and deployments.


Other Requirements:

  1. Availability for full-time work in Bangalore. Advantage for immediate joiners.
  2. Strong passion for technology and problem-solving.
  3. Ability to translate requirements into intuitive interfaces is highly appreciated 
  4. At least 1 year of industry experience in a technical role specifically as a developer is a must.
  5. Self-motivated and capable of working both independently and collaboratively.



If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.


Read more
Metron Security Private Limited
Posted by Chanchal Kale
Pune, Bengaluru (Bangalore)
2.5 - 6 yrs
₹3L - ₹10L / yr
skill iconNodeJS (Node.js)
skill iconGo Programming (Golang)
skill iconPython
Data Structures
CI/CD
+1 more

Job Summary:


We are looking for a highly motivated and skilled Software Engineer to join our team.

This role requires a strong understanding of the software development lifecycle, proficiency in coding, and excellent communication skills.

The ideal candidate will be responsible for production monitoring, resolving minor technical issues, collecting client information, managing client interactions effectively, and supporting our development team in resolving challenges.



Key Responsibilities:


Client Interaction: Serve as the primary point of contact for client queries, provide excellent communication, and ensure timely issue resolution.

Issue Resolution: Troubleshoot and resolve minor issues related to software applications in a timely manner.

Information Collection: Gather detailed technical information from clients, understand the problem context, and relay the information to the development leads for further action.

Collaboration: Work closely with development leads and cross-functional teams to provide timely support and resolution for customer issues.

Documentation: Document client issues, actions taken, and resolutions for future reference and continuous improvement.

Software Development Lifecycle: Be involved in maintaining, supporting, and optimizing software through its lifecycle, including bug fixes and enhancements.

Automating Redundant Support Tasks (good to have): Should be able to automate redundant, repetitive support tasks.


Required Skills and Qualifications:



Mandatory Skills:


Expertise in at least one object-oriented programming language (Python, Java, C#, C++) or a modern JavaScript framework/runtime (React.js, Node.js).

Good knowledge of data structures and their correct usage.

Open to learn any new software development skill if needed for the project.

Alignment and utilization of the core enterprise technology stacks and integration capabilities throughout the transition states.

Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.

Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.

Good understanding of the implications of design decisions.

Experience architecting & estimating deep technical custom solutions & integrations.



Added advantage:


You have developed software using web technologies.

You have handled a project from start to end.

You have worked in an Agile Development project and have experience of writing and estimating User Stories

Communication Skills: Excellent verbal and written communication skills, with the ability to clearly explain technical issues to non-technical clients.

Client-Facing Experience: Strong ability to interact with clients, gather necessary information, and ensure a high level of customer satisfaction.

Problem-Solving: Quick-thinking and proactive in resolving minor issues, with a focus on providing excellent user experience.

Team Collaboration: Ability to collaborate with development leads, engineering teams, and other stakeholders to escalate complex issues or gather additional technical support when required.



Preferred Skills:


Familiarity with Cloud Platforms and Cyber Security tools: Knowledge of cloud computing platforms and services (AWS, Azure, Google Cloud) and Cortex XSOAR, SIEM, SOAR, XDR tools is a plus.

Automation and Scripting: Experience with automating processes or writing scripts to support issue resolution is an advantage.



Read more
TalentXO
Bengaluru (Bangalore)
4 - 8 yrs
₹27L - ₹30L / yr
Camunda Developer
skill iconPython
Backend Development
Microservices
REST API
+2 more

Role & Responsibilities

We are looking for a hands-on Camunda Developer with strong experience in workflow orchestration and backend development. The ideal candidate should be able to design, build, and optimize end-to-end business processes using Camunda (preferably Camunda 8) and work closely with engineering and business teams to implement scalable and resilient workflows.

Key Responsibilities:

  • Translate business requirements into BPMN workflows using Camunda (preferably Camunda 8)
  • Design and implement end-to-end process orchestration across systems
  • Build and manage service integrations (REST APIs, event-driven systems)
  • Develop and maintain Zeebe workers / microservices (Python)
  • Collaborate with stakeholders to refine workflows and handle edge cases
  • Implement error handling, retries, and compensation mechanisms
  • Analyse and improve workflows for scalability, reliability, and performance
  • Ensure data consistency and idempotent process execution
  • Work with cross-functional teams including data and analytics for process observability
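The error-handling and idempotency responsibilities above follow a common worker pattern: retry transient failures with exponential backoff, and cache results by job key so a redelivered job is not executed twice. The sketch below is generic Python illustrating that pattern, not the actual Camunda/Zeebe client API; `run_task`, `seen`, and the parameters are all hypothetical names:

```python
import time

def run_task(handler, payload, *, key, seen, max_retries=3, base_delay=0.01):
    """Execute a worker task idempotently, retrying transient failures.

    `seen` caches results by job key, so a redelivered job (at-least-once
    delivery) returns the cached result instead of re-executing side effects.
    """
    if key in seen:
        return seen[key]
    for attempt in range(max_retries):
        try:
            result = handler(payload)
            seen[key] = result
            return result
        except Exception:
            if attempt == max_retries - 1:
                raise  # surface to the engine for incident/compensation handling
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```

In a real orchestration system the `seen` cache would live in a shared store keyed by the engine's job key, and the final raise would trigger the BPMN error or compensation path.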

Ideal Candidate

  • Strong Senior Camunda Developer / Workflow Orchestration Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of hands-on experience in backend development and workflow systems, demonstrable through production-grade work on business process automation or backend service development.
  • Mandatory (Experience 2) – Must have strong hands-on experience with Camunda and BPMN 2.0, including designing, building, and deploying end-to-end business process workflows in production.
  • Mandatory (Experience 3) – Must have hands-on experience with Zeebe workers and the Camunda 8 stack, built and maintained as part of real orchestration systems.
  • Mandatory (Experience 4) – Must have strong production-level coding skills in Python, used for building and maintaining Zeebe workers and microservices.
  • Mandatory (Experience 5) – Must have experience designing and working within microservices architecture and distributed systems, with clear understanding of service decomposition, inter-service communication, and distributed system failure modes.
  • Mandatory (Experience 6) – Must have hands-on experience building and consuming REST APIs and working with event-driven systems (message brokers, pub/sub, event streams).
  • Mandatory (Skills) – Must have strong debugging and problem-solving skills in production workflow environments, with specific examples of resolving complex issues such as stuck processes, race conditions, or data inconsistency bugs.
  • Preferred (Experience 1) – Exposure to cloud platforms (AWS / GCP / Azure) and experience with data platforms (e.g., Snowflake).
  • Preferred (Experience 2) – Understanding of finance-related workflows (billing, reconciliation, etc.).


Read more
Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 8 yrs
Best in industry
skill iconNodeJS (Node.js)
skill iconPython
Dialog Flow
rasa
yellow.ai
+1 more

Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.

You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.


Key Responsibilities

  • Design, develop, test, debug, and maintain chatbot and virtual agent applications
  • Collaborate with business stakeholders to define and translate requirements into technical solutions
  • Analyze large volumes of conversational data to improve chatbot accuracy and performance
  • Develop automation workflows for data handling and refinement
  • Train and optimize chatbots using historical chat logs and user-generated content
  • Ensure solutions align with enterprise architecture and best practices
  • Document solutions, workflows, and technical designs clearly

Required Skills

  • Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
  • Experience with one or more AI/NLP platforms such as:
      • Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
      • Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
  • Strong programming knowledge in Python, JavaScript, or Node.js
  • Experience training chatbots using historical conversations or large-scale text datasets
  • Practical knowledge of:
      • Formal syntax and semantics
      • Corpus analysis
      • Dialogue management
  • Strong written communication skills
  • Strong problem-solving ability and willingness to learn emerging technologies

Nice-to-Have Skills

  • Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
  • Experience building voice apps for Amazon Alexa or Google Home
  • Experience with Test-Driven Development (TDD) and Agile methodologies
  • Ability to design and implement end-to-end pipelines for AI-based conversational applications
  • Experience in text mining, hypothesis generation, and historical data analysis
  • Strong knowledge of regular expressions for data cleaning and preprocessing
  • Understanding of API integrations, SSO, and token-based authentication
  • Experience writing unit test cases as per project standards
  • Knowledge of HTTP, REST APIs, sockets, and web services
  • Ability to perform keyword and topic extraction from chat logs
  • Experience training and tuning topic modeling algorithms such as LDA and NMF
  • Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
  • Experience with NLP frameworks such as NLTK and spaCy
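As a rough, stdlib-only illustration of keyword extraction from chat logs (the LDA/NMF topic models listed above would come from scikit-learn in practice), the hypothetical helper below ranks tokens by how many logs they appear in:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real pipelines use fuller lists (e.g. NLTK's).
STOPWORDS = {"the", "a", "an", "is", "to", "my", "i", "and", "of", "for", "it", "on"}

def extract_keywords(chat_logs, top_n: int = 3):
    """Rank candidate topic keywords by document frequency across chat logs."""
    counts = Counter()
    for log in chat_logs:
        # One count per log (document frequency), not per occurrence.
        tokens = set(re.findall(r"[a-z']+", log.lower())) - STOPWORDS
        counts.update(tokens)
    return [word for word, _ in counts.most_common(top_n)]
```

Topic models generalize this idea: instead of ranking single tokens, they learn weighted groups of co-occurring terms across the corpus.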
Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹10L / yr
databricks
PySpark
Apache Spark
ETL
CI/CD
+10 more

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV is Mandatory


Job Description:

* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).
* Develop scalable, high-performance data solutions using Spark distributed processing.
* Lead engineering initiatives focused on automation, performance tuning, and platform modernization.
* Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.
* Collaborate with cross-functional teams to translate business needs into technical solutions.
* Ensure data quality, governance, and security across all processes.
* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.
* Participate in code reviews and develop reusable engineering frameworks.
* Knowledge of utilizing AI tools to improve productivity and support daily engineering activities.
* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation.

Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.
* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).
* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).
* Strong proficiency in Python for data processing, automation, and framework development.
* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.
* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.
* Strong experience with CI/CD and Git-based development workflows.
* Proficiency in data modeling and ETL/ELT pipeline design.
* Experience with automation frameworks and scheduling tools.
* Solid understanding of distributed systems and big data concepts.

Bootlabs Technologies Private Limited

Posted by Aakanksha Soni
Mumbai, Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Amazon Web Services (AWS)
Python
ECS
AWS IAM
Amazon S3

Job Title: AWS DevOps Engineer (MLOps)

We are looking for a highly skilled AWS + MLOps Engineer to design, build, and maintain scalable machine learning infrastructure and pipelines on AWS. The ideal candidate will have strong expertise in DevOps practices, cloud architecture, and MLOps frameworks, along with solid Python programming skills.

Job Description:

We are looking for an experienced AWS DevOps Engineer to join our team. You will be responsible for building and optimising CI/CD pipelines, managing AWS infrastructure, and automating tasks using AWS services.

Key Responsibilities:

  • CI/CD Pipelines: Develop CI/CD pipelines with AWS CodePipeline, build ECR images, and update services on ECS.
  • Automation: Create Python Lambda functions for automation and AWS Batch jobs for GPU processing.
  • Infrastructure Management: Manage AWS infrastructure using Terraform (IAM roles, RDS, Lambda, etc.) and deploy microservices on EKS with ALB Ingress.
  • Data Processing: Work with AWS Step Functions and EMR for data workflows; troubleshoot Spark jobs.
  • Microservices: Deploy ATLAS on ECS and create AWS Glue crawlers for data integration.
  • Strong experience with MLOps is an added advantage.

Required Skills:

  • Experience with AWS services (ECS, ECR, Lambda, Step Functions, EMR, Glue, etc.).
  • Proficient in CI/CD, Terraform, and Python scripting.
  • Experience deploying EKS clusters and using AWS ALB for routing.
  • Strong troubleshooting skills with EMR and Spark.
  • Understanding of or experience with AWS EMR, SageMaker, and Databricks would be an added advantage

Preferred:

  • AWS Certification (DevOps, Solutions Architect, etc.).
  • Experience with microservices and GPU-intensive processes.
Global MNC serving 40+ Fortune 500 Companies

Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹26L / yr
Generative AI
Retrieval Augmented Generation (RAG)
Machine Learning (ML)
LangGraph
LangChain

Want to work on exciting GenAI projects for Fortune 500 companies across multiple sectors? Then read on..


About Company:

CSG is a multinational company with a presence in 20 countries and 1,600+ engineers. The company works with more than 40 Fortune 500 customers such as Sony, Samsung, ABB, Thyssenkrupp, Toyota, and Mitsubishi.


Job Description:

We are looking for a talented Generative AI Developer to join our dynamic AI/ML team. This position offers an exciting opportunity to leverage cutting-edge Generative AI (GenAI) technologies to drive innovation and solve real-world problems. You will be responsible for developing and optimizing GenAI-based applications, implementing advanced techniques such as Retrieval-Augmented Generation (RAG), Retrieval-Interleaved Generation (RIG), agentic frameworks, and vector databases. This is a collaborative role where you will work directly with customers and cross-functional teams to design, implement, and optimize AI-driven solutions. Exposure to cloud-native AI platforms such as Amazon Bedrock and Microsoft Azure OpenAI is highly desirable.


Key Responsibilities

Generative AI Application Development:

Design, develop, and deploy GenAI-driven applications to address complex industrial challenges.

Implement Retrieval-Augmented Generation (RAG) and agentic frameworks.


Data Management & Optimization:

Design and optimize document chunking strategies tailored to specific datasets and use cases.

Build, manage, and optimize data embeddings for high-performance similarity searches across vector databases.
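
Chunking strategy design, mentioned above, usually starts from a fixed-size split with overlap so that text cut at one boundary still appears whole in the next chunk. A minimal character-based sketch (real systems often chunk by tokens or sentences instead; sizes here are illustrative):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks with overlap between
    consecutive chunks, a common baseline for RAG ingestion."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
chunks = chunk_text(doc, chunk_size=200, overlap=50)
print(len(chunks), [len(c) for c in chunks])  # 3 chunks of 200 chars each
```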


Collaboration & Integration:

Work closely with data engineers and scientists to integrate AI solutions into existing pipelines.

Collaborate with cross-functional teams to ensure seamless AI implementation.


Cloud & AI Platform Utilization:

Explore and implement best practices for utilizing cloud-native AI platforms, such as Amazon Bedrock and Azure OpenAI, to enhance solution delivery.

Continuous Learning & Innovation:

Stay updated with the latest trends and emerging technologies in the GenAI and AI/ML fields, ensuring our solutions remain cutting-edge.


Requirements:

The ideal candidate will have strong experience in Generative AI technologies, particularly in the areas of RAG, document chunking, and vector database management. They will be able to quickly adapt to evolving AI frameworks and leverage cloud-native platforms to create efficient, scalable solutions. You will be working in a fast-paced and collaborative environment, where innovation and the ability to learn and grow are key to success.

- 3 to 5 years of overall experience in software development, with 3 years focused on AI/ML.

- Minimum 2 years of experience specifically working with Generative AI (GenAI) technologies.

- Python, PySpark and SQL knowledge is necessary for tasks

- Proven ability to work in a collaborative, fast-paced, and innovative environment.


Technical Skills:

- Generative AI Frameworks & Technologies:

- Expertise in Generative AI frameworks, including prompt engineering, fine-tuning, and few-shot learning.

- Familiarity with frameworks such as T5 (Text-to-Text Transfer Transformer), LangChain, LangGraph, and the open-source tech stack: Ollama, Mistral, DeepSeek.

- Strong knowledge of Retrieval-Augmented Generation (RAG) for combining LLMs with external data retrieval systems.


Data Management:

- Experience in designing chunking strategies for different datasets.

- Expertise in data embedding techniques and experience with vector databases such as Pinecone and ChromaDB

- Programming & AI/ML Libraries:

- Strong programming skills in Python.

- Experience with AI/ML libraries such as TensorFlow, PyTorch, and Hugging Face Transformers.


Cloud Platforms & Integration:

- Familiarity with cloud services for AI/ML workloads (AWS, Azure).

- Experience with API integration for AI services and building scalable applications.

- Certifications (Optional but Desirable):

- Certification in AI/ML (e.g., TensorFlow, AWS Certified Machine Learning Specialty).

- Certification or coursework in Generative AI or related technologies.

Product based company

Agency job
Bengaluru (Bangalore)
4 - 9 yrs
₹12L - ₹13L / yr
.NET
ASP.NET
ASP.NET MVC
Microservices
FastAPI

Technical Lead – Full Stack 

Work Location (WFO):

Nagar, Bengaluru, Karnataka

Interview Process:

L1 Interview – Face-to-Face at Office

Experience Required:

4-6 Years (minimum 1+ years in a Technical Leadership role)

Budget:

Up to 13 LPA

Role Overview:

The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.

Key Responsibilities:

  • Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
  • Lead full-stack development using .NET and modern open-source technologies
  • Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
  • Design and implement AI Agents, SSO, and unified UI experiences
  • Manage sprint planning, backlogs, and collaborate with Product Owners
  • Implement CI/CD pipelines using Jenkins, GitHub Actions
  • Drive containerization and orchestration using Docker & Kubernetes
  • Ensure secure deployments and cloud infrastructure management
  • Establish engineering best practices, code reviews, and architecture governance
  • Mentor teams on Clean Architecture, SOLID principles, and DevOps practices
Required Skills:

  • ReactJS, FastAPI, Python, REST/GraphQL
  • ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
  • Strong experience in Microservices Architecture
  • DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
  • Cloud Platforms: AWS / Azure / GCP
  • AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
  • Security: RBAC, API security, secrets management

Qualifications:

  • BE / BTech in Computer Science
Travel Tech - IPO company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
12 - 16 yrs
₹80L - ₹130L / yr
Distributed Systems
Search systems
Pricing & Fare Engine
Booking & Ticketing
Airline Integrations

Director of Engineering — Flights Platform

AI-First Travel Commerce · High-Scale Distributed Systems · Marketplace Infrastructure


🌏 The Problem Space

A flight search looks trivially simple. It is anything but.


Every query you fire triggers a choreography of distributed systems operating in real-time — integrating with a dozen airline GDS/NDC providers, computing dynamic fares across inventory buckets and fare rules, ranking thousands of itineraries by relevance and business intent, and returning a ranked, priced, bookable result set — all in under 100ms.


→ Millions of search queries per minute

→ <100ms end-to-end SLA with external API dependencies

→ High-value transactions — a bug here means a missed booking, not a failed render

→ Pricing errors erode trust faster than any other failure mode


We are rebuilding the Flights platform as a real-time commerce engine for Bharat — AI-native from day zero, built to power both B2C consumer journeys and high-stakes B2B enterprise corridors.


This is a once-in-a-decade opportunity to build national-scale flight infrastructure from first principles.

🧠 What You Will Own

You will own the full Flights platform — systems, architecture, and the teams that build them.


Core System Domains:

•Search Systems — high-throughput, low-latency query pipelines returning ranked, bookable options

•Pricing & Fare Engine — dynamic pricing logic, fare rules, promotional overlays, and real-time validation

•Booking & Ticketing — transaction-critical flows requiring strict consistency, idempotency, and zero data loss

•Airline Integrations — managing unreliable external GDS/NDC APIs with retries, circuit-breakers, and reconciliation

•Post-Booking Flows — cancellations, modifications, refunds — correctness at the margin is non-negotiable
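
The circuit-breaker pattern called out above for unreliable GDS/NDC APIs can be sketched in a few lines. A minimal single-threaded illustration (parameter names are illustrative; a production breaker would also need thread safety, per-endpoint state, and metrics):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive errors,
    fail fast while open, and allow one probe after `reset_after` seconds."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one probe through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

A supplier call would then be wrapped as `breaker.call(fetch_fares, query)`, where `fetch_fares` stands in for a hypothetical GDS client function.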


Platform Scope:

•High-scale APIs serving consumer apps, B2B enterprise clients, and third-party partners

•Event-driven state machines managing booking workflows across async boundaries

•Observability and reliability infrastructure across all mission-critical flows


Team Scope:

•Lead 15–30+ engineers across multiple product and platform teams

•Manage Engineering Managers and Principal/Staff engineers

•Own hiring, org design, and technical direction


⚙️ Core Engineering Challenges

This role is fundamentally about making the right trade-offs under uncertainty — at scale.


Latency vs. Accuracy — when do you serve a cached fare vs. call a live airline API?

Availability vs. Consistency — graceful degradation at booking time vs. strict price validation

Cost vs. Performance — when is an external API call worth it vs. a cache hit?

Scalability vs. Simplicity — the best system is the one your team can reason about under incident


🤖 AI-First Engineering

AI is not an afterthought. It is load-bearing architecture.

•LLM-powered pricing intelligence — dynamic fare prediction and demand signals

•RAG pipelines for fare rules, refund policy, and support automation

•Agentic booking resolution workflows — autonomous exception handling at scale

•MCP-based orchestration layers for multi-provider integration


⚖️ Key Responsibilities

Architecture & Distributed Systems

•Design and evolve sub-100ms distributed query systems serving millions of concurrent searches

•Build fault-tolerant booking pipelines with strong consistency and durability guarantees

•Drive Kafka-based event architectures for booking state management


Reliability & Observability

•Own 99.99%+ availability for booking and pricing systems

•Build deep observability — metrics, distributed tracing, structured logging, SLOs/SLAs

•Lead post-incident reviews and drive systemic reliability improvements


Business Partnership

•Partner with Product, Revenue, and Partnerships to translate commercial goals into architecture

•Influence platform roadmap, supplier strategy, and long-term technical investment


🛠️ Technology Stack

Backend: Java · Kotlin · Go · Python

Architecture: Microservices · Event-Driven (Kafka) · gRPC

Data: Redis · Aerospike · DynamoDB · Elasticsearch

Cloud: AWS (EKS, EC2, S3)

Observability: Prometheus · Grafana · OpenTelemetry


👤 Who You Are

•12–16 years in backend/distributed systems; 5+ years in an Engineering Leadership role, having led teams of 15–50 engineers

•Built and scaled large B2C + B2B platforms — Travel Tech, FinTech, or high-scale Consumer

•Deep expertise in real-time systems, marketplace dynamics, and external API integration

•Tier-I institute background strongly preferred (IIT / IIIT / NIT / IISC / BITS / VIT / SRM — CSE/ISE)


🚀 Why This Matters

Build national-scale infrastructure for 1.4 billion people

Sit at the intersection of AI · distributed systems · marketplace economics

Define the future of travel commerce in India — from architecture to product



Thingularity

Agency job
via Thomasmount Consulting by Shirin Shahana
Bengaluru (Bangalore)
4 - 8 yrs
₹18L - ₹20L / yr
Python
SQL
ETL

Job Summary

We are seeking a skilled Data Engineer with 4+ years of experience in building scalable data pipelines and working with modern data platforms. The ideal candidate should have strong expertise in Python, SQL, and cloud-based data solutions, with hands-on experience in ETL/ELT processes and data warehousing.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Python
  • Develop and optimize ETL/ELT workflows for data ingestion and transformation
  • Work with structured and unstructured data from multiple sources
  • Build and manage data warehouses/data lakes
  • Perform data validation, cleansing, and quality checks
  • Optimize SQL queries and improve data processing performance
  • Collaborate with data analysts, data scientists, and business teams
  • Implement data governance, security, and best practices
  • Monitor pipelines and troubleshoot production issues

Required Skills

  • Strong programming experience in Python (Pandas, NumPy, PySpark preferred)
  • Excellent SQL skills (joins, window functions, performance tuning)
  • Experience with ETL tools like Informatica, Talend, or dbt
  • Hands-on experience with cloud platforms (Azure / AWS / GCP)
  • Experience in data warehousing solutions like Snowflake, Redshift, BigQuery
  • Knowledge of workflow orchestration tools like Apache Airflow
  • Familiarity with version control tools like Git
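
The window-function skill listed above can be exercised directly with Python's bundled SQLite driver (window functions require SQLite 3.25+, shipped with modern Python builds). The `orders` table and its rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('alice', 100), ('alice', 300), ('bob', 200), ('bob', 50);
""")
# Rank each customer's orders by amount with a window function.
rows = conn.execute("""
    SELECT customer, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY customer, rnk
""").fetchall()
print(rows)
# [('alice', 300, 1), ('alice', 100, 2), ('bob', 200, 1), ('bob', 50, 2)]
```

The same `PARTITION BY ... ORDER BY` shape carries over to Snowflake, Redshift, and BigQuery.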

Preferred Skills

  • Experience with Big Data technologies (Spark, Hadoop)
  • Knowledge of streaming tools like Kafka
  • Exposure to CI/CD pipelines and DevOps practices
  • Experience in data modeling (Star/Snowflake schema)
  • Understanding of APIs and data integration


Bengaluru (Bangalore)
4 - 10 yrs
₹1L - ₹10L / yr
.NET
SSO
ASP.NET
ASP.NET MVC
MySQL

Dear Candidates,


We have an urgent requirement for a Technical Lead – Full Stack role based in Bangalore. Please find the details below:


Work Location (WFO):

Nagar, Bengaluru, Karnataka


Interview Process:

L1 Interview – Face-to-Face at Office


Experience Required:

4-6 Years (minimum 1+ years in a Technical Leadership role)


Role Overview:

The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.

Key Responsibilities:

  • Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
  • Lead full-stack development using .NET and modern open-source technologies
  • Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
  • Design and implement AI Agents, SSO, and unified UI experiences
  • Manage sprint planning, backlogs, and collaborate with Product Owners
  • Implement CI/CD pipelines using Jenkins, GitHub Actions
  • Drive containerization and orchestration using Docker & Kubernetes
  • Ensure secure deployments and cloud infrastructure management
  • Establish engineering best practices, code reviews, and architecture governance
  • Mentor teams on Clean Architecture, SOLID principles, and DevOps practices

Required Skills:

  • ReactJS, FastAPI, Python, REST/GraphQL
  • ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
  • Strong experience in Microservices Architecture
  • DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
  • Cloud Platforms: AWS / Azure / GCP
  • AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
  • Security: RBAC, API security, secrets management

Qualifications:

  • BE / BTech in Computer Science
Euphoric Thought Technologies
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹10L / yr
Python
Django
PostgreSQL
RESTful APIs

Job Summary

We are looking for a skilled Python Developer with 3 years of experience to join our team in Bangalore. The ideal candidate should have strong expertise in Python, Django, and PostgreSQL, along with a good understanding of backend development. Knowledge of Java will be an added advantage.


Key Responsibilities

Develop, test, and maintain scalable backend applications using Python and Django

Design and manage databases using PostgreSQL

Write clean, efficient, and reusable code

Collaborate with cross-functional teams to define, design, and ship new features

Debug and resolve technical issues and optimize application performance

Participate in code reviews and ensure best coding practices


Required Skills

Strong experience in Python

Hands-on experience with Django framework

Good knowledge of PostgreSQL database

Understanding of REST APIs and web services

Familiarity with version control systems (e.g., Git)


Good to Have

Basic knowledge of Java

Experience with cloud platforms or deployment processes

Understanding of front-end technologies is a plus


Qualifications

Bachelor’s degree in Computer Science, Engineering, or related field


Additional Requirements

Immediate joiners or candidates with short notice period preferred

Strong problem-solving and analytical skills

Good communication and teamwork abilities

Euphoric Thought Technologies
Bengaluru (Bangalore)
3 - 4 yrs
₹6L - ₹10L / yr
Python
Django
PostgreSQL

Job Summary

We are looking for a skilled Python Developer with 3 years of experience to join our team in Bangalore. The ideal candidate should have strong expertise in Python, Django, and PostgreSQL, along with a good understanding of backend development. Knowledge of Java will be an added advantage.


Key Responsibilities

Develop, test, and maintain scalable backend applications using Python and Django

Design and manage databases using PostgreSQL

Write clean, efficient, and reusable code

Collaborate with cross-functional teams to define, design, and ship new features

Debug and resolve technical issues and optimize application performance

Participate in code reviews and ensure best coding practices


Required Skills

Strong experience in Python

Hands-on experience with Django framework

Good knowledge of PostgreSQL database

Understanding of REST APIs and web services

Familiarity with version control systems (e.g., Git)


Good to Have

Basic knowledge of Java

Experience with cloud platforms or deployment processes

Understanding of front-end technologies is a plus


Qualifications

Bachelor’s degree in Computer Science, Engineering, or related field


Additional Requirements

Immediate joiners or candidates with short notice period preferred

Strong problem-solving and analytical skills

Good communication and teamwork abilities

INI8 Labs
Posted by Shwetha K
HSR layout, Bengaluru (Bangalore)
6 - 10 yrs
₹18L - ₹30L / yr
Python
Django
MongoDB
Amazon Web Services (AWS)
Go Programming (Golang)

Full-Stack Developer (Backend-Focused)

We are seeking a seasoned Full-Stack Developer with strong expertise in backend engineering using Python and Golang. In this role, you will take ownership of backend systems while contributing to the development of modern, responsive frontend interfaces. The focus will be on building secure, scalable, and high-performance applications, with emphasis on API development, database engineering, and cloud deployment.

Key Responsibilities

  • Develop and enhance backend services using Python frameworks such as Django or FastAPI
  • Design, build, and maintain RESTful APIs and microservices
  • Work extensively with relational and NoSQL databases, including PostgreSQL, MySQL, and MongoDB
  • Collaborate with frontend developers to integrate user-facing elements with backend logic
  • Implement efficient, secure, and scalable application architectures
  • Troubleshoot and resolve software defects across different environments
  • Optimize performance and reliability of backend services
  • Write clean, maintainable, and well-tested code following best practices
  • Contribute to DevOps activities, including CI/CD pipelines and containerization

Required Skills & Qualifications

  • 6+ years of experience in full-stack or backend-focused development
  • Strong proficiency in Python with hands-on experience in frameworks like Django or FastAPI
  • Solid understanding of SQL and NoSQL databases, including data modeling and query optimization
  • Familiarity with modern frontend technologies such as React, Vue, or Angular
  • Experience with Docker, Kubernetes, and at least one cloud platform (AWS, Azure, or GCP) is preferred
  • Strong understanding of system design, distributed systems, and microservices architecture
  • Experience with Git and CI/CD automation pipelines
  • Excellent problem-solving skills and ability to work collaboratively


NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹30L / yr
Databricks
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Supervised learning

What You’ll Be Doing:

  • Design and develop advanced AI/ML models to solve complex business problems
  • Work closely with cross-functional teams including data engineers and domain experts
  • Perform exploratory data analysis, data cleaning, and model development
  • Translate business challenges into data-driven solutions and actionable insights
  • Drive innovation in advanced analytics and AI/ML capabilities
  • Communicate model insights effectively to both technical and non-technical stakeholders

What We’re Looking For:

  • 5+ years of experience in AI/ML model development
  • Strong foundation in mathematics, probability, and statistics
  • Proficiency in Python and exposure to Azure Machine Learning / Databricks
  • Experience with supervised & unsupervised learning techniques
  • Domain exposure to Energy / Oil & Gas value chain (preferred)
  • Strong problem-solving, stakeholder management, and communication skills


Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai
2 - 5 yrs
Best in industry
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)

We are seeking a Senior Machine Learning Engineer to support the development and deployment of advanced AI capabilities within the PHI ecosystem.

This role focuses on the execution of Generative AI tasks, including model integration and agent deployment. The candidate will be responsible for building RAG-based workflows and ensuring AI interactions remain grounded and accurate using Google Cloud AI tools.


Key Responsibilities

1. GenAI Integration

  • Develop and maintain integrations with Gemini 1.5 Pro and Flash models
  • Use the Google Gen AI SDK for Python to build and manage model integrations

2. Agent Deployment

  • Assist in deploying AI agents to Vertex AI Agent Engine
  • Work with the Agent Development Kit (ADK) for agent lifecycle management

3. RAG & Embeddings

  • Generate and manage text and multimodal embeddings
  • Support semantic search and Retrieval-Augmented Generation (RAG) pipelines
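
Semantic search in the RAG pipelines above reduces to nearest-neighbour ranking by cosine similarity over embeddings. A minimal sketch with toy 3-dimensional vectors (real embeddings would come from an embedding model such as those on Vertex AI and live in a vector store):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, corpus, top_k=1):
    """Rank (text, embedding) pairs by similarity to the query vector."""
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

corpus = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("flight schedule", [0.1, 0.9, 0.2]),
]
print(retrieve([0.8, 0.2, 0.1], corpus))  # → ['refund policy']
```

The retrieved passages are then injected into the model prompt, which is what keeps generated answers grounded.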

4. Testing & Quality

  • Run evaluation scripts to verify model output quality
  • Ensure models follow grounding and response accuracy guidelines

Must-Have Skills

  • Strong Python programming
  • Experience working with REST APIs
  • Hands-on experience with Vertex AI Studio
  • Experience working with Gemini APIs
  • Understanding of Agentic AI concepts
  • Familiarity with ADK CLI
  • Experience or understanding of RAG architecture
  • Knowledge of embedding generation

Good-to-Have Skills (Foundation):

BigQuery

  • Basic SQL knowledge
  • Experience with data loading
  • Ability to debug and troubleshoot queries

Data Streaming

  • Familiarity with Google Pub/Sub
  • Understanding of synthetic data generation

Visualization

  • Basic reporting and dashboards using Looker Studio
Deqode

Posted by Purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Python
Large Language Models (LLM)
FastAPI
Windows Azure
CI/CD

👉 Job Title: Senior Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode: 5 days work from office

(Candidates serving notice period are preferred)


Role Summary

We are seeking a Senior Backend Developer with strong expertise in Python and FastAPI to build scalable, high-performance backend systems integrated with LLM technologies on Azure. The role involves designing distributed systems, optimizing data pipelines, and ensuring secure, enterprise-grade applications.


Key Responsibilities

  • Develop backend services using Python & FastAPI (async, middleware)
  • Build high-concurrency, scalable systems and microservices
  • Work with Azure services and event-driven architectures
  • Optimize MongoDB & Redis for performance
  • Integrate LLM APIs (OpenAI, Gemini, Claude)
  • Implement security (JWT, encryption, API management)
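
Token-based security such as JWT, listed above, rests on HMAC signing of a header and payload. A stdlib-only sketch of HS256-style sign/verify (illustrative only; production code would use a maintained JWT library and include `exp`/`iss` claims and key rotation):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """Produce an HS256-style token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_token({"sub": "user-42"}, b"secret")
print(verify_token(token, b"secret"), verify_token(token, b"wrong"))  # True False
```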

Mandatory Skills (Top 3)

  1. Strong Python backend development with FastAPI
  2. Hands-on experience with Microsoft Azure cloud
  3. Experience in building scalable distributed/microservices systems


Good to Have

  • Docker, Kubernetes, CI/CD
  • LLM frameworks (LangChain, vector DBs)
  • Monitoring tools and real-time data processing


Deqode

Posted by Purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Java
Python
NodeJS (Node.js)
Windows Azure
Google Cloud Platform (GCP)

👉 Job Title: Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode: 5 days work from office

(Candidates serving notice period are preferred)


Role Summary

We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.


💫 Key Responsibilities

✅ Design and develop scalable microservices and RESTful APIs

✅ Build event-driven architectures for asynchronous processing

✅ Integrate backend systems with cloud platforms (GCP/Azure)

✅ Ensure secure, reliable, and optimized data handling

✅ Collaborate with cross-functional teams (UI, Data, Platform)

✅ Follow best practices in coding, testing, CI/CD, and containerization


💫 Mandatory Skills (Top 3)

✅ Strong backend programming experience (Python / Node.js / Java)

✅ Expertise in API development & Microservices architecture

✅ Hands-on experience with Cloud platforms (GCP or Azure)





TalentXO
Posted by Tabbasum Shaikh
Bengaluru (Bangalore)
4 - 7 yrs
₹34L - ₹40L / yr
Python
LLM
OpenAI
Gemini
RAG

Role & Responsibilities

As a Senior GenAI Engineer you will own the AI layer of our product — building the features that make Zenskar intelligent. This is not a research role and not a prompt-engineering role. You will build production AI systems that enterprise clients depend on, which means reliability, observability, and rigorous evals matter as much as the AI capability itself. You own the full vertical — the model, the pipeline, and the UI.

  • Build and own CS Copilot — a real-time assistant for customer success teams, spanning STT pipelines, live transcription, and LLM-powered suggestions
  • Build LLM-powered document understanding features — extracting structured, reliable data from unstructured enterprise documents
  • Own AI feature UIs end-to-end — you build the interface, not just the model integration layer
  • Design and maintain an eval framework — define what 'working' means for each AI feature and catch regressions before users do
  • Drive model selection and integration decisions — choosing the right provider and approach for each use case, managing latency and cost
  • Own AI platform reliability — observability, fallback behaviour, and graceful degradation when models fail
  • Work closely with product, customer success, and the full-stack engineer — AI features only matter if they are usable and trusted by real users

THE IMPACT YOU'LL MAKE-

  • You will define what AI means at Zenskar — the features you ship will be the most visible and differentiated parts of the product
  • CS Copilot, if done well, changes how enterprise customer success teams operate every single day — this is a high-stakes, high-visibility surface
  • You will establish the engineering culture around AI reliability at Zenskar — evals, observability, and disciplined iteration
  • Your work will directly accelerate enterprise deals — AI features are increasingly a buying criterion for our clients
  • You will be the person who brings engineering rigour to a domain where most companies ship demos and call it a feature

Ideal Candidate

  • Strong Senior GenAI / AI Backend Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production
  • Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems
  • Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects
  • Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines
  • Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases
  • Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)
  • Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation
  • Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects
  • Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations
  • Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking
  • Mandatory (Company) – Product companies / startups, preferably Series A to Series D
  • Mandatory (Note) - Candidate's overall experience should not be more than 7 Yrs
  • Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks
  • Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
  • Preferred (Skill) – Experience with fine-tuning (LoRA / QLoRA) or open-source model deployment (vLLM / Ollama)
  • Preferred (Frontend) – Basic ability to build or contribute to frontend (React or similar)
  • Highly Preferred (Education) – Candidates from Tier-1 institutes (IITs, BITS, NITs, IIITs, top global universities)
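The RAG requirement above (chunking, embeddings, retrieval) reduces to a small pipeline. This sketch uses a toy bag-of-words "embedding" so it runs standalone; a real pipeline would call an embedding model and a vector DB:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = "Invoices are validated against purchase orders. Payments are reconciled monthly."
top = retrieve("how are invoices validated", chunk(docs, size=6))
```

Swapping `embed` for a real model and the `sorted` call for a vector-index lookup gives the production version of the same three stages.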


Zeuron.AI

Posted by Kavitha Rajan
Bengaluru (Bangalore)
1 - 2 yrs
₹11L - ₹12L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Computer Vision
Flutter
Embedded C
+2 more

Job Title: Software/Hardware Engineer (IIT/NIT)

Location: Bangalore

Website: https://www.zeuron.ai

Experience: 1 Year

CTC: ₹12 LPA


About the Company

Zeuron.ai is a Bangalore-based deep-tech startup founded in 2019, focused on building brain-inspired computing and AI-driven healthcare solutions. The company combines neuroscience, AI, and gaming to create innovative digital therapeutics and neurotechnology platforms for improving brain health, rehabilitation, and overall well-being.

About the Role

We are looking for a highly motivated Software/Hardware Engineer from premier institutes (IIT/NIT) with strong fundamentals and a passion for building scalable and efficient systems. This role offers an opportunity to work on cutting-edge technology and solve real-world problems.

 

Key Responsibilities

Design, develop, and optimize software/hardware solutions

Work on system architecture, debugging, and performance improvements

Collaborate with cross-functional teams (product, design, operations)

Participate in code reviews, testing, and deployment processes

Contribute to innovation and continuous improvement initiatives

 

Requirements

B.Tech/M.Tech from IITs/NITs (Computer Science, Electronics, Electrical, or related fields)

1 year of experience (internships/project experience considered)

Strong programming skills (C/C++/Python/Java) or hardware fundamentals (embedded systems, VLSI, circuit design)

Good understanding of data structures, algorithms, and system design

Problem-solving mindset with strong analytical skills


Preferred Skills

Experience with embedded systems, IoT, or product development

Knowledge of cloud platforms or system-level programming

Good in Computer vision, Flutter, JavaScript, AI/ML

Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹8L / yr
databricks
ETL
PySpark
Apache Spark
CI/CD
+7 more

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV are mandatory


Job Description: -


* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).

* Develop scalable, high performance data solutions using Spark distributed processing.

* Lead engineering initiatives focused on automation, performance tuning, and platform modernization.

* Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.

* Collaborate with cross-functional teams to translate business needs into technical solutions.

* Ensure data quality, governance, and security across all processes.

* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.

* Participate in code reviews and develop reusable engineering frameworks.

* Should have knowledge of utilizing AI tools to improve productivity and support daily engineering activities.

* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation
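The data-quality responsibility above follows a common pattern: split incoming rows into valid and quarantined sets before promoting them downstream. A Spark-independent sketch of that pattern (column names are hypothetical):

```python
def not_null(column):
    """Check factory: the column must be present and non-null."""
    return lambda row: row.get(column) is not None

def in_range(column, lo, hi):
    """Check factory: the column must fall within [lo, hi]."""
    return lambda row: row.get(column) is not None and lo <= row[column] <= hi

def run_checks(rows, checks):
    """Partition rows into (valid, quarantined) - the shape of a
    bronze-to-silver promotion step in a Delta Lake pipeline."""
    valid, quarantined = [], []
    for row in rows:
        (valid if all(c(row) for c in checks) else quarantined).append(row)
    return valid, quarantined

rows = [{"id": 1, "amount": 50.0}, {"id": 2, "amount": None}]
valid, bad = run_checks(rows, [not_null("id"), in_range("amount", 0, 1e6)])
```

In Databricks the same checks would typically be expressed as PySpark column expressions or Delta Lake constraints, but the partition-and-quarantine structure is identical.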


Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.

* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).

* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).

* Strong proficiency in Python for data processing, automation, and framework development.

* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.

* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.

* Strong experience with CI/CD and Git-based development workflows.

* Proficiency in data modeling and ETL/ELT pipeline design.

* Experience with automation frameworks and scheduling tools.

* Solid understanding of distributed systems and big data concepts

Hashone Career
Posted by Madhavan I
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹28L / yr
SQL
Python
AtScale

Summary:

Data Engineer/Analytics Engineer with experience in semantic layer modeling using AtScale, building scalable data pipelines, and delivering high-performance analytics solutions on cloud platforms.




 Responsibilities

• Build and maintain ETL/ELT pipelines for large-scale data

• Develop semantic models, cubes, and metrics in AtScale

• Optimize query performance and BI dashboards

• Integrate data platforms (Snowflake, Databricks, BigQuery)

• Collaborate with analysts and business teams




 Skills

• SQL, Python/Scala

• Data modeling (star schema, OLAP)

• AtScale (semantic layer)

• Spark, dbt, Airflow

• BI tools (Tableau, Power BI, Looker)

• AWS / GCP / Azure



 Experience

• 3–8+ years in data/analytics engineering

• Experience with enterprise data platforms and BI systems

Verse
Posted by Ravi K
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹20L / yr
Python
FastAPI
PostgreSQL
Neo4J
LangGraph

Founding Engineer (Bangalore)


The problem:

Business enterprises overpay vendors - on every batch of invoices, every month - because the data that would catch it lives in different systems. We are building an AI agent that processes invoices end-to-end, reasons across all the relevant sources, flags genuine discrepancies, and acts - without a human having to investigate each one.


What you will own

Everything engineering. Schema design to deployment to the 2am fix when something breaks in production. There is no tech lead above you. There is no platform team. There is the architecture, you, and the founders. Concretely, this means building:

  • A multi-stage agentic pipeline that takes a vendor invoice and produces a structured decision - fully autonomous for clear cases, escalating to human review for genuinely ambiguous ones. We use LangGraph, but if you've built equivalent systems with Temporal, Prefect, or custom state machines with LLM orchestration, that works
  • An LLM-powered extraction layer that handles real invoices - scanned PDFs, stamped documents, inconsistent layouts - and returns structured output
  • A graph data model that connects invoices to various sources and can traverse those relationships to detect discrepancies
  • ERP connectors, GST validation logic, and a write-back layer that closes the loop
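The multi-stage pipeline above, with autonomous handling for clear cases and escalation for ambiguous ones, can be modelled as a small state machine — the "custom state machines with LLM orchestration" alternative mentioned alongside LangGraph. Stage names and thresholds here are illustrative:

```python
def stage_extract(case):
    # a real system would call an LLM here; we trust a precomputed confidence
    return "match" if case["confidence"] >= 0.9 else "escalate"

def stage_match(case):
    # cross-source matching would populate flags; any flag forces review
    return "approve" if not case["flags"] else "escalate"

def stage_approve(case):
    case["decision"] = "auto_approved"
    return None  # terminal stage

def stage_escalate(case):
    case["decision"] = "human_review"
    return None  # terminal stage

STAGES = {"extract": stage_extract, "match": stage_match,
          "approve": stage_approve, "escalate": stage_escalate}

def run_pipeline(case, start="extract"):
    """Walk the state machine until a terminal stage records a decision."""
    nxt = start
    while nxt is not None:
        nxt = STAGES[nxt](case)
    return case["decision"]

clear = {"confidence": 0.95, "flags": []}
murky = {"confidence": 0.6, "flags": []}
```

Each stage returns the name of the next stage or `None`, so the routing logic stays inspectable and every case ends in an explicit decision.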


What we need

  • Strong Python. Async FastAPI, clean service boundaries, tests that actually catch bugs. You have shipped Python backends that handled real production load
  • Solid Postgres. Complex queries, schema design, migrations without downtime, row-level security for multi-tenant data. pgvector is a plus - if not, you pick it up fast
  • LLM API experience in production. You have called an LLM API for something that real users depended on. You know about structured output, retry logic, cost management, prompt versioning. A side project counts if it was genuinely deployed
  • Comfort with graph data models. You understand when a graph is the right structure and when it is not. You do not need deep Neo4j production experience - you need to understand graph relationships conceptually and be willing to learn Cypher. It is a 2-day ramp for the right person
  • Working knowledge of deployment. Deployed and operated production workloads on GCP. Cloud Run, Cloud SQL, Cloud Storage, Redis — you're comfortable across the stack. If you've done it on AWS, the translation isn't hard, but GCP is where we are
  • You own things. Not "I contributed to" - you designed it, shipped it, and fixed it when it broke. That pattern needs to be visible in your history
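The retry logic mentioned above usually means exponential backoff with jitter around transient provider errors. A minimal sketch, with a fabricated `flaky_llm_call` standing in for the real API:

```python
import random
import time

class TransientLLMError(Exception):
    """Stand-in for a retryable provider failure (e.g. a 429 rate limit)."""

def flaky_llm_call(prompt, _state={"calls": 0}):
    """Fake provider call that fails twice, then succeeds."""
    _state["calls"] += 1
    if _state["calls"] < 3:
        raise TransientLLMError("rate limited")
    return f"answer to: {prompt}"

def call_with_retries(fn, prompt, max_attempts=5, base_delay=0.01):
    """Retry transient failures with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn(prompt)
        except TransientLLMError:
            if attempt == max_attempts - 1:
                raise
            # jitter avoids synchronized retries across workers
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

out = call_with_retries(flaky_llm_call, "classify this invoice")
```

Cost management then falls out naturally: the same wrapper is where you would record token counts and latency per attempt.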


Good to have, not mandatory

  • Built an agentic pipeline with multiple stages
  • Any fintech, P2P domain experience - even tangential
  • Worked at a startup with under 20 people
  • A GitHub, blog, or writeup that shows how you think about a hard technical problem


What you get

  • The hardest engineering problem you would have worked on. This is not CRUD with an LLM bolted on
  • Real ownership. First engineering hire. Your architectural decisions will be in this product five years from now
  • Equity that matters. ESOP - Open to discussion. We are pre-seed - this is a bet, not a guarantee. We will not pretend otherwise
  • No meetings tax. You work directly with the founders. The product is specified clearly. You know what you are building and why


Honest about stage: We do not have a production ready infra yet. We have a complete architecture specification and a working prototype. If you need the stability of an established engineering org, this is not the right moment. If you want to build something real from zero and own a meaningful piece of it, it is.


The founders

One of us has spent 20 years building revenue and operational engines at companies where there was no playbook - part of the pilot team that established the world's largest search company's direct sales operations in India, managed global operations for a global mobile advertising platform, scaled a B2C platform to become one of India’s leading edtech platforms and most recently worked on building an enterprise Agentic Voice AI platform. The other has spent 15 years taking AI from demo to production in domains where failure is expensive - voice, lending, and conversational systems across a Series D conversational AI company, a major telco, a Big 4, and a leading NBFC.


Two IIT/IIM alumni who have both watched AI work in enterprise, and know exactly what it takes to get it there. We are not building this product because it sounds interesting. We are building it because we have both sat across the table from CFOs who know they are losing margin and have no tool capable of doing anything about it.

Improving
Posted by Rohini Jadhav
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹35L / yr
Python
Kubernetes
Jenkins
CI/CD
Docker
+1 more

What are we looking for?

  • You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  • You are able to manage multi region clusters for disaster recovery.
  • You have a good understanding of AWS stack.
  • You have experience of production level in Kubernetes. 
  • You are comfortable coding/programming and can do so whenever required. 
  • You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
  • You love automating things, sometimes even things that seem impossible to automate. For example, one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
  • You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics. Because you know that words are easy to talk about but there is a fair amount of work to build such a system in practice.
  • You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
  • You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  • You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.

What you will be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions are continuously evolving in this space, but fundamentally you will be solving problems with the simplest, most scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open source projects, understand their capabilities, limitations and apply the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.
Bengaluru (Bangalore)
2 - 4 yrs
₹21L - ₹28L / yr
Artificial Intelligence (AI)
Python

Strong Junior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 1.5+ years of full-time experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)

Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)

Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.

Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)

Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems

Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)

Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)

Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem

Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)

Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be waived for candidates from top-notch product companies

Mandatory (Exclusion) - Avoid candidates who are only Prompt Engineers, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.

Talent Pro
Bengaluru (Bangalore)
4 - 7 yrs
₹37L - ₹48L / yr
Artificial Intelligence (AI)
Python

Strong Senior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production

Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems

Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects

Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines

Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases

Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)

Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation

Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects

Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations

Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking

Mandatory (Company) – Product companies / startups, preferably Series A to Series D

Mandatory (Note) - Candidate's overall experience should not be more than 7 Yrs

Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks

Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience

AI-powered content creation and automation platform


Agency job
via Uplers by Shrishti Singh
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹28L / yr
Python
NodeJS (Node.js)
TypeScript
Artificial Intelligence (AI)
Generative AI
+2 more

Software Engineer

Onsite - HSR Bangalore

6 Days work from Office (Flexible working hours)


Product is a PowerPoint AI assistant used by consulting companies and Fortune 500 teams. A typical professional spends 1 to 3 hours creating one slide. With Product, they create a v1 of their entire deck in 10 minutes, and make changes like “turn this table to a chart” in seconds, directly within PowerPoint.

In the next 2 years, our goal at company is to forever change the way business presentations are made.


Who are we?

  • small, strong team of 5
  • founders are CS graduates from IIT Kharagpur with a specialisation in AI
  • work 6 days a week from our office in HSR Layout in Bangalore
  • funded by Y Combinator and other amazing investors
  • used by consulting companies and Fortune 500 teams


Your responsibilities (in order)

  • Design, implement, test, and deploy full features
  • Design and implement a robust infrastructure to enable rapid development and automated testing
  • Look at usage data to iterate on features


What we’re looking for

  • Undergraduate or master's in Computer Science or equivalent degree
  • 2+ years of backend or DevOps software engineering experience
  • Experience with TypeScript (JavaScript) or Python


You’ll be a good fit if

  • You want to work on a product that can change the way a very large number of people work
  • The chaos of high growth and things breaking is exciting to you
  • You are a workaholic, looking to upskill faster than most people think is possible. This role is not a good fit for you if you’re looking to prioritise work-life balance.
  • You prefer working in-person with other smart people who are excited and passionate about what they’re building
  • You love solving very hard problems at a rapid pace. We discuss timelines in days or weeks, so you’ll constantly be expected to ship really high-quality work.



Perks

  • Comprehensive health insurance for you and dependents
  • Workstation enhancements
  • Subscriptions to AI tools such as Cursor, ChatGPT, etc.

(If there's anything else we can do to make your work more enjoyable, just ask)


If you are interested in proceeding, we would be happy to move your profile to the next stage of the evaluation process.

Kindly share the following details to help us take this forward :


  • Current CTC (Fixed + Variable):
  • Expected CTC:
  • Notice Period (If currently serving, please mention your Last Working Day)
  • Details of any active offers in hand (if applicable)
  • Expected/Available Date of Joining (if applicable)
  • Attach Updated CV:
  • Attach Github Link / Leet code link or other:
  • Current Location:
  • Preferred Location:
  • Reason for job Change:
  • Reason for relocation (if applicable):
  • Are you comfortable with 6 days WFO (flexible working hours)? (Yes / No):

Oil and Gas Industry (petroleum refinery)


Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹25L / yr
Python
MLOps
Machine Learning (ML)
API
CI/CD
+5 more

🔹 Role: Python Engineer – Python & MLOps

📍 Location: Bellandur, Bangalore

🕐 Work Timings: 01:30 PM – 10:30 PM

🏢 Work Mode: Monday (WFH), Tuesday–Friday (WFO)

📅 Experience: 8-12 Years (Ideal: 8-10 Years)

🔹 Role Overview

This role focuses on building and maintaining a production-grade AI/ML platform. You will work on scalable Python systems, MLOps pipelines, APIs, and CI/CD workflows in an enterprise environment.

🔹 Key Responsibilities

✔ Develop production-grade Python applications using OOP principles

✔ Build and enhance MLOps pipelines (training, validation, deployment)

✔ Design and optimize REST APIs with OpenAPI/Swagger

✔ Implement async programming for high-performance systems

✔ Work on CI/CD pipelines (Azure Pipelines / GitHub Actions)

✔ Ensure clean, testable, and maintainable code (PyTest, TDD)

🔹 Required Skills

✔ Strong Python (OOP, modular design)

✔ MLOps & CI/CD pipeline experience

✔ REST API development

✔ Async programming (async/await, concurrency)

✔ Pandas / Polars & Scikit-learn

✔ JSON Schema–driven development

✔ Testing using PyTest
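The async/await requirement above is about fanning out I/O-bound work instead of awaiting it sequentially. A minimal asyncio sketch with a hypothetical scoring workload:

```python
import asyncio

async def score_record(record: dict) -> dict:
    # stands in for an I/O-bound call (model endpoint, database, feature store)
    await asyncio.sleep(0.01)
    return {**record, "score": record["value"] * 2}

async def score_batch(records: list[dict]) -> list[dict]:
    """Run all scoring calls concurrently; total wall time is roughly
    one call's latency instead of the sum of all of them."""
    return await asyncio.gather(*(score_record(r) for r in records))

records = [{"id": i, "value": i} for i in range(3)]
scored = asyncio.run(score_batch(records))
```

The same pattern applies inside an async web framework: the event loop overlaps waiting time across requests, which is where the high-performance claim comes from.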

🔹 Nice to Have

➕ Azure ML SDK

➕ Pydantic

➕ Azure Cosmos DB

➕ Experience with large enterprise platforms

enParadigm

Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Up to ₹10L / yr (varies)
Java
Python
Selenium WebDriver
Cypress
Playwright

Job Description:


Test Design & Execution

Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.


Automation Development

Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.


Defect Management

Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.


API & Backend Testing

Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.
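Backend validation of this kind typically means checking that an aggregate in the database matches the figure the API or UI reports. A self-contained sketch using SQLite (the schema and expected value are hypothetical):

```python
import sqlite3

# in-memory database seeded with sample rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "shipped", 99.0), (2, "pending", 45.5), (3, "shipped", 10.0)])

def validate_shipped_total(conn, expected_total):
    """Backend check: does the database aggregate match the reported figure?"""
    (actual,) = conn.execute(
        "SELECT SUM(amount) FROM orders WHERE status = 'shipped'"
    ).fetchone()
    return actual == expected_total

ok = validate_shipped_total(conn, 109.0)
```

Against Oracle or another production database only the connection and SQL dialect change; the assertion pattern is the same.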


Collaboration

Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.


CI/CD Integration

Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.


Required Skills & Experience

  • Minimum 2+ years of experience in Software Quality Assurance or Automation Testing.
  • Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
  • Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
  • Strong experience in functional, regression, integration, and UI testing.
  • Solid understanding of SQL for data validation and backend testing.
  • Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.


Desirable Skills

  • Experience in mobile application testing (Android/iOS).
  • Exposure to performance testing tools such as JMeter.
  • Experience working with cloud platforms like AWS or Azure.
Mid Size Product Engineering Services Company


Agency job
via Vidpro Consultancy Services by Vidyadhar Reddy
Remote, Bengaluru (Bangalore), Chennai, Hyderabad
20 - 26 yrs
₹65L - ₹120L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+18 more

This role will report to the Chief Technology Officer


You Will Be Responsible For


* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.

* Leading a team in building a high-performing and scalable SaaS product.

* Conducting code reviews to maintain code quality and follow best practices

* DevOps practice development on promoting automation, including asset creation, enterprise strategy definition, and training teams

* Developing and building microservices leveraging cloud services

* Working on application security aspects

* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.

* Creating a culture of innovation that enables the continued growth of individuals and the company

* Working closely with Product and Business teams to build winning solutions

* Leading talent management, including hiring, developing, and retaining a world-class team


Ideal Profile


* You possess a Degree in Engineering or a related field and have 20+ years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience in building a SaaS / Fintech platform.

* Proficiency in MERN / Java / Full Stack.

* You have led a team in optimizing the performance and scalability of a product.

* You have extensive experience with DevOps environment and CI/CD practices and can train teams.

* You're a hands-on leader, visionary, and problem solver with a passion for excellence.

* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.


What's on Offer?


* Exciting opportunity to drive the Engineering efforts of a reputed organisation

* Work alongside & learn from best-in-class talent

* Competitive compensation + ESOPs

Mercari, Inc

Posted by Ashwin S
Bengaluru (Bangalore)
6 - 9 yrs
Best in industry
Machine Learning (ML)
PyTorch
TensorFlow
NumPy
Python
+2 more

Introduction

About Us:


Mercari is a Japan-based C2C marketplace company founded in 2013 with the mission to “Create value in a global marketplace where anyone can buy & sell.” From being the first tech unicorn from Japan before its IPO in 2018, we have come a long way towards becoming a global player, and we continue to work diligently on our transformation journey with a strong focus on our mission.

Since its inception, Mercari Group has worked to grow its services, investing in both our people and technology. Over time Mercari has expanded from being the top player in the C2C marketplace in Japan to new geographies like the U.S. We have also successfully launched new businesses such as Merpay, which is a mobile payment service platform with a vision to create a society where anyone can realize their dreams through a new ecosystem centered not only on payment service but also on credit. Today, Mercari Group is made up of multiple subsidiary businesses including logistics, B2C platform, blockchain, and sports team management.


However, for our services to be utilized by people worldwide, there is still a mountain of work ahead of us. This endeavor naturally requires the best talent and minds, and that is exactly why we launched the India Center of Excellence. With your help, we will continue to take on the world stage and strive to grow into a successful global tech company.


Our Culture:

To achieve our mission at Mercari, our organization and each of our employees share the same values and perspectives. Our individual guidelines for action are defined by our four values: Go Bold, All for One, Be a Pro and Move Fast. Our organization is also shaped by our four foundations: Sustainability, Diversity & Inclusion, Trust & Openness, and Well-being for Performance. Regardless of how big Mercari gets, the culture will remain essential to achieving our mission and something we want to preserve throughout our organization. We invite you to read the Mercari Culture Doc which summarizes the behaviors and mindset shared by Mercari and its employees. We continue to build an environment where all of our members of diverse backgrounds are accepted and recognized, and where they can thrive while holding dear to Mercari’s culture.


Work Responsibilities

  • Machine learning engineers working in the Recommendation domain develop the functions and services of the marketplace app Mercari through the development and maintenance of machine learning systems like Recommender systems while leveraging necessary infrastructure and companywide platform tools. 
  • Mercari is actively applying advanced machine learning technology to provide a more convenient, safer, and more enjoyable marketplace. Machine learning engineers use the cloud and Kubernetes to operate and improve machine learning systems.


Bold Challenges

  • We are looking for people who are interested in our services, mission, and values, and want to work where engineers can go bold, use the latest technology, make autonomous decisions, and take on challenges at a rapid pace.
  • Develop and optimize machine learning algorithms and models to enhance recommendation system to improve discovery experience of users
  • Collaborate with cross-functional teams and product stakeholders to gather requirements, design solutions, and implement features that improve user engagement
  • Conduct data analysis and experimentation with large-scale data sets to identify patterns, trends, and insights that drive the refinement of recommendation algorithms
  • Utilize machine learning frameworks and libraries to deploy scalable and efficient recommendation solutions.
  • Monitor system performance and conduct A/B testing to evaluate the effectiveness of features.
  • Continuously research and stay updated on advancements in AI/machine learning techniques and recommend innovative approaches to enhance recommendation capabilities.
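The recommendation work above often starts from item-based collaborative filtering: items that the same users interact with are similar. A toy sketch with fabricated interaction data:

```python
import math

# hypothetical user -> item interaction matrix
interactions = {
    "u1": {"camera": 1, "lens": 1, "tripod": 0},
    "u2": {"camera": 1, "lens": 0, "tripod": 1},
    "u3": {"camera": 0, "lens": 1, "tripod": 1},
    "u4": {"camera": 1, "lens": 1, "tripod": 0},
}

def item_vector(item):
    """An item's column of the interaction matrix, in a fixed user order."""
    return [interactions[u][item] for u in sorted(interactions)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similar_items(item, items):
    """Rank other items by co-interaction similarity (item-based CF)."""
    others = [i for i in items if i != item]
    return sorted(others, key=lambda i: cosine(item_vector(item), item_vector(i)),
                  reverse=True)

ranked = similar_items("camera", ["camera", "lens", "tripod"])
```

Production systems replace the explicit vectors with learned embeddings and approximate nearest-neighbour search, but the ranking-by-similarity core is the same.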


Minimum Requirements:

  • 5-9 years of professional experience in end-to-end development of large-scale ML systems in production
  • Strong experience demonstrating development and delivery of end-to-end machine learning solutions starting from experimentation to deploying models, including backend engineering and MLOps, in large scale production systems.
  • Experience using common machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, NumPy, pandas)
  • Deep understanding of machine learning and software engineering fundamentals
  • Basic knowledge and skills related to monitoring system, logging, and common operations in production environment
  • Communication skills to carry out projects in collaboration with multiple teams and stakeholders


Preferred skills:

  • Experience developing Recommender systems utilizing large-scale data sets
  • Basic knowledge of enterprise search systems and related stacks (e.g. ELK)
  • Feature development and bug-fixing skills needed to improve system performance and reliability
  • Experience with technology such as Docker and Kubernetes
  • Experience with cloud platforms (AWS, GCP, Microsoft Azure, etc.)
  • Microservice development and operation experience with Docker and Kubernetes
  • Experience utilizing deep learning models/LLMs in production
  • Publications at top-tier peer-reviewed conferences or journals
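The recommender-system experience listed above often starts from item-based collaborative filtering. As a minimal sketch (with a hypothetical toy interaction matrix, not Mercari's actual system), items can be scored for a user by cosine similarity between item columns:

```python
import numpy as np

# Hypothetical user-item interaction matrix (rows: users, cols: items)
R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

def item_similarity(R: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns of the interaction matrix."""
    norms = np.linalg.norm(R, axis=0)
    sim = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0.0)  # ignore self-similarity
    return sim

def recommend(R: np.ndarray, sim: np.ndarray, user: int, k: int = 2) -> np.ndarray:
    """Rank unseen items by summed similarity to the user's past interactions."""
    scores = sim @ R[user]
    scores[R[user] > 0] = -np.inf  # exclude already-seen items
    return np.argsort(scores)[::-1][:k]

sim = item_similarity(R)
print(recommend(R, sim, user=0))  # top-k unseen items for user 0
```

At large scale the dense similarity matrix is replaced by approximate nearest-neighbor indexes or learned embeddings, but the scoring logic is the same shape.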


Employment Status

Full-time

Office

Bangalore

Hybrid workstyle

  • We believe in high performance and professionalism. We work from the office 2 days/week and from home 3 days/week
  • To build a strong and highly engaged organization in India, we strongly encourage everyone to work from our Bangalore office, especially during the initial office setup phase
  • We will continue to review and update the policy to address future organizational needs

Work Hours

  • Full flextime (no core time)

*Working hours are flexible outside of common team meetings

Media


Owned Media

  • Mercari Engineering Portal
  • AI at Mercari portal
  • Mercan - Introduces the people that make Mercari
  • Mercari US Blog

Related Articles

  • Development Platforms and Platformers: On Rising to the Global Standard Ken Wakasa, Mercari CTO | mercan
  • “I'm Not a Talented Engineer” Insists the Member-Turned-Manager Revamping Our Internal CS Tool | mercan
  • Personalize to Globalize: How Mercari is reshaping their app, their company, and the world | mercan
  • The Providers of the Safe and Secure Mercari Experience: The TnS Team, Introduced by Its Members! | mercan