Python Jobs in Bangalore (Bengaluru)

Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Quantiphi
Posted by Nikita Sinha
Mumbai, Bengaluru (Bangalore)
5 - 10 yrs
Up to ₹45L / yr (varies)
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)

This role is responsible for architecting and implementing the Agentic capabilities of the PHI ecosystem. The engineer will lead the development of multi-agent systems, enabling seamless interoperability between AI agents, internal tools, and external services.

The position requires a strong focus on AI safety, secure agent orchestration, and tool-connected AI systems capable of executing complex workflows within the health insurance domain.


Key Responsibilities

1. Agent Orchestration

  • Build and manage autonomous AI agents using Agent Development Kit (ADK) and Vertex AI Agent Engine.
  • Design and implement multi-agent workflows capable of handling complex tasks.

2. Interoperability

  • Implement the Model Context Protocol (MCP) to enable connectivity between:
    • AI agents
    • Internal PHI tools
    • External services and APIs.
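MCP standardizes how agents discover and invoke tools over JSON-RPC. As an illustration of the underlying tool-registry idea only (this is not the MCP SDK, and every name below is hypothetical), a minimal in-process dispatcher might look like:

```python
import json

# Hypothetical tool registry: maps tool names to callables, the way an
# MCP server exposes tools to connected agents.
TOOLS = {}

def tool(name):
    """Register a callable under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup_policy")
def lookup_policy(policy_id: str) -> dict:
    # Stand-in for an internal PHI tool call.
    return {"policy_id": policy_id, "status": "active"}

def dispatch(request_json: str) -> str:
    """Route a JSON tool-call request (name + arguments) to the registry."""
    req = json.loads(request_json)
    result = TOOLS[req["name"]](**req["arguments"])
    return json.dumps({"result": result})

print(dispatch('{"name": "lookup_policy", "arguments": {"policy_id": "P-1"}}'))
# {"result": {"policy_id": "P-1", "status": "active"}}
```

The real protocol adds capability negotiation, schemas for each tool, and transport framing on top of this dispatch idea.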

3. Multimodal Development

  • Build real-time, bidirectional audio applications using the Gemini Live API.
  • Integrate image generation models and support multimodal AI capabilities.

4. Safety Engineering

  • Implement AI safety layers to protect sensitive healthcare data.
  • Use Model Armor and Cloud DLP API to:
    • Sanitize prompts
    • Prevent exposure of PII/PHI data
    • Enforce secure AI interactions.
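The prompt-sanitization step can be sketched as a regex pre-filter. This is only an illustration of the idea; Cloud DLP provides managed infoType detectors rather than hand-written patterns, and the patterns below are deliberately naive:

```python
import re

# Illustrative stand-ins for DLP infoType detectors (deliberately simple).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{10}\b"),
}

def sanitize(prompt: str) -> str:
    """Replace detected PII with a typed placeholder before the prompt
    reaches a model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(sanitize("Reach me at jane@example.com or 9876543210"))
# Reach me at [EMAIL] or [PHONE]
```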

5. Agent-to-Agent (A2A) Communication

  • Configure remote agent connectivity using the A2A SDK.
  • Enable cross-agent collaboration and workflow orchestration.

Must-Have Skills

  • Advanced proficiency with Agent Development Kit (ADK).
  • Strong experience with Vertex AI Agent Engine.
  • Hands-on experience with Model Context Protocol (MCP).
  • Experience implementing Agent-to-Agent (A2A) workflows using the A2A SDK.
  • Expertise in Google Gen AI SDK for Python.
  • Experience building multimodal AI applications.
  • Proven experience implementing AI safety layers, including:
    • Model Armor
    • Cloud DLP API

Good-to-Have Skills (Foundation)

Data & Analytics

  • BigQuery optimization techniques, including:
    • Partitioning
    • Clustering
    • Denormalization for performance and cost optimization.

Streaming & Real-Time Pipelines

  • Experience building real-time data pipelines using:
    • Google Pub/Sub
    • BigQuery streaming pipelines
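For reference, the partitioning-plus-clustering pattern the listing mentions typically looks like the DDL below (table and column names are hypothetical): partition by the event date so queries scan only relevant days, and cluster by the column most often filtered on.

```python
# Illustrative BigQuery DDL held as a Python string; in practice it would
# be submitted via the google-cloud-bigquery client.
ddl = """
CREATE TABLE claims.events (
  event_id STRING,
  member_id STRING,
  event_ts TIMESTAMP,
  payload JSON
)
PARTITION BY DATE(event_ts)
CLUSTER BY member_id
"""

print(ddl.strip())
```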
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai
3 - 5 yrs
Up to ₹33L / yr (varies)
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)

We are seeking a Senior Machine Learning Engineer to support the development and deployment of advanced AI capabilities within the PHI ecosystem.

This role focuses on the execution of Generative AI tasks, including model integration and agent deployment. The candidate will be responsible for building RAG-based workflows and ensuring AI interactions remain grounded and accurate using Google Cloud AI tools.


Key Responsibilities

1. GenAI Integration

  • Develop and maintain integrations with Gemini 1.5 Pro and Flash models
  • Use the Google Gen AI SDK for Python to build and manage model integrations

2. Agent Deployment

  • Assist in deploying AI agents to Vertex AI Agent Engine
  • Work with the Agent Development Kit (ADK) for agent lifecycle management

3. RAG & Embeddings

  • Generate and manage text and multimodal embeddings
  • Support semantic search and Retrieval-Augmented Generation (RAG) pipelines
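At its core, semantic search ranks documents by embedding similarity. A toy sketch with hand-made 3-dimensional vectors (a real pipeline would call an embedding model, e.g. a Vertex AI text-embedding endpoint, and use a vector store instead of a dict):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy corpus with hand-made embeddings (hypothetical documents).
corpus = {
    "claims process": [0.9, 0.1, 0.0],
    "premium payment": [0.1, 0.9, 0.2],
    "network hospitals": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k corpus entries most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['claims process']
```

The retrieved passages are then prepended to the model prompt, which is what keeps RAG answers grounded in source documents.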

4. Testing & Quality

  • Run evaluation scripts to verify model output quality
  • Ensure models follow grounding and response accuracy guidelines

Must-Have Skills

  • Strong Python programming
  • Experience working with REST APIs
  • Hands-on experience with Vertex AI Studio
  • Experience working with Gemini APIs
  • Understanding of Agentic AI concepts
  • Familiarity with ADK CLI
  • Experience or understanding of RAG architecture
  • Knowledge of embedding generation

Good-to-Have Skills (Foundation):

BigQuery

  • Basic SQL knowledge
  • Experience with data loading
  • Ability to debug and troubleshoot queries

Data Streaming

  • Familiarity with Google Pub/Sub
  • Understanding of synthetic data generation

Visualization

  • Basic reporting and dashboards using Looker Studio
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 10 yrs
Up to ₹40L / yr (varies)
Python
RESTful APIs
Microservices

As a Backend Engineer, you will be a core member of the Platform Implementation Team, responsible for building the robust, scalable, and secure backend infrastructure for a multi-cloud enterprise Data & AI platform.


You will design and develop high-performance microservices, RESTful APIs, and event-driven architectures that serve as the backbone for enterprise-wide applications.

Working closely with Platform Engineers, Data Modelers, and UI teams, you will ensure seamless data flow between core business systems (CRM, ERP) and the platform, enabling the rollout of critical business services across multiple global Local Business Units (LBUs).



Backend Development

  • Design and develop scalable backend services and microservices
  • Build and maintain RESTful APIs for enterprise applications
  • Define and maintain API contracts using OpenAPI/Swagger

Platform & System Integration

  • Enable seamless integration between enterprise systems (CRM, ERP) and the platform
  • Support data flow across multiple global business units

Event-Driven Architecture

  • Implement asynchronous processing and event-driven systems
  • Work with message brokers and streaming platforms
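The fan-out idea behind these brokers can be sketched in a few lines. This in-memory stand-in is for illustration only; it has none of the durability, ordering, or delivery guarantees of Pub/Sub, Kafka, or RabbitMQ:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for a message broker: topics fan
    events out to every subscribed handler."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Synchronous fan-out; real brokers deliver asynchronously
        # and persist events for consumers that are offline.
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
received = []
broker.subscribe("order.created", received.append)
broker.publish("order.created", {"order_id": 42})
print(received)  # [{'order_id': 42}]
```

The payoff of the pattern is decoupling: publishers never know who consumes an event, so new services can subscribe without touching existing code.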

Cross-Functional Collaboration

  • Collaborate with platform engineers, data modelers, and frontend teams
  • Contribute to architecture discussions and backend design decisions

Must-Have Skills

Experience

  • 5–7 years of hands-on experience in backend software engineering
  • Experience building enterprise-grade backend systems

Core Programming

Strong proficiency in at least one backend language:

  • Python
  • Node.js
  • Java

Strong understanding of:

  • Object-oriented programming (OOP)
  • Functional programming principles

API & Microservices

  • Extensive experience building RESTful APIs
  • Experience designing microservices architectures
  • Ability to define API contracts using OpenAPI / Swagger

Cloud Infrastructure

Hands-on experience with cloud platforms:

  • Google Cloud Platform (GCP)
  • Microsoft Azure

Examples of services:

  • Cloud Functions
  • Cloud Run
  • Azure App Services

Database Management

Experience with both relational and NoSQL databases:

Relational:

  • PostgreSQL
  • Cloud SQL

NoSQL:

  • Schema design
  • Complex querying
  • Performance optimization

Event-Driven Architecture

Experience with asynchronous processing and message brokers:

  • GCP Pub/Sub
  • Apache Kafka
  • RabbitMQ

Security & Authentication

Strong understanding of:

  • OAuth 2.0
  • JWT authentication
  • Role-Based Access Control (RBAC)
  • Data encryption
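A JWT is just two base64url-encoded JSON segments plus an HMAC signature. A minimal HS256 sketch using only the standard library (production code would use a maintained JWT library and validate claims such as `exp` and `aud`):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "user-1", "role": "admin"}, b"secret")
print(verify_jwt(token, b"secret"))  # True
print(verify_jwt(token, b"wrong"))   # False
```

RBAC then becomes a check on the decoded `role` claim before the request handler runs.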

Software Engineering Best Practices

  • Writing clean, maintainable code
  • Version control using Git
  • Writing unit and integration tests
  • Familiarity with CI/CD pipelines
  • Containerization using Docker

Good-to-Have Skills

AI & LLM Integration

  • Experience integrating Generative AI models
  • Exposure to:
    • OpenAI
    • Vertex AI
    • LLM gateways
    • Retrieval-Augmented Generation (RAG)

Frontend Exposure

Basic familiarity with frontend frameworks such as:

  • React
  • Next.js
  • Angular

Understanding how backend APIs integrate with UI applications

Advanced Data Stores

Experience with:

  • Vector databases (Pinecone, Milvus)
  • Knowledge graphs

Domain Knowledge

  • Experience in Life Insurance or BFSI sector
  • Understanding of enterprise data governance and compliance standards
Quantiphi
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
Python
Generative AI
Microservices
RESTful APIs

We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.


The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.


Key Responsibilities:


Backend Development

  • Design and maintain high-performance backend services using Python and FastAPI
  • Implement advanced FastAPI features such as dependency injection, middleware, and async programming
  • Write comprehensive unit tests using pytest
  • Design and maintain Pydantic schemas

High-Concurrency Systems

  • Implement asynchronous code for high-volume request processing
  • Apply concurrency patterns and atomic operations to ensure efficient system performance
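A common shape for this in Python is `asyncio.gather` with a semaphore capping in-flight work so a burst of requests cannot overwhelm a downstream service. The sketch below simulates the awaited I/O:

```python
import asyncio

async def handle(request_id: int, limiter: asyncio.Semaphore) -> int:
    """Simulated I/O-bound handler; the semaphore caps concurrency."""
    async with limiter:
        await asyncio.sleep(0)  # stand-in for an awaited DB or API call
        return request_id * 2

async def main(n: int) -> list[int]:
    limiter = asyncio.Semaphore(10)  # at most 10 handlers run at once
    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(handle(i, limiter) for i in range(n)))

print(asyncio.run(main(5)))  # [0, 2, 4, 6, 8]
```

FastAPI runs `async def` endpoints on the same event loop, so this pattern carries over directly to high-volume request processing.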

Data & Storage

  • Optimize MongoDB operations
  • Implement Redis caching strategies (TTL, performance tuning, caching patterns)
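The TTL caching pattern (Redis `SETEX`/`GET`) can be illustrated with a dict-backed stand-in; a real service would use a Redis client, which also handles eviction and sharing across processes:

```python
import time

class TTLCache:
    """Dict-backed sketch of the Redis SETEX/GET pattern: every entry
    carries an expiry timestamp and reads past expiry miss."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on access
            return default
        return value

cache = TTLCache()
cache.set("session:1", {"user": "jane"}, ttl_seconds=60)
print(cache.get("session:1"))  # {'user': 'jane'}
```

Choosing the TTL is the tuning knob: short TTLs keep data fresh at the cost of more backend reads, long TTLs do the opposite.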

Distributed Systems

  • Implement rate limiting, retry logic, failover mechanisms, and region routing
  • Build microservices and event-driven architectures
  • Work with EventHub, Blob Storage, and Databricks
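Retry with exponential backoff, one of the patterns listed above, can be sketched as follows (delays shortened for illustration; a production version would add jitter and retry only on transient error types):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff; the
    final failure is re-raised so callers can trigger failover."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds, to exercise the retry path."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # ok, on the third attempt
```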

AI/ML Integration

  • Integrate OpenAI API, Gemini API, and Claude API
  • Manage LLM integrations using LiteLLM
  • Optimize AI service usage within the Azure ecosystem

Security

  • Implement JWT authentication
  • Manage API keys and encryption protocols
  • Implement PII masking and data security mechanisms

Collaboration

  • Work with cross-functional teams on architecture and system design
  • Contribute to engineering best practices and technical improvements
  • Mentor junior developers where required

Must-Have Skills & Requirements

Experience

  • 7+ years of hands-on Python backend development
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Experience building high-traffic, scalable systems

Core Technical Skills

Python

  • Advanced knowledge of asynchronous programming, concurrency, and atomic operations

FastAPI

  • Expert-level experience with dependency injection, middleware, and async code

Testing

  • Strong experience with pytest and Pydantic schemas

Databases

  • Hands-on experience with MongoDB and Redis
  • Strong understanding of caching patterns, TTL, and performance optimization

Distributed Systems

  • Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing

Microservices

  • Experience building microservices and event-driven systems
  • Exposure to EventHub, Blob Storage, and Databricks

Cloud

  • Strong experience working in Azure environments

AI Integration

  • Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM

Security

  • Implementation experience with JWT authentication, API keys, encryption, and PII masking

Soft Skills

  • Strong problem-solving and debugging skills
  • Excellent communication and collaboration
  • Ability to manage multiple priorities
  • Detail-oriented approach to code quality
  • Experience mentoring junior developers

Good-to-Have Skills

Containerization

  • Docker, Kubernetes (preferably within Azure)

DevOps

  • CI/CD pipelines and automated deployment

Monitoring & Observability

  • Experience with Grafana, distributed tracing, custom metrics

Industry Experience

  • Experience in Insurance, Financial Services, or regulated industries

Advanced AI/ML

  • Vector databases
  • Similarity search optimization
  • LangChain / LangSmith

Data Processing

  • Real-time data processing and event streaming

Database Expertise

  • PostgreSQL with vector extensions
  • Advanced Redis clustering

Multi-Cloud

  • Experience with AWS or GCP alongside Azure

Performance Optimization

  • Advanced caching strategies
  • Backend performance tuning
VegaStack
Posted by Careers VegaStack
Bengaluru (Bangalore)
0 - 0 yrs
₹10000 - ₹15000 / mo
Next.js
Python
Django
Tailwind CSS
TypeScript

Who We Are

We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.

What We Value

  • Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
  • High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.

Who we seek

We are looking for a Fullstack Developer Intern to join our Engineering team. You'll build and improve internal products in a hands-on internship focused on learning by shipping. Your goal will be to build highly responsive, innovative AI-based software solutions that meet our business needs.

We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.

🌏 Job Location: Bengaluru (Work From Office)

What You Will Be Doing

  • Build user-facing features using Next.js and TypeScript.
  • Convert designs into responsive UI using Tailwind CSS and reusable components.
  • Work with APIs to integrate frontend with backend services.
  • Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
  • Fix bugs, write clean code, and improve performance.
  • Collaborate in a PR-based workflow on GitHub.
  • Write and maintain documentation for the features you ship.
  • Learn and apply best practices: component structure, state management, error handling, accessibility basics.

What We’re Looking For

  • Basic to intermediate experience with JavaScript and Next.js.
  • Familiarity with TypeScript basics.
  • Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
  • Understanding of how APIs work and how to consume them from the frontend.
  • Strong Git knowledge.
  • Strong learning mindset, ownership, and attention to detail.

Benefits

  • Work directly with founders and the leadership team.
  • Drive projects that create real business impact, not busywork.
  • Gain practical skills that traditional education misses.
  • Experience rapid growth as you tackle meaningful challenges.
  • Fuel your career journey with continuous learning and advancement paths.
  • Thrive in a workplace where collaboration powers innovation daily.


Deqode
Posted by Purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Python
Large Language Models (LLM)
FastAPI
Windows Azure
CI/CD

👉 Job Title: Senior Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode: 5 days work from office

(Candidates serving notice period are preferred)


Role Summary

We are seeking a Senior Backend Developer with strong expertise in Python and FastAPI to build scalable, high-performance backend systems integrated with LLM technologies on Azure. The role involves designing distributed systems, optimizing data pipelines, and ensuring secure, enterprise-grade applications.


Key Responsibilities

  • Develop backend services using Python & FastAPI (async, middleware)
  • Build high-concurrency, scalable systems and microservices
  • Work with Azure services and event-driven architectures
  • Optimize MongoDB & Redis for performance
  • Integrate LLM APIs (OpenAI, Gemini, Claude)
  • Implement security (JWT, encryption, API management)

Mandatory Skills (Top 3)

  1. Strong Python backend development with FastAPI
  2. Hands-on experience with Microsoft Azure cloud
  3. Experience in building scalable distributed/microservices systems


Good to Have

  • Docker, Kubernetes, CI/CD
  • LLM frameworks (LangChain, vector DBs)
  • Monitoring tools and real-time data processing


Deqode
Posted by Purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Java
Python
Node.js
Windows Azure
Google Cloud Platform (GCP)

👉 Job Title: Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode: 5 days work from office

(Candidates serving notice period are preferred)


Role Summary

We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.


💫 Key Responsibilities

✅ Design and develop scalable microservices and RESTful APIs

✅ Build event-driven architectures for asynchronous processing

✅ Integrate backend systems with cloud platforms (GCP/Azure)

✅ Ensure secure, reliable, and optimized data handling

✅ Collaborate with cross-functional teams (UI, Data, Platform)

✅ Follow best practices in coding, testing, CI/CD, and containerization


💫 Mandatory Skills (Top 3)

✅ Strong backend programming experience (Python / Node.js / Java)

✅ Expertise in API development & Microservices architecture

✅ Hands-on experience with Cloud platforms (GCP or Azure)





TalentXO
Posted by Tabbasum Shaikh
Bengaluru (Bangalore)
4 - 7 yrs
₹34L - ₹40L / yr
Python
LLM
OpenAI
Gemini
RAG

Role & Responsibilities

As a Senior GenAI Engineer, you will own the AI layer of our product, building the features that make Zenskar intelligent. This is not a research role and not a prompt-engineering role. You will build production AI systems that enterprise clients depend on, which means reliability, observability, and rigorous evals matter as much as the AI capability itself. You own the full vertical: the model, the pipeline, and the UI.

  • Build and own CS Copilot — a real-time assistant for customer success teams, spanning STT pipelines, live transcription, and LLM-powered suggestions
  • Build LLM-powered document understanding features — extracting structured, reliable data from unstructured enterprise documents
  • Own AI feature UIs end-to-end — you build the interface, not just the model integration layer
  • Design and maintain an eval framework — define what 'working' means for each AI feature and catch regressions before users do
  • Drive model selection and integration decisions — choosing the right provider and approach for each use case, managing latency and cost
  • Own AI platform reliability — observability, fallback behaviour, and graceful degradation when models fail
  • Work closely with product, customer success, and the full-stack engineer — AI features only matter if they are usable and trusted by real users

THE IMPACT YOU'LL MAKE-

  • You will define what AI means at Zenskar — the features you ship will be the most visible and differentiated parts of the product
  • CS Copilot, if done well, changes how enterprise customer success teams operate every single day — this is a high-stakes, high-visibility surface
  • You will establish the engineering culture around AI reliability at Zenskar — evals, observability, and disciplined iteration
  • Your work will directly accelerate enterprise deals — AI features are increasingly a buying criterion for our clients
  • You will be the person who brings engineering rigour to a domain where most companies ship demos and call it a feature

Ideal Candidate

  • Strong Senior GenAI / AI Backend Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production
  • Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems
  • Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects
  • Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines
  • Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases
  • Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)
  • Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation
  • Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects
  • Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations
  • Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking
  • Mandatory (Company) – Product companies / startups, preferably Series A to Series D
  • Mandatory (Note) – Candidate's overall experience should not exceed 7 years
  • Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks
  • Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
  • Preferred (Skill) – Experience with fine-tuning (LoRA / QLoRA) or open-source model deployment (vLLM / Ollama)
  • Preferred (Frontend) – Basic ability to build or contribute to frontend (React or similar)
  • Highly Preferred (Education) – Candidates from Tier-1 institutes (IITs, BITS, NITs, IIITs, top global universities)
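For reference, the simplest chunking strategy behind the RAG requirement above is a fixed-size sliding window with overlap, so context is not lost at chunk boundaries (sizes here are toy values; production pipelines usually split on sentence or section boundaries instead):

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size sliding-window chunking: consecutive chunks share
    `overlap` characters so no boundary context is lost."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Hypothetical document text for illustration.
doc = "Zenskar invoices are generated per billing period and support usage-based pricing."
pieces = chunk(doc)
print(len(pieces), repr(pieces[0]))
```

Each chunk is then embedded and indexed; at query time the top-scoring chunks are retrieved and passed to the LLM as grounding context.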


Zeuron.AI
Posted by Kavitha Rajan
Bengaluru (Bangalore)
1 - 2 yrs
₹11L - ₹12L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Computer Vision
Flutter
Embedded C

Job Title: Software/Hardware Engineer (IIT/NIT)

Location: Bangalore

Website: https://www.zeuron.ai

Experience: 1 Year

CTC: ₹12 LPA


About the Company

Zeuron.ai is a Bangalore-based deep-tech startup founded in 2019, focused on building brain-inspired computing and AI-driven healthcare solutions. The company combines neuroscience, AI, and gaming to create innovative digital therapeutics and neurotechnology platforms for improving brain health, rehabilitation, and overall well-being.

About the Role

We are looking for a highly motivated Software/Hardware Engineer from premier institutes (IIT/NIT) with strong fundamentals and a passion for building scalable and efficient systems. This role offers an opportunity to work on cutting-edge technology and solve real-world problems.

 

Key Responsibilities

Design, develop, and optimize software/hardware solutions

Work on system architecture, debugging, and performance improvements

Collaborate with cross-functional teams (product, design, operations)

Participate in code reviews, testing, and deployment processes

Contribute to innovation and continuous improvement initiatives

 

Requirements

B.Tech/M.Tech from IITs/NITs (Computer Science, Electronics, Electrical, or related fields)

1 year of experience (internships/project experience considered)

Strong programming skills (C/C++/Python/Java) or hardware fundamentals (embedded systems, VLSI, circuit design)

Good understanding of data structures, algorithms, and system design

Problem-solving mindset with strong analytical skills


Preferred Skills

Experience with embedded systems, IoT, or product development

Knowledge of cloud platforms or system-level programming

Proficiency in computer vision, Flutter, JavaScript, and AI/ML

Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹8L / yr
databricks
ETL
PySpark
Apache Spark
CI/CD
+7 more

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV are mandatory


Job Description: -


* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).

* Develop scalable, high performance data solutions using Spark distributed processing.

* Lead engineering initiatives focused on automation, performance tuning, and platform modernization.

* Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.

* Collaborate with cross-functional teams to translate business needs into technical solutions.

* Ensure data quality, governance, and security across all processes.

* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.

* Participate in code reviews and develop reusable engineering frameworks.

* Ability to use AI tools to improve productivity and support daily engineering activities.

* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation.


Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.

* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).

* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).

* Strong proficiency in Python for data processing, automation, and framework development.

* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.

* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.

* Strong experience with CI/CD and Git-based development workflows.

* Proficiency in data modeling and ETL/ELT pipeline design.

* Experience with automation frameworks and scheduling tools.

* Solid understanding of distributed systems and big data concepts.

Ctruh
Posted by Ariba Khan
Bengaluru (Bangalore)
9 - 13 yrs
Up to ₹60L / yr (varies)
React.js
Node.js
Python
Machine Learning (ML)
Artificial Intelligence (AI)

About the Role:

Ctruh is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.


This is not a pure management role - expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy.


What You’ll Own:

1. Technical Leadership & Architecture

  • Architect Ctruh’s full-stack platform across frontend, backend, infrastructure, and AI.
  • Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.
  • Make decisions on stack, scalability patterns, architecture, and technical debt.
  • Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.
  • Lead architectural discussions, design reviews, and set engineering standards.

2. Hands-On Development

  • Write production-grade code across frontend, backend, APIs, and cloud infra.
  • Build critical features and core system components independently.
  • Debug complex systems and optimize performance end-to-end.
  • Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.
  • Build scalable backend services for large-scale asset processing and real-time pipelines.
  • Develop WebGL/Three.js rendering and AR workflows.

3. Team Building & Engineering Management

  • Hire and grow a team of 5–8 engineers initially (scaling to 15–20).
  • Establish engineering culture, values, and best practices.
  • Build career frameworks, performance systems, and growth plans.
  • Conduct 1:1s, mentor engineers, and drive continuous improvement.
  • Set up processes for agile execution, deployments, and incident response.

4. Product & Cross-Functional Collaboration

  • Work with the founder and product team on roadmap, feasibility, and prioritization.
  • Translate product requirements into technical execution plans.
  • Collaborate with design for UX quality and technical alignment.
  • Support sales and customer success with integrations and technical discussions.
  • Contribute technical inputs to product strategy and customer-facing initiatives.

5. Engineering Operations & Infrastructure

  • Own CI/CD, testing frameworks, deployments, and automation.
  • Create monitoring, logging, and alerting setups for reliability.
  • Manage AWS infrastructure with a focus on cost and performance.
  • Build internal tools, documentation, and developer workflows.
  • Ensure enterprise-grade security, compliance, and reliability.


Tech Stack:

1. Frontend: React.js, Next.js, TypeScript, WebGL, Three.js

2. Backend: Node.js, Python, Express/FastAPI, REST, GraphQL

3. AI/ML: PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines

4. 3D & Graphics: Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization

5. Databases: PostgreSQL, MongoDB, Redis, vector databases

6. Cloud & Infra: AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes; CI/CD: GitHub Actions; Monitoring: Datadog, Sentry


What We’re Looking For:

1. Must-Haves

  • 9+ years of engineering experience, with 3–4 years in technical leadership.
  • Deep full-stack experience with strong system design fundamentals.
  • Proven success building products from 0→1 in fast-paced environments.
  • Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
  • AI/ML deployment experience (CV, generative AI, 3D reconstruction).
  • Ability to design scalable architectures for high-performance systems.
  • Strong people leadership with experience hiring and mentoring teams.
  • Ready to code, review, design, and lead from the front.
  • Startup mindset: fast execution, problem-solving, ownership.


2. Highly Desirable

  • Strong 3D graphics/WebGL/Three.js knowledge.
  • Experience with real-time systems, rendering optimizations, or large-scale pipelines.
  • Background in B2B SaaS, XR, gaming, or immersive tech.
  • Experience scaling engineering teams from 5 → 20+.
  • Open-source contributions or technical content creation.
  • Experience working closely with founders or executive leadership.


Why Ctruh:

  • Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.
  • Build from day zero – architecture, team, and culture.
  • Path to CTO as the company scales.
  • High autonomy to drive technical decisions.
  • Direct founder collaboration on product vision.
  • High ownership, high-growth environment.
  • Backed by global leaders: Microsoft, Google, NVIDIA, AWS.

Location & Work Culture:

  • Location: HSR Layout, Bengaluru
  • Schedule: 6 days a week (5 days in office, Saturdays WFH)
  • Culture: High-intensity, high-integrity, engineering-first
  • Team: Young, ambitious, technically strong


The Ideal Candidate:

You're an engineer at heart and a leader by instinct. You love coding as much as architecting systems. You balance speed with quality, innovate fearlessly, and thrive in ambiguity.


You can:

  • Architect microservices in the morning
  • Review mission-critical PRs at noon
  • Build a Three.js shader in the afternoon
  • Run an engineering standup in the evening


You’ve experienced both the pain of poor architecture and the joy of elegant systems - and know how to build things that scale. If you geek out over AI/ML pipelines, 3D rendering, WebGL performance, or building engineering orgs from scratch, you’ll love Ctruh.

Read more
Hashone Career
Madhavan I
Posted by Madhavan I
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹28L / yr
SQL
Python
AtScale

Summary:

Data Engineer/Analytics Engineer with experience in semantic layer modeling using AtScale, building scalable data pipelines, and delivering high-performance analytics solutions on cloud platforms.




 Responsibilities

• Build and maintain ETL/ELT pipelines for large-scale data

• Develop semantic models, cubes, and metrics in AtScale

• Optimize query performance and BI dashboards

• Integrate data platforms (Snowflake, Databricks, BigQuery)

• Collaborate with analysts and business teams
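The pipeline-building responsibility above can be sketched in miniature with pandas; the table, column names, and the `total_revenue` metric are illustrative, not taken from this posting:

```python
import pandas as pd

# Raw fact rows as they might land from an ingestion job (toy data).
orders = pd.DataFrame({
    "region": ["APAC", "APAC", "EMEA", "EMEA", "EMEA"],
    "amount": [120.0, 80.0, 200.0, 50.0, 150.0],
})

# A minimal "transform" step: aggregate facts into a metric that a
# semantic layer (e.g. an AtScale model) could expose by region.
metrics = (
    orders.groupby("region", as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "total_revenue"})
)
print(metrics.to_dict("records"))
```

In a real stack the same aggregation would run in Spark, dbt, or the warehouse itself; the shape of the step is what matters.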




 Skills

• SQL, Python/Scala

• Data modeling (star schema, OLAP)

• AtScale (semantic layer)

• Spark, dbt, Airflow

• BI tools (Tableau, Power BI, Looker)

• AWS / GCP / Azure



 Experience

• 3–8+ years in data/analytics engineering

• Experience with enterprise data platforms and BI systems

Read more
Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Upto ₹45L / yr (Varies)
MLOps
Python
Databricks
Windows Azure
Amazon Web Services (AWS)

We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.

This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.


Responsibilities

  • Design, develop, and implement machine learning models and algorithms to solve complex business problems.
  • Collaborate with data scientists to transition models from research and development into production-ready systems.
  • Build and maintain scalable data pipelines for ML model training and inference using Databricks.
  • Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
  • Deploy and manage ML models in production environments on Azure, leveraging services such as:
  • Azure Machine Learning
  • Azure Kubernetes Service (AKS)
  • Azure Functions
  • Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
  • Ensure the reliability, performance, and scalability of ML systems in production.
  • Monitor model performance, detect model drift, and implement retraining strategies.
  • Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
  • Document model architecture, data flows, and operational procedures.
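The drift-monitoring responsibility above can be illustrated with a small, framework-free sketch using the Population Stability Index; the 0.1 and 0.25 thresholds are common rules of thumb, not values from this posting:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: a common drift score between a
    training-time feature distribution and a live one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor each bucket to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)
live_same = rng.normal(0.0, 1.0, 5000)      # no drift
live_shifted = rng.normal(0.8, 1.0, 5000)   # drifted

print(psi(train_scores, live_same))     # small: stable
print(psi(train_scores, live_shifted))  # large: retrain trigger
```

A production setup would compute this per feature on a schedule and alert (or kick off retraining) when the score crosses an agreed threshold.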

Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.

Experience

  • Minimum 3+ years of professional experience as an ML Engineer or in a similar role.

Required Skills

  • Strong proficiency in Python for data manipulation, machine learning, and scripting.
  • Hands-on experience with machine learning frameworks, such as:
  • Scikit-learn
  • TensorFlow
  • PyTorch
  • Keras
  • Demonstrated experience with MLflow for:
  • Experiment tracking
  • Model management
  • Model deployment
  • Proven experience working with Microsoft Azure cloud services, specifically:
  • Azure Machine Learning
  • Azure Databricks
  • Related compute and storage services
  • Solid experience with Databricks for:
  • Data processing
  • ETL pipelines
  • ML model development
  • Strong understanding of MLOps principles and practices, including:
  • CI/CD for ML
  • Model versioning
  • Model monitoring
  • Model retraining
  • Experience with containerization and orchestration technologies, including:
  • Docker
  • Kubernetes (especially AKS)
  • Familiarity with SQL and data warehousing concepts.
  • Experience working with large datasets and distributed computing frameworks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice-to-Have Skills

  • Experience with other cloud platforms (AWS or GCP).
  • Knowledge of big data technologies such as Apache Spark.
  • Experience with Azure DevOps for CI/CD pipelines.
  • Familiarity with real-time inference patterns and streaming data.
  • Understanding of Responsible AI principles, including fairness, explainability, and privacy.

Certifications (Preferred)

  • Microsoft Certified: Azure AI Engineer Associate
  • Databricks Certified Machine Learning Associate (or higher) 
Read more
Verse
Ravi K
Posted by Ravi K
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹20L / yr
Python
FastAPI
PostgreSQL
Neo4j
LangGraph

Founding Engineer (Bangalore)


The problem:

Business enterprises overpay vendors - on every batch of invoices, every month - because the data that would catch it lives in different systems. We are building an AI agent that processes invoices end-to-end, reasons across all the relevant sources, flags genuine discrepancies, and acts - without a human having to investigate each one.


What you will own

Everything engineering. Schema design to deployment to the 2am fix when something breaks in production. There is no tech lead above you. There is no platform team. There is the architecture, you, and the founders. Concretely, this means building:

  • A multi-stage agentic pipeline that takes a vendor invoice and produces a structured decision - fully autonomous for clear cases, escalating to human review for genuinely ambiguous ones. We use LangGraph, but if you've built equivalent systems with Temporal, Prefect, or custom state machines with LLM orchestration, that works
  • An LLM-powered extraction layer that handles real invoices - scanned PDFs, stamped documents, inconsistent layouts - and returns structured output
  • A graph data model that connects invoices to various sources and can traverse those relationships to detect discrepancies
  • ERP connectors, GST validation logic, and a write-back layer that closes the loop


What we need

  • Strong Python. Async FastAPI, clean service boundaries, tests that actually catch bugs. You have shipped Python backends that handled real production load
  • Solid Postgres. Complex queries, schema design, migrations without downtime, row-level security for multi-tenant data. pgvector is a plus - if not, you pick it up fast
  • LLM API experience in production. You have called an LLM API for something that real users depended on. You know about structured output, retry logic, cost management, prompt versioning. A side project counts if it was genuinely deployed
  • Comfort with graph data models. You understand when a graph is the right structure and when it is not. You do not need deep Neo4j production experience - you need to understand graph relationships conceptually and be willing to learn Cypher. It is a 2-day ramp for the right person
  • Working knowledge of deployment. Deployed and operated production workloads on GCP. Cloud Run, Cloud SQL, Cloud Storage, Redis — you're comfortable across the stack. If you've done it on AWS, the translation isn't hard, but GCP is where we are
  • You own things. Not "I contributed to" - you designed it, shipped it, and fixed it when it broke. That pattern needs to be visible in your history
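The "structured output, retry logic" expectation above boils down to a pattern like this sketch; the wrapper, exception, and stub names are hypothetical, and a real client (OpenAI, Gemini, etc.) would replace the stub:

```python
import json
import time

class BadOutput(Exception):
    pass

def call_with_retries(llm_call, prompt, schema_keys, max_tries=3, backoff=0.0):
    """Retry an LLM call until its output parses as JSON and contains the
    expected keys. `llm_call` is any callable returning a string."""
    last_err = None
    for attempt in range(max_tries):
        try:
            raw = llm_call(prompt)
            data = json.loads(raw)
            missing = [k for k in schema_keys if k not in data]
            if missing:
                raise BadOutput(f"missing keys: {missing}")
            return data
        except (json.JSONDecodeError, BadOutput) as err:
            last_err = err
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"gave up after {max_tries} tries: {last_err}")

# Stub that fails once, then returns valid structured output.
calls = {"n": 0}
def flaky_llm(prompt):
    calls["n"] += 1
    if calls["n"] == 1:
        return "not json at all"
    return json.dumps({"vendor": "Acme", "amount": 1200, "currency": "INR"})

result = call_with_retries(flaky_llm, "Extract invoice fields", ["vendor", "amount"])
print(result["vendor"])  # Acme
```

Prompt versioning and cost tracking would hang off the same wrapper (log the prompt hash, token counts, and attempt number per call).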


Good to have, not mandatory

  • Built an agentic pipeline with multiple stages
  • Any fintech, P2P domain experience - even tangential
  • Worked at a startup with under 20 people
  • Has a GitHub, blog, or writeup that shows how you think about a hard technical problem


What you get

  • The hardest engineering problem you would have worked on. This is not CRUD with an LLM bolted on
  • Real ownership. First engineering hire. Your architectural decisions will be in this product five years from now
  • Equity that matters. ESOP - Open to discussion. We are pre-seed - this is a bet, not a guarantee. We will not pretend otherwise
  • No meetings tax. You work directly with the founders. The product is specified clearly. You know what you are building and why


Honest about stage: We do not have production-ready infra yet. We have a complete architecture specification and a working prototype. If you need the stability of an established engineering org, this is not the right moment. If you want to build something real from zero and own a meaningful piece of it, it is.


The founders

One of us has spent 20 years building revenue and operational engines at companies where there was no playbook - part of the pilot team that established the world's largest search company's direct sales operations in India, managed global operations for a global mobile advertising platform, scaled a B2C platform to become one of India’s leading edtech platforms and most recently worked on building an enterprise Agentic Voice AI platform. The other has spent 15 years taking AI from demo to production in domains where failure is expensive - voice, lending, and conversational systems across a Series D conversational AI company, a major telco, a Big 4, and a leading NBFC.


Two IIT/IIM alumni who have both watched AI work in enterprise, and know exactly what it takes to get it there. We are not building this product because it sounds interesting. We are building it because we have both sat across the table from CFOs who know they are losing margin and have no tool capable of doing anything about it.

Read more
Improving
Rohini Jadhav
Posted by Rohini Jadhav
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹35L / yr
Python
Kubernetes
Jenkins
CI/CD
Docker

What are we looking for??

  1. You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  2. You are able to manage multi region clusters for disaster recovery.
  3. You have a good understanding of AWS stack.
  4. You have experience of production level in Kubernetes. 
  5. You are comfortable coding/programming and can do so whenever required. 
  6. You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
  7. You love automating things, sometimes even things that seem impossible to automate - for example, one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
  8. You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics - because you know that talking about them is easy, but building such a system in practice takes a fair amount of work.
  9. You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
  10. You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  11. You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.

What you will be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions in this space are continuously evolving, but fundamentally you will be solving problems with the simplest, most scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open source projects, understand their capabilities, limitations and apply the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.
Read more
Bengaluru (Bangalore)
2 - 4 yrs
₹21L - ₹28L / yr
Artificial Intelligence (AI)
Python

Strong Junior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 1.5+ years of full-time experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)

Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)

Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.

Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)

Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems

Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)

Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)

Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem

Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)

Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be skipped if from top-notch product companies

Mandatory (Exclusion) - Avoid candidates who are only prompt engineers, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.
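The RAG basics named above (embeddings, vector DBs, retrieval) reduce to something like this sketch; the documents and vectors are toy stand-ins for a real embedding model and vector database:

```python
import numpy as np

# Toy "embeddings": in a real RAG stack these come from an embedding model
# and live in a vector DB; here they are hand-made 3-d vectors.
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.1]),
    "api reference": np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, k=1):
    """Core of retrieval: rank stored chunks by similarity to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

query = np.array([0.85, 0.15, 0.05])  # pretend-embedded "how do I get my money back?"
print(retrieve(query))  # ['refund policy']
```

The retrieved chunks would then be stuffed into the LLM prompt as context - that is the whole "augmented generation" half of RAG.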

Read more
Talent Pro
Bengaluru (Bangalore)
4 - 7 yrs
₹37L - ₹48L / yr
Artificial Intelligence (AI)
Python

Strong Senior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production

Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems

Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects

Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines

Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases

Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)

Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation

Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects

Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations

Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking

Mandatory (Company) – Product companies / startups, preferably Series A to Series D

Mandatory (Note) - Candidate's overall experience should not be more than 7 years

Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks

Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience

Read more
AI-powered content creation and automation platform

Agency job
via Uplers by Shrishti Singh
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹28L / yr
Python
Node.js
TypeScript
Artificial Intelligence (AI)
Generative AI

Software Engineer

Onsite - HSR Bangalore

6 Days work from Office (Flexible working hours)


Product is a PowerPoint AI assistant used by consulting companies and Fortune 500 teams. A typical professional spends 1 to 3 hours creating one slide. With it, they create a v1 of their entire deck in 10 minutes and make changes like “turn this table to a chart” in seconds, directly within PowerPoint.

In the next 2 years, our goal is to forever change the way business presentations are made.


Who are we?

  • small, strong team of 5
  • founders are CS graduates from IIT Kharagpur with a specialisation in AI
  • work 6 days a week from our office in HSR Layout in Bangalore
  • funded by Y Combinator and other amazing investors
  • used by consulting companies and Fortune 500 teams


Your responsibilities (in order)

  • Design, implement, test, and deploy full features
  • Design and implement a robust infrastructure to enable rapid development and automated testing
  • Look at usage data to iterate on features


What we’re looking for

  • Undergraduate or master's in Computer Science or equivalent degree
  • 2+ years of backend or DevOps software engineering experience
  • Experience with TypeScript (JavaScript) or Python


You’ll be a good fit if

  • You want to work on a product that can change the way a very large number of people work
  • The chaos of high growth and things breaking is exciting to you
  • You are a workaholic, looking to upskill faster than most people think is possible. This role is not a good fit for you if you’re looking to prioritise work-life balance.
  • You prefer working in-person with other smart people who are excited and passionate about what they’re building
  • You love solving very hard problems at a rapid pace. We discuss timelines in days or weeks, so you’ll constantly be expected to ship really high-quality work.



Perks

  • Comprehensive health insurance for you and dependents
  • Workstation enhancements
  • Subscriptions to AI tools such as Cursor, ChatGPT, etc.

(If there's anything else we can do to make your work more enjoyable, just ask)


If you are interested in proceeding, we would be happy to move your profile to the next stage of the evaluation process.

Kindly share the following details to help us take this forward :


  • Current CTC (Fixed + Variable):
  • Expected CTC:
  • Notice Period (If currently serving, please mention your Last Working Day)
  • Details of any active offers in hand (if applicable)
  • Expected/Available Date of Joining (if applicable)
  • Attach Updated CV:
  • Attach Github Link / Leet code link or other:
  • Current Location:
  • Preferred Location:
  • Reason for job Change:
  • Reason for relocation (if applicable):
  • Are you comfortable with 6 days WFO (flexible working hours)? (Yes / No):

Read more
Oil and Gas Industry (petroleum refinery)

Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹25L / yr
Python
MLOps
Machine Learning (ML)
API
CI/CD

🔹 Role: Python Engineer – Python & MLOps

📍 Location: Bellandur, Bangalore

🕐 Work Timings: 01:30 PM – 10:30 PM

🏢 Work Mode: Monday (WFH), Tuesday–Friday (WFO)

📅 Experience: 8-12 Years (Ideal: 8-10 Years)

🔹 Role Overview

This role focuses on building and maintaining a production-grade AI/ML platform. You will work on scalable Python systems, MLOps pipelines, APIs, and CI/CD workflows in an enterprise environment.

🔹 Key Responsibilities

✔ Develop production-grade Python applications using OOP principles

✔ Build and enhance MLOps pipelines (training, validation, deployment)

✔ Design and optimize REST APIs with OpenAPI/Swagger

✔ Implement async programming for high-performance systems

✔ Work on CI/CD pipelines (Azure Pipelines / GitHub Actions)

✔ Ensure clean, testable, and maintainable code (PyTest, TDD)
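The async-programming item above boils down to patterns like this; the `fetch` coroutine is a hypothetical stand-in for any I/O-bound call (DB query, HTTP request, model inference):

```python
import asyncio

async def fetch(source: str, delay: float) -> str:
    # Stand-in for an I/O-bound call; asyncio.sleep yields control
    # so other coroutines run in the meantime.
    await asyncio.sleep(delay)
    return f"{source}: ok"

async def gather_all():
    # Concurrency via gather: total wall time is roughly the slowest
    # call, not the sum of all delays.
    return await asyncio.gather(
        fetch("features", 0.05),
        fetch("model", 0.03),
        fetch("audit-log", 0.01),
    )

results = asyncio.run(gather_all())
print(results)  # ['features: ok', 'model: ok', 'audit-log: ok']
```

The same shape appears in async FastAPI endpoints, where `await`-ing several backends concurrently keeps request latency close to the slowest dependency.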

🔹 Required Skills

✔ Strong Python (OOP, modular design)

✔ MLOps & CI/CD pipeline experience

✔ REST API development

✔ Async programming (async/await, concurrency)

✔ Pandas / Polars & Scikit-learn

✔ JSON Schema–driven development

✔ Testing using PyTest

🔹 Nice to Have

➕ Azure ML SDK

➕ Pydantic

➕ Azure Cosmos DB

➕ Experience with large enterprise platforms

Read more
enParadigm

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Upto ₹16L / yr (Varies)
Java
Python
Node.js
Go (Golang)
PHP

We are looking for a Full Stack Developer to build scalable software solutions and contribute across the entire software development lifecycle—from conception to deployment.

You will work closely with cross-functional teams and should be comfortable with both front-end and back-end technologies, modern frameworks, and third-party libraries. If you enjoy building visually appealing, functional applications and thrive in Agile environments, we’d love to connect.


Current Technologies Used

  • Backend: FastAPI (active), PHP (legacy), Java (legacy)
  • Frontend: Svelte, TypeScript, JavaScript

Experience with Python and PHP is a plus, but not mandatory.


Role Responsibilities

  • Collaborate with development teams and product managers to ideate software solutions
  • Design client-side and server-side architecture
  • Build visually appealing front-end applications
  • Develop and manage efficient databases and applications
  • Write effective and scalable APIs
  • Test software for responsiveness and performance
  • Troubleshoot, debug, and upgrade systems
  • Implement security and data-protection measures
  • Build mobile-responsive features and applications
  • Create and maintain technical documentation

Candidate Requirements:


Education

  • B.Tech / BE in Computer Science, Statistics, or a relevant field

Experience

  • 2–4 years as a Full Stack Developer or in a similar role

Location

  • Bangalore (Hybrid)

Skill Set – Role Based

  • Experience building web applications
  • Familiarity with common technology stacks
  • Knowledge of front-end languages and libraries:
  • HTML, CSS, JavaScript, XML, jQuery
  • Knowledge of back-end languages and frameworks:
  • Java, Python, PHP
  • Angular, React, Svelte, Node.js
  • Familiarity with:
  • Databases: PostgreSQL, MySQL, MongoDB
  • Web servers: Apache
  • UI/UX principles

Skill Set – Behavioural

  • Excellent communication and teamwork skills
  • Strong attention to detail
  • Good organizational skills
  • Analytical mindset


Read more
enParadigm

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Upto ₹8L / yr (Varies)
Java
Python
Selenium WebDriver
Cypress
Playwright

Job Description:


Test Design & Execution

Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.


Automation Development

Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.


Defect Management

Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.


API & Backend Testing

Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.


Collaboration

Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.


CI/CD Integration

Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.


Required Skills & Experience

  • Minimum 2+ years of experience in Software Quality Assurance or Automation Testing.
  • Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
  • Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
  • Strong experience in functional, regression, integration, and UI testing.
  • Solid understanding of SQL for data validation and backend testing.
  • Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.
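A rough sketch of the data-driven test design described above; the `validate_discount` function and its case table are invented for illustration (a real suite would use pytest parametrization against the actual system under test):

```python
# A minimal data-driven harness in the spirit of the functional /
# regression testing described above.

def validate_discount(price: float, code: str) -> float:
    """Hypothetical system under test: apply a discount code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    if code not in rates:
        raise ValueError("unknown code")
    return round(price * (1 - rates[code]), 2)

# Each row is one test case: (inputs, expected outcome).
CASES = [
    ((100.0, "SAVE10"), 90.0),
    ((80.0, "SAVE25"), 60.0),
    ((50.0, "BOGUS"), ValueError),   # negative / defect path
]

def run_cases():
    failures = []
    for (price, code), expected in CASES:
        try:
            got = validate_discount(price, code)
            ok = got == expected
        except Exception as err:
            ok = isinstance(expected, type) and isinstance(err, expected)
            got = type(err).__name__
        if not ok:
            failures.append((price, code, expected, got))
    return failures

print(run_cases())  # [] -> all cases pass
```

Keeping expected failures (like the unknown-code row) in the same table is what makes regression runs catch both broken features and broken error handling.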


Desirable Skills

  • Experience in mobile application testing (Android/iOS).
  • Exposure to performance testing tools such as JMeter.
  • Experience working with cloud platforms like AWS or Azure.


Read more
Mid Size Product Engineering Services Company

Agency job
via Vidpro Consultancy Services by Vidyadhar Reddy
Remote, Bengaluru (Bangalore), Chennai, Hyderabad
20 - 26 yrs
₹65L - ₹120L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript

This role will report to the Chief Technology Officer


You Will Be Responsible For


* Driving decision-making on enterprise architecture and component-level software design to our software platforms' timely build and delivery.

* Leading a team in building a high-performing and scalable SaaS product.

* Conducting code reviews to maintain code quality and follow best practices

* DevOps practice development on promoting automation, including asset creation, enterprise strategy definition, and training teams

* Developing and building microservices leveraging cloud services

* Working on application security aspects

* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.

* Creating a culture of innovation that enables the continued growth of individuals and the company

* Working closely with Product and Business teams to build winning solutions

* Leading talent management, including hiring, developing, and retaining a world-class team


Ideal Profile


* You possess a Degree in Engineering or a related field and have at least 20+ years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience building a SaaS / Fintech platform.

* Proficiency in MERN / Java / Full Stack.

* Led a team in optimizing the performance and scalability of a product

* You have extensive experience with DevOps environment and CI/CD practices and can train teams.

* You're a hands-on leader, visionary, and problem solver with a passion for excellence.

* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.


What's on Offer?


* Exciting opportunity to drive the Engineering efforts of a reputed organisation

* Work alongside & learn from best in class talent

* Competitive compensation + ESOPs

Read more
Mercari, Inc

Ashwin S
Posted by Ashwin S
Bengaluru (Bangalore)
6 - 9 yrs
Best in industry
Machine Learning (ML)
PyTorch
TensorFlow
NumPy
Python

Introduction

About Us:


Mercari is a Japan-based C2C marketplace company founded in 2013 with the mission to “Create value in a global marketplace where anyone can buy & sell.” From being the first tech unicorn from Japan before our IPO in 2018, we have come a long way towards becoming a global player, and we continue to work diligently on our transformation journey with a strong focus on our mission.

Since its inception, Mercari Group has worked to grow its services, investing in both our people and technology. Over time Mercari has expanded from being the top player in the C2C marketplace in Japan to new geographies like the U.S. We have also successfully launched new businesses such as Merpay, which is a mobile payment service platform with a vision to create a society where anyone can realize their dreams through a new ecosystem centered not only on payment service but also on credit. Today, Mercari Group is made up of multiple subsidiary businesses including logistics, B2C platform, blockchain, and sports team management.


For our services to be utilized by people worldwide, however, there is still a mountain of work ahead of us. This endeavor naturally requires the best talent and minds, and that is exactly why we launched the India Center of Excellence. With your help, we will continue to take on the world stage and strive to grow into a successful global tech company.


Our Culture:

To achieve our mission at Mercari, our organization and each of our employees share the same values and perspectives. Our individual guidelines for action are defined by our four values: Go Bold, All for One, Be a Pro and Move Fast. Our organization is also shaped by our four foundations: Sustainability, Diversity & Inclusion, Trust & Openness, and Well-being for Performance. Regardless of how big Mercari gets, the culture will remain essential to achieving our mission and something we want to preserve throughout our organization. We invite you to read the Mercari Culture Doc which summarizes the behaviors and mindset shared by Mercari and its employees. We continue to build an environment where all of our members of diverse backgrounds are accepted and recognized, and where they can thrive while holding dear to Mercari’s culture.


Work Responsibilities

  • Machine learning engineers in the Recommendation domain develop the functions and services of the Mercari marketplace app by building and maintaining machine learning systems, such as recommender systems, while leveraging the necessary infrastructure and companywide platform tools.
  • Mercari is actively applying advanced machine learning technology to provide a more convenient, safer, and more enjoyable marketplace. Machine learning engineers use the cloud and Kubernetes to operate and improve machine learning systems.


Bold Challenges

  • We are looking for people who are interested in our services, mission, and values, and want to work where engineers can go bold, use the latest technology, make autonomous decisions, and take on challenges at a rapid pace.
  • Develop and optimize machine learning algorithms and models to enhance recommendation system to improve discovery experience of users
  • Collaborate with cross-functional teams and product stakeholders to gather requirements, design solutions, and implement features that improve user engagement
  • Conduct data analysis and experimentation with large-scale data sets to identify patterns, trends, and insights that drive the refinement of recommendation algorithms
  • Utilize machine learning frameworks and libraries to deploy scalable and efficient recommendation solutions.
  • Monitor system performance and conduct A/B testing to evaluate the effectiveness of features.
  • Continuously research and stay updated on advancements in AI/machine learning techniques and recommend innovative approaches to enhance recommendation capabilities.
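The recommender work described above can be shown in miniature with item-based collaborative filtering; the interaction matrix is toy data, and production systems use large sparse matrices or learned embeddings:

```python
import numpy as np

# Toy user-item interaction matrix (rows: users, cols: items).
interactions = np.array([
    [1, 1, 0, 0],   # user 0 liked items 0, 1
    [1, 1, 1, 0],   # user 1 liked items 0, 1, 2
    [0, 0, 1, 1],   # user 2 liked items 2, 3
])

def recommend(user: int, k: int = 1):
    """Item-based collaborative filtering in miniature: score unseen items
    by co-occurrence with items this user already interacted with."""
    co = interactions.T @ interactions          # item-item co-occurrence
    np.fill_diagonal(co, 0)                     # ignore self-similarity
    scores = co @ interactions[user]            # aggregate over user's items
    scores[interactions[user] > 0] = -1         # mask already-seen items
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))  # [2] -- item 2 co-occurs with items 0 and 1 via user 1
```

An A/B test of such a change would then compare engagement metrics between users served the old and new scoring, which is the evaluation loop the posting describes.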


Minimum Requirements:

  • 5–9 years of professional experience in end-to-end development of large-scale ML systems in production
  • Demonstrated experience developing and delivering end-to-end machine learning solutions - from experimentation to model deployment - including backend engineering and MLOps, in large-scale production systems
  • Experience using common machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, NumPy, pandas)
  • Deep understanding of machine learning and software engineering fundamentals
  • Basic knowledge of monitoring, logging, and common operations in production environments
  • Communication skills to carry out projects in collaboration with multiple teams and stakeholders


Preferred skills:

  • Experience developing Recommender systems utilizing large-scale data sets
  • Basic knowledge of enterprise search systems and related stacks (e.g. ELK)
  • Feature-development and bug-fixing skills needed to improve system performance and reliability
  • Experience with containerization technologies such as Docker and Kubernetes
  • Experience with cloud platforms (AWS, GCP, Microsoft Azure, etc.)
  • Microservice development and operation experience with Docker and Kubernetes
  • Experience using deep learning models/LLMs in production
  • Experience in publications at top-tier peer-reviewed conferences or journals


Employment Status

Full-time

Office

Bangalore

Hybrid workstyle

  • We believe in high performance and professionalism. We work from the office 2 days/week and from home 3 days/week
  • To build a strong & highly-engaged organization in India, we highly encourage everyone to work from our Bangalore office, especially during the initial office setup phase
  • We will continue to review and update the policy to address future organizational needs

Work Hours

  • Full flextime (no core time)

*Working hours are flexible outside of common team meetings

Media


Owned Media

  • Mercari Engineering Portal
  • AI at Mercari portal
  • Mercan - Introduces the people that make Mercari
  • Mercari US Blog

Related Articles

  • Development Platforms and Platformers: On Rising to the Global Standard - Ken Wakasa, Mercari CTO | mercan
  • “I'm Not a Talented Engineer” Insists the Member-Turned-Manager Revamping Our Internal CS Tool | mercan
  • Personalize to globalize: How Mercari is reshaping their app, their company, and the world | mercan
  • The Providers of the Safe and Secure Mercari Experience: The TnS Team, Introduced by Its Members! | mercan
Searce Inc

at Searce Inc

3 recruiters
Srishti Dani
Posted by Srishti Dani
Mumbai, Pune, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data migration
Datawarehousing
ETL
SQL
Google Cloud Platform (GCP)
+7 more

Lead Data Engineer


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

What you will wake up to solve.

  • Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
  • Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
  • Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
  • Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
  • Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
  • Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
  • Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
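The ingestion frameworks described above hinge on idempotent loads: replaying a batch must not duplicate rows in the warehouse. A minimal sketch of the upsert pattern, using SQLite as a stand-in for BigQuery/Snowflake/Redshift (the table and keys are made up for illustration):

```python
import sqlite3

# Stand-in warehouse; in practice this would be BigQuery, Snowflake, or Redshift.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL, updated_at TEXT)"
)

def upsert_batch(conn, rows):
    """Idempotent load: re-running the same batch leaves one row per key."""
    conn.executemany(
        """INSERT INTO orders (order_id, amount, updated_at)
           VALUES (?, ?, ?)
           ON CONFLICT(order_id) DO UPDATE SET
             amount = excluded.amount,
             updated_at = excluded.updated_at""",
        rows,
    )
    conn.commit()

batch = [("o-1", 42.0, "2024-01-01"), ("o-2", 13.5, "2024-01-01")]
upsert_batch(conn, batch)
upsert_batch(conn, batch)  # replaying the batch is safe: still 2 rows
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # → 2
```

Cloud warehouses express the same idea as `MERGE` statements; the point is that pipeline retries never corrupt the target.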


Welcome to Searce


The AI-Native tech consultancy that's rewriting the rules.

Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads. 


Functional Skills 

the solver personas.

  • The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
  • The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
  • The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
  • The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
  • The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.


Experience & Relevance 

  • Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
  • Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
  • AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
  • Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
  • Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.



Searce Inc

at Searce Inc

3 recruiters
Jatin Gereja
Posted by Jatin Gereja
Bengaluru (Bangalore), Mumbai, Pune
10 - 18 yrs
Best in industry
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Enterprise Data Warehouse (EDW)
Data modeling
Big Data
+9 more

Director - Data engineering


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

What you will wake up to solve.

1. Delivery & Tactical Rigor

  • Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
  • Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
  • Technical Escalation (Execution & Technical Resolution): Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
  • Quality Oversight (Quality Enforcement): Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.

2. Strategic Growth & Practice Scaling

  • Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
  • Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
  • Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.

3. Leadership & Unit Management

  • Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
  • Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
  • Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.


Welcome to Searce

The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.

We don’t do traditional.

As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.


Functional Skills

1. Delivery Management & Operational Excellence

  • Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
  • Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
  • SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
  • Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.

2. Architectural Implementation & Technical Oversight

  • Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
  • Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
  • Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
  • DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.

3. Unit Management & Commercial Execution

  • Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
  • Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
  • Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
  • Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.

Tech Superpowers

  • Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
  • End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
  • Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
  • Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
  • AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
  • Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
  • Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
  • Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. Business-first, data-second, outcome focused technology leader.

Experience & Relevance

  • Executive Experience: Minimum 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
  • Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
  • Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
  • Operational Leadership: Proven expertise in managing and scaling large professional services organizations, demonstrated ability to optimize utilization, resource allocation, and operational expense.
  • Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
  • Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.

Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.

Background is in Oil&Gas


Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹30L / yr
Apache Spark
databricks
Delta lake
CI/CD
skill iconPython
+5 more

Role: Sr. Azure Data Engineer

Experience: 8–10 Years

Work Timings: 1:30 PM – 10:30 PM IST

Location: Bellandur, Bengaluru (Work from Office)

Company: Chevron

Employment Type: 6–12 months Contract

 

Role Overview

We are seeking an experienced Senior Data Engineer to design and deliver scalable cloud data solutions on Azure. The ideal candidate will have strong expertise in Databricks, PySpark, and modern data architectures, with exposure to energy domain standards like OSDU.

Key Responsibilities

  • Architect and design robust Azure-based data solutions using Databricks, ADLS, and PaaS services
  • Define and implement scalable data Lakehouse architectures aligned with OSDU standards
  • Build and manage end-to-end data pipelines for batch and real-time processing using PySpark
  • Establish data governance frameworks including metadata, lineage, security, and access control
  • Implement DevOps best practices (CI/CD, Azure Pipelines, GitHub, automated deployments)
  • Collaborate with stakeholders to translate business needs into technical solutions
  • Develop and maintain architecture documentation, solution patterns, and standards
  • Provide technical leadership and mentorship to engineering teams
  • Optimize solutions for performance, cost, reliability, and security
  • Ensure alignment with enterprise architecture and compliance standards
  • Drive adoption of modular and reusable cloud data components
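The governance bullet above (metadata, lineage, quality) typically translates into automated checks that gate each pipeline run. A toy sketch in plain Python (the field names and rules are illustrative, not Chevron's; real stacks would use Databricks expectations or a framework like Great Expectations):

```python
from datetime import date, timedelta

def run_quality_checks(rows, max_staleness_days=2):
    """Toy data-quality gate: completeness, uniqueness, freshness."""
    failures = []
    # Completeness: required fields must be present and non-null.
    if any(r.get("well_id") is None for r in rows):
        failures.append("completeness: null well_id")
    # Uniqueness: the primary key must not repeat within the batch.
    ids = [r["well_id"] for r in rows if r.get("well_id") is not None]
    if len(ids) != len(set(ids)):
        failures.append("uniqueness: duplicate well_id")
    # Freshness: the newest record must be recent enough.
    newest = max(date.fromisoformat(r["loaded_on"]) for r in rows)
    if date.today() - newest > timedelta(days=max_staleness_days):
        failures.append("freshness: data is stale")
    return failures

rows = [
    {"well_id": "W-100", "loaded_on": date.today().isoformat()},
    {"well_id": "W-100", "loaded_on": date.today().isoformat()},  # duplicate key
]
print(run_quality_checks(rows))  # → ['uniqueness: duplicate well_id']
```

A pipeline would fail fast (or quarantine the batch) whenever the returned list is non-empty.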

Required Skills & Qualifications

Core Technical Skills

  • Azure Databricks, Apache Spark (PySpark), Delta Lake, Unity Catalog
  • Azure Data Lake Storage (ADLS), Azure Data Factory, Synapse Analytics
  • Strong experience in Python-based data engineering
  • Data pipeline development (batch + real-time)

Architecture & Advanced Skills

  • Data Lakehouse architecture and distributed systems
  • Microservices, APIs, and integration frameworks
  • OSDU (Open Subsurface Data Universe) or similar energy data models

DevOps & Tools

  • CI/CD tools: Azure Pipelines, GitHub Actions
  • Infrastructure as Code: Terraform or similar

Other Skills

  • Data governance, security, compliance, and cost optimization
  • Strong analytical and problem-solving skills
  • Excellent communication and stakeholder management


Srijan Technologies

at Srijan Technologies

6 recruiters
Devendra Singh
Posted by Devendra Singh
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹15L - ₹26L / yr
skill iconPython
skill iconReact.js
Generative AI (GenAI)

About US:-

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.


We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.


Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.

 

Experience Range: 4-8 Years

Role: Full Stack Developer


Duties: 

As Full Stack Engineer, you will work in small teams in a highly collaborative way, use the latest technologies and enjoy seeing the direct impact from your work. Our highly skilled system architects and development managers configure software packages and build custom applications, creating the foundation for rapid and cost-effective implementation of systems that maximize value from day one. Our development teams are small, flexible and employ agile methodologies to quickly provide our consultants with the solutions they need. We combine the latest open source technologies together with traditional Enterprise software products. 

 

The Role: 

 

We create both rapid prototypes, usually in 2 to 3 weeks, as well as full-scale applications typically within 2 to 3 months, by working collaboratively and iteratively through design and development to deliver fully functioning web-based and mobile applications that meet business goals. Our Front-End Developers contribute to the architecture across the technology stack, from database to native apps. 


Skills: 

Minimum of 5–9 years of experience, with a proven record of hands-on software development in at least one of the following languages: Java, C#, C/C++, Python, JavaScript, Ruby, plus modern frontend proficiency in React and TypeScript. Demonstrated ownership of delivering end-to-end solutions (from design through production support), with strong proactivity in identifying opportunities, anticipating risks, and driving improvements without waiting for direction. 

Significant experience designing, implementing, and operating Web Services and APIs (REST, SOAP, RPC, RMI) including API monitoring/observability and performance tuning. Solid understanding of network communication protocols (HTTP, TCP/IP, UDP, SMTP, DNS) and distributed system behaviors. 

Capable of applying best coding practices, design patterns, and evaluating tradeoffs in complex, microservices-based architectures. Well versed in cloud computing (AWS), automated testing, CI/CD, and DevOps tooling; comfortable owning reliability, scalability, and operational excellence. Bonus: hands-on knowledge of Terraform (infrastructure as code). 

Experience with relational data stores (MySQL, SQL Server, Oracle) and non-relational technologies, with strong proficiency in MongoDB (schema design, indexing, performance optimization), plus exposure to Elasticsearch, Cassandra, and related ecosystems. Strong professional experience with frameworks such as Node.js, AngularJS, Spring, Guice, and expertise building mobile, responsive/adaptive applications. 

First-hand understanding of Agile development methodologies, with a commitment to engineering excellence (e.g., DRY, TDD, CI) and pragmatic delivery. 


Non-Technical: First and foremost, passionate about technology, especially AI and emerging/disruptive technologies, and excited about translating innovation into real product impact. Strong command of English (verbal and written), excellent interpersonal skills, and a highly collaborative mindset, able to partner effectively across engineering, product, design, and stakeholders. Sound problem-solving ability to quickly process complex information and communicate it clearly and simply. Demonstrated leadership/mentorship, accountability, and a self-starter attitude suited to environments that foster entrepreneurial thinking. 


 What We Offer 

  •  Professional development and mentorship.
  •  Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified 6 times in a row).
  •  Health and family insurance.
  •  40+ leaves per year, along with maternity & paternity leaves.
  •  Wellness, meditation, and counselling sessions.


NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
4 - 10 yrs
₹10L - ₹30L / yr
skill iconPython
SQL
Spark
skill iconAmazon Web Services (AWS)
Amazon S3
+13 more

Job Title : AWS Data Engineer

Experience : 4+ Years

Location : Bengaluru (HSR – Hybrid, 3 Days WFO)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.


🔥 Mandatory Skills :

Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security


🚀 Key Responsibilities :

  • Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
  • Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
  • Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
  • Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
  • Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
  • Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
  • Collaborate with data analysts and data scientists to deliver actionable insights
  • Work in an Agile environment to deliver high-quality data solutions
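Data lakes on S3 of the kind described above usually lay files out under Hive-style `key=value` prefixes so that Athena and Glue can prune partitions at query time. A small sketch (the bucket and column names are made up for illustration):

```python
from datetime import date

def partition_key(prefix: str, event_date: date, region: str) -> str:
    """Build a Hive-style partition path, e.g. for Parquet files on S3,
    so engines like Athena can skip irrelevant partitions in queries."""
    return (
        f"{prefix}/"
        f"dt={event_date.isoformat()}/"
        f"region={region}/"
    )

key = partition_key("s3://example-lake/sales", date(2024, 3, 1), "apac")
print(key)  # → s3://example-lake/sales/dt=2024-03-01/region=apac/
```

Table formats like Iceberg track partitions in metadata instead, but the pruning idea is the same: a `WHERE dt = '2024-03-01'` filter only reads that slice of the lake.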

✅ Mandatory Skills :

  • Strong Python (including AWS SDKs), SQL, Spark
  • Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
  • Experience with DBT and ETL/ELT pipeline development
  • Workflow orchestration using Airflow / Step Functions
  • Knowledge of data lake formats (Parquet, ORC, Iceberg)
  • Exposure to DevOps practices (Terraform, CI/CD)
  • Strong understanding of data governance and security best practices
  • Minimum 4–7 years in Data Engineering (3+ years on AWS)

➕ Good to Have :

  • Understanding of Data Mesh architecture
  • Experience with platforms like Data.World
  • Exposure to Hadoop / HDFS ecosystems

🤝 What We’re Looking For :

  • Strong problem-solving and analytical skills
  • Ability to work in a collaborative, cross-functional environment
  • Good communication and stakeholder management skills
  • Self-driven and adaptable to fast-paced environments

📝 Interview Process :

  1. Online Assessment
  2. Technical Interview
  3. Fitment Round
  4. Client Round
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
5 - 12 yrs
₹10L - ₹32L / yr
skill iconPython
Azure OpenAI
databricks
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
+6 more

Job Title : Azure Data Scientist (AI/ML)

Experience : 5 to 10 Years

Location : Bengaluru

Work Mode : Hybrid (4 Days WFO, Tue to Fri – Non-Negotiable)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a highly skilled Azure Data Scientist with strong expertise in AI/ML, Python, and cloud-based data platforms. The role involves building scalable ML solutions, working on GenAI & RAG use cases, and delivering business impact through data-driven insights.


🔥 Mandatory Skills :

Python, Azure Machine Learning, Databricks, AI/ML model development (5+ yrs), Statistics & Probability, EDA & Data Modeling, Machine Learning algorithms, GenAI/RAG experience


✅ Key Responsibilities :

  • Design, develop, and deploy AI/ML models to solve complex business problems
  • Perform Exploratory Data Analysis (EDA) for data cleaning, discovery, and insights
  • Build and optimize ML pipelines using Azure Machine Learning & Databricks
  • Work on GenAI applications, RAG implementations, and advanced analytics solutions
  • Collaborate with data engineers, business stakeholders, and domain experts
  • Translate complex data into actionable business insights
  • Manage model lifecycle (development, validation, deployment, monitoring)
  • Communicate model outputs and insights to technical & non-technical stakeholders
  • Drive innovation and contribute to AI/ML best practices and strategy
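The RAG work mentioned above centers on a retrieval step: embed the query, rank documents by similarity, and pass the top hits to the model as context. A minimal sketch with toy hand-written vectors (a real system would use an embedding model and a vector store, e.g. on Databricks):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" keyed by document title; purely illustrative.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "ml pipeline guide": [0.1, 0.8, 0.3],
    "holiday calendar": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Retrieval step of RAG: rank documents by similarity to the query."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([0.2, 0.9, 0.2]))  # → ['ml pipeline guide']
```

The retrieved passages are then injected into the LLM prompt, grounding the generation in the organization's own data.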

🧠 Required Skills (Must Have) :

  • Strong experience in Python (ML/AI development)
  • Hands-on with Azure Machine Learning & Databricks
  • Deep understanding of Mathematics, Probability, and Statistics
  • Expertise in Machine Learning & Data Science methodologies
  • Experience in EDA, data visualization, and model development
  • Exposure to GenAI, RAG, and ML application development
  • Minimum 5+ years of experience in AI/ML model development
  • Strong problem-solving and analytical skills

➕ Good to Have :

  • Experience with MLOps practices
  • Domain knowledge in Energy / Oil & Gas value chain
  • Experience in data visualization tools
  • Team collaboration or mentoring experience

🤝 What We’re Looking For :

  • Strong communication & stakeholder management skills
  • Ability to work in a cross-functional, global team environment
  • Self-driven, adaptable, and innovation-focused mindset

📝 Interview Process :

  1. Geektrust Assessment (Assemble)
  2. Technical Interview
  3. Fitment Round
  4. Client Round
A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage


Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
8 - 10 yrs
₹14L - ₹20L / yr
skill iconPython
skill iconReact.js
skill iconAmazon Web Services (AWS)
Architecture
skill iconLeadership
+1 more

Responsibilities:

  • Lead architecture, technical decisions, and ensure code quality, scalability, and performance
  • Develop backend systems using Python & SQL; build APIs and optimize databases
  • Work with frontend (React/Angular) and API-driven architectures
  • Integrate AI/ML models and support analytics/LLM-based solutions
  • Manage cloud deployments (Azure/AWS) and implement CI/CD practices
  • Ensure system reliability, monitoring, and production readiness
  • Mentor team members, conduct reviews, and collaborate with cross-functional teams
Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
6 - 8 yrs
₹12L - ₹22L / yr
skill iconJava
skill iconPython
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Agile/Scrum
+4 more

Key Responsibilities:

  • Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
  • Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
  • Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
  • Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
  • Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
  • Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
  • Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
  • Contribute to the development of technical documentation and training materials.

Required Skillset:

  • Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
  • Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
  • Experience in designing and developing scalable, high-performance, and secure software solutions.
  • Strong understanding of software development methodologies, including Agile and Waterfall.
  • Excellent communication, interpersonal, and problem-solving skills.
  • Ability to work effectively in a fast-paced, dynamic environment.
  • Bachelor's or Master's degree in Computer Science or a related field.
  • Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 8 yrs
Upto ₹30L / yr (Varies)
skill iconNodeJS (Node.js)
skill iconPython
Dialog Flow
rasa
yellow.ai
+1 more

Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.

You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.


Key Responsibilities

  • Design, develop, test, debug, and maintain chatbot and virtual agent applications
  • Collaborate with business stakeholders to define and translate requirements into technical solutions
  • Analyze large volumes of conversational data to improve chatbot accuracy and performance
  • Develop automation workflows for data handling and refinement
  • Train and optimize chatbots using historical chat logs and user-generated content
  • Ensure solutions align with enterprise architecture and best practices
  • Document solutions, workflows, and technical designs clearly

Required Skills

  • Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
  • Experience with one or more AI/NLP platforms such as Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI, Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, or Converse.ai
  • Strong programming knowledge in Python, JavaScript, or Node.js
  • Experience training chatbots using historical conversations or large-scale text datasets
  • Practical knowledge of formal syntax and semantics, corpus analysis, and dialogue management
  • Strong written communication skills
  • Strong problem-solving ability and willingness to learn emerging technologies

Nice-to-Have Skills

  • Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
  • Experience building voice apps for Amazon Alexa or Google Home
  • Experience with Test-Driven Development (TDD) and Agile methodologies
  • Ability to design and implement end-to-end pipelines for AI-based conversational applications
  • Experience in text mining, hypothesis generation, and historical data analysis
  • Strong knowledge of regular expressions for data cleaning and preprocessing
  • Understanding of API integrations, SSO, and token-based authentication
  • Experience writing unit test cases as per project standards
  • Knowledge of HTTP, REST APIs, sockets, and web services
  • Ability to perform keyword and topic extraction from chat logs
  • Experience training and tuning topic modeling algorithms such as LDA and NMF
  • Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
  • Experience with NLP frameworks such as NLTK and spaCy
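
Several of the bullets above (regex-based cleaning, keyword extraction from chat logs) can be illustrated in a few lines of standard-library Python; the stopword list and log lines are invented for the example:

```python
import re
from collections import Counter

STOPWORDS = {"i", "my", "the", "a", "to", "is", "please", "can", "you"}

def clean_utterance(text: str) -> str:
    """Strip URLs, ticket IDs, and punctuation from a raw chat log line."""
    text = re.sub(r"https?://\S+", " ", text)       # drop URLs
    text = re.sub(r"\b[A-Z]{2,}-\d+\b", " ", text)  # drop IDs like INC-1234
    text = re.sub(r"[^a-zA-Z\s]", " ", text)        # keep letters only
    return re.sub(r"\s+", " ", text).strip().lower()

def top_keywords(logs: list[str], n: int = 3) -> list[str]:
    """Most frequent non-stopword tokens across cleaned chat logs."""
    counts = Counter(
        tok for line in logs for tok in clean_utterance(line).split()
        if tok not in STOPWORDS
    )
    return [tok for tok, _ in counts.most_common(n)]

logs = [
    "Please reset my password, ticket INC-1234",
    "I can't reset the password http://help.example.com",
    "password reset is not working",
]
```

Real pipelines would layer spaCy or NLTK tokenization and TF-IDF weighting on top, but the clean-then-count shape is the same.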


Read more
Bengaluru (Bangalore)
5 - 6 yrs
₹13L - ₹15L / yr
skill iconPython
skill iconDjango
skill iconFlask

We are looking for an experienced Python Developer with 5–6 years of hands-on experience in designing, developing, and maintaining scalable backend applications and APIs. The ideal candidate should have strong expertise in Python, backend frameworks, databases, and cloud/deployment practices. The candidate should be capable of working in a fast-paced environment and collaborating with cross-functional teams to deliver high-quality software solutions.

Key Responsibilities

  • Design, develop, test, and maintain robust and scalable Python-based applications.
  • Build and integrate RESTful APIs and backend services.
  • Work on server-side logic, database integration, and performance optimization.
  • Collaborate with frontend developers, QA teams, DevOps, and product teams for end-to-end delivery.
  • Write reusable, testable, and efficient code following best practices.
  • Debug, troubleshoot, and resolve production issues.
  • Participate in code reviews, technical design discussions, and architecture planning.
  • Optimize applications for maximum speed, scalability, and reliability.
  • Implement security and data protection measures.
  • Work with CI/CD pipelines and deployment processes.

Required Skills

  • Strong experience in Python development with 5–6 years of relevant experience.
  • Hands-on experience with Python frameworks such as Django, Flask, or FastAPI.
  • Strong understanding of OOPs, Data Structures, and Algorithms.
  • Experience in building and consuming REST APIs.
  • Good knowledge of SQL and relational databases such as MySQL and PostgreSQL.
  • Experience with NoSQL databases such as MongoDB and Redis (preferred).
  • Knowledge of ORM frameworks such as SQLAlchemy or Django ORM.
  • Familiarity with Git/GitHub/GitLab version control.
  • Understanding of unit testing, debugging, and code quality practices.
  • Experience in working with Linux/Unix environments.
  • Knowledge of Docker, containerization, and deployment concepts.
  • Exposure to cloud platforms like AWS / Azure / GCP is preferred.

Preferred / Good to Have Skills

  • Experience in microservices architecture.
  • Knowledge of Celery, asynchronous processing, and message queues such as RabbitMQ and Kafka.
  • Familiarity with CI/CD pipelines.
  • Experience in writing clean architecture and scalable backend systems.
  • Exposure to DevOps practices is a plus.
  • Experience in Agile/Scrum methodology. 
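
Celery-style asynchronous processing, mentioned in the preferred skills, reduces to producers enqueueing tasks and workers consuming them. A broker-free sketch using only the standard library (real Celery would swap the in-process queue for RabbitMQ or Redis):

```python
import queue
import threading

task_queue = queue.Queue()
results = {}

def worker() -> None:
    """Consume tasks until a None sentinel arrives (a Celery worker analogue)."""
    while True:
        item = task_queue.get()
        if item is None:
            break
        task_id, func, args = item
        results[task_id] = func(*args)
        task_queue.task_done()

def delay(task_id: str, func, *args) -> None:
    """Enqueue a task; the name mirrors Celery's .delay() for illustration only."""
    task_queue.put((task_id, func, args))

t = threading.Thread(target=worker)
t.start()
delay("t1", sum, [1, 2, 3])
delay("t2", max, [5, 9, 2])
task_queue.put(None)  # sentinel: stop the worker
t.join()
```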


Read more
Qiro Finance

at Qiro Finance

2 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
5+ yrs
Upto ₹45L / yr (Varies)
skill iconPython
TypeScript
skill iconAmazon Web Services (AWS)
Artificial Intelligence (AI)
Team Management

About the Role

Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.

We are looking for a Tech Lead — Credit & Blockchain Infrastructure to lead the architecture and execution of our core systems — spanning underwriting engines, credit lifecycle workflows, and blockchain-integrated capital markets infrastructure.

This is not a feature delivery role. This is a system ownership role.

You will be hands-on while leading a growing engineering team in a fast-moving, in-office environment.

What You’ll Own

  • Define and evolve the long-term technical vision for Qiro’s programmable credit infrastructure — architecting cohesive systems that unify underwriting engines, credit lifecycle workflows, and tokenized capital markets.
  • Own the end-to-end architecture of scalable backend platforms (Python and/or TypeScript), establishing clear boundaries between risk logic, platform APIs, and smart contract integrations while ensuring scalability, auditability, and extensibility.
  • Build and standardize configurable underwriting and credit lifecycle systems — from onboarding and drawdown orchestration to repayment waterfalls and early closures — ensuring deterministic, traceable financial state transitions at institutional scale.
  • Set integration and infrastructure standards across API contracts, data models, validation layers, and event-driven architectures, enabling reliable synchronization between off-chain services and on-chain contracts.
  • Architect secure and resilient blockchain integrations, including wallet interactions, capital flow coordination, and observable on-chain/off-chain state reconciliation.
  • Lead high-impact, cross-product initiatives from RFC and system design through production launch — validating architectural decisions, aligning stakeholders, and delivering measurable improvements in reliability, performance, and developer velocity.
  • Elevate reliability and operational excellence by defining SLOs, strengthening CI/CD and observability practices, reducing latency, and minimizing systemic risk in financial workflows.
  • Build and scale the engineering organization — mentoring senior engineers, shaping hiring standards, driving architecture reviews, and fostering a culture of ownership, craftsmanship, and first-principles thinking.
  • Partner closely with Product, Design, Security, and Operations to translate complex lending and capital market mechanics into simple, robust platform primitives.
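
A repayment waterfall of the kind described above is, at heart, a deterministic allocation of an incoming payment across ordered buckets. A simplified sketch; bucket names and priority order are illustrative, not Qiro's actual logic:

```python
def apply_waterfall(payment: float, dues: dict[str, float]) -> dict[str, float]:
    """Allocate a payment across dues in priority order: fees, then interest,
    then principal. The allocation is deterministic and fully traceable."""
    allocation = {}
    remaining = payment
    for bucket in ("fees", "interest", "principal"):
        applied = min(remaining, dues.get(bucket, 0.0))
        allocation[bucket] = applied
        remaining -= applied
    allocation["excess"] = remaining  # overpayment, e.g. routed to prepayment
    return allocation

alloc = apply_waterfall(1200.0, {"fees": 50.0, "interest": 300.0, "principal": 1000.0})
```

Because each state transition is a pure function of payment and dues, the same inputs always produce the same allocation, which is what makes such workflows auditable.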

Who You Are

  • 6-8+ years of engineering experience, with 3+ years in technical leadership roles.
  • Strong backend architecture experience in Python and/or TypeScript.
  • Comfortable designing distributed systems and financial workflows.
  • Experience building fintech, lending, underwriting, trading, or blockchain-integrated systems.
  • Strong understanding of API design, state management, and data modeling.
  • Able to navigate ambiguity and build 0→1 infrastructure.
  • Hands-on builder who leads by writing production-grade code.

We Value

  • Experience with underwriting engines or policy-driven decision systems.
  • Exposure to smart contracts and blockchain integrations.
  • Familiarity with PostgreSQL and event-driven architectures.
  • Experience in early-stage or high-growth startups.
  • Strong product thinking and ability to translate complex financial logic into scalable systems.

Why Join Qiro

  • Lead the architecture of a programmable credit infrastructure platform.
  • Join the founding technical leadership team.
  • High autonomy and ownership — your decisions shape the company.
  • In-office collaboration in Bangalore for speed and iteration.
  • Competitive compensation and meaningful equity.

Our Culture

We operate with:

  • First-principles thinking
  • Technical craftsmanship
  • High ownership
  • Fast execution with long-term architectural discipline


Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹8L - ₹18L / yr
skill iconPython
skill iconDjango
skill iconFlask

Company Description

Euphoric Thought Technologies Pvt. Ltd. provides modern technology solutions with a focus on performance and results for organizations. We are committed to creating a better future by acting differently, thinking carefully, and always being enthusiastic. Euphoric offers services in Product Development, Cloud Management and Consulting, DevOps, ML/AI, ServiceNow Integration, Blockchain Development, and Data Analytics.

Role Description

This is a full-time on-site role in Bengaluru for a Senior Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end web development, software development, full-stack development, and using Cascading Style Sheets (CSS) to build effective and efficient applications.

Qualifications

  • Back-End Web Development and Full-Stack Development skills
  • Front-End Development and Software Development skills
  • Proficiency in Cascading Style Sheets (CSS)
  • Experience with Python, Django, and Flask frameworks
  • Strong problem-solving and analytical skills
  • Ability to work collaboratively in a team environment
  • Bachelor's or Master's degree in Computer Science or relevant field
  • Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
  • Front-End Skills: Proficiency in ReactJS
  • Data Engineering: Useful experience blending data engineering with core software engineering.
  • Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
  • CI/CD Tools: Familiarity with GitHub Actions is a plus.
  • Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
  • Code Optimization: Proficient in profiling and optimizing Python code.
Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
6 - 8 yrs
₹12L - ₹20L / yr
skill iconDjango
skill iconPython
skill iconFlask

We're looking for a Senior Python Developer with experience to join our team. You will lead and contribute to Python-based software projects as a Senior Python Developer, ensuring code quality and efficiency.


Senior Python Developer Job Responsibilities

  • Design and Development: Senior Python Developers are in charge of creating Python-based applications and systems. Their code is the foundation of all software projects, ensuring functionality and performance.
  • Leadership & Mentorship: Senior Developers frequently take on leadership positions, guiding and mentoring junior developers. They provide technical expertise and ensure the team adheres to best practices.
  • Collaboration: Working collaboratively with cross-functional groups is an important element of this role. They help define project requirements and specifications, ensuring that software meets business objectives.
  • Code Quality Assurance: A Senior Python Developer's role includes code reviews. They ensure code quality, suggest areas for development, and ensure best practices are followed.
  • Troubleshooting and Debugging: Senior Python Developers are in charge of finding and resolving code bugs. Their strong problem-solving abilities are put to use as they troubleshoot and debug software to ensure its flawless operation.
  • Staying Informed: It is critical to stay current with the newest trends and standards in Python development. Senior Developers ought to be knowledgeable about new technologies and tools.
  • Performance Optimisation: They are in charge of optimization and testing to ensure that software is functional and operates smoothly.
  • Documentation: Proper code and technical specifications documentation is required to ensure that the development process is open and readily available to the team.

Senior Python Developer Requirements and Skills

  • Educational Background: A bachelor's or master's degree in computer science or a related field is a good starting point for this position.
  • Experience: 6+ years of proven experience as a Python Developer is required. A strong project portfolio demonstrates expertise and capability.
  • Python Proficiency: A strong understanding of Python and its associated libraries is required. It is critical to have a thorough understanding of Python's capabilities and limitations.
  • Web Frameworks: Knowledge of web frameworks such as Django or Flask is advantageous because it speeds up web application development.
  • Database Knowledge: Understanding of relational and non-relational databases is frequently required. Understanding how to work with databases is essential for developing reliable software.
  • Front-End Skills: Being familiar with front-end technologies such as HTML, CSS, and JavaScript can be a valuable addition to the skill set of a Senior Python Developer, particularly when working on web applications.
  • Version Control: Working knowledge of source control systems such as Git is frequently required, as it aids in code integrity and collaboration.
  • Problem-Solving Skills: Strong skills in problem-solving and attention to detail are required. Senior Python developers must be able to effectively identify and resolve issues.
  • Communication and Collaboration: Effective communication and collaboration with team members and stakeholders are critical to the success of projects.
  • Leadership Experience: Prior leadership or mentorship experience is a significant asset. The ability to mentor and lead junior developers is frequently required.


Read more
REConnect Energy

at REConnect Energy

4 candid answers
2 recruiters
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
1 - 3 yrs
Upto ₹12L / yr (Varies)
skill iconPython
Time series
Computer Vision
Artificial Intelligence (AI)
Generative AI
+1 more

Role Overview: We are seeking a Research Engineer to develop AI-driven solutions at the intersection of energy, climate science, and artificial intelligence. You will play a pivotal role in product development, leveraging data science and machine learning to solve engineering challenges.


Responsibilities:

  • Develop and deploy data-driven solutions for energy and power market applications.
  • Analyze large, diverse data sets: meteorological, local sensors (wind, solar, consumption), images (satellite images, etc.).
  • Solve core engineering problems with AI/ML techniques, domain expertise, and advanced modeling.
  • Design and build scalable pipelines for training, testing, and deploying models.
  • Communicate complex ideas effectively to both technical and non-technical stakeholders.
  • Collaborate across teams to drive product development and ensure impactful outcomes.


Expectations:

  • Ability to move from broad vision to technical solutions.
  • Ownership mindset: accountability for effort and outcomes.
  • Integrity, transparency, and teamwork.

Requirements:

  • Strong analytical skills with a data-driven and scientific approach.
  • Proficiency in Python, capable of handling both structured and unstructured data.
  • Prior experience (industry or academia) building machine learning or deep learning based AI solutions.
  • Prior experience in LLM & GenAI is a plus.
Read more
SDX Partners
Pratik Ahir
Posted by Pratik Ahir
Bengaluru (Bangalore)
2 - 4 yrs
₹9L - ₹12L / yr
skill iconPython
skill iconDjango
skill iconPostgreSQL
skill iconAmazon Web Services (AWS)

Job Summary

We are seeking a skilled Python Platform Developer to join our engineering team. You will be responsible for building, optimizing, and maintaining the core backend infrastructure and internal platforms that power our applications. The ideal candidate will build scalable API architectures, enhance data security, and implement automation to improve developer productivity. 


Key Responsibilities

  • Platform Development: Design, develop, and maintain robust and scalable backend services, API frameworks, and shared libraries using Python.
  • Infrastructure Automation: Build and maintain tools for infrastructure automation using technologies such as AWS (Lambda, EC2, S3), Docker, and Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
  • Performance Optimization: Improve system performance, deliver low-latency API interactions, and optimize data storage solutions.
  • CI/CD Optimization: Develop, maintain, and improve automated testing and continuous integration/continuous deployment (CI/CD) pipelines.
  • Collaboration: Work closely with product engineers, DevOps, and frontend developers to define requirements and deliver reliable infrastructure solutions.
  • Security & Monitoring: Implement strong security protocols and monitoring solutions (e.g., Prometheus, Datadog) to ensure platform reliability. 


Required Skills and Qualifications

  • Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Experience: 3–5+ years of experience in software development with a heavy focus on Python.
  • Core Python: Deep understanding of Python 3.x, object-oriented programming (OOP), and asynchronous programming (e.g., asyncio).
  • Frameworks: Hands-on experience with web frameworks like FastAPI, Django, or Flask.
  • Cloud Platforms: Experience with AWS or GCP services.
  • Tools: Proficient with Git, Docker, and CI/CD pipelines.
  • Database: Strong knowledge of SQL and database management. 
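
The asyncio requirement above is easiest to see in a small example: coroutines run concurrently via gather() instead of sequentially. A minimal sketch, where the fetch function is a stand-in for real non-blocking I/O such as an HTTP call:

```python
import asyncio

async def fetch_resource(name: str, delay: float) -> str:
    """Stand-in for a non-blocking I/O call (e.g. an HTTP request)."""
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> list[str]:
    # gather() runs both coroutines concurrently; total wall time is
    # roughly that of the slowest call, not the sum of both
    return await asyncio.gather(
        fetch_resource("users", 0.05),
        fetch_resource("orders", 0.02),
    )

results = asyncio.run(main())
```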


Preferred Skills

  • Experience with serverless architectures.
  • Knowledge of Kubernetes.
  • Experience in a DevOps or Site Reliability Engineering (SRE) role.


Read more
Peliqan

at Peliqan

3 recruiters
Bharath Kumar
Posted by Bharath Kumar
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹20L / yr
skill iconPython
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+3 more

DevOps Engineer

Location: Bangalore office


About Peliqan

Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.


The Role

Own and evolve the infrastructure powering Peliqan's multi-tenant data platform. You'll manage Kubernetes clusters, cloud resources, CI/CD pipelines, and monitoring — keeping everything reliable, secure, and scalable. You'll be the go-to person for infrastructure support across the engineering team.

Responsibilities


  • Manage and optimise Kubernetes clusters running production workloads — data pipelines, APIs, and customer-facing services.
  • Maintain Docker-based local development environments for the engineering team.
  • Administer cloud infrastructure on AWS and Google Cloud (compute, storage, networking, managed databases).
  • Build and maintain CI/CD pipelines for automated testing, building, and deploying across staging and production.
  • Set up and manage monitoring, alerting, and logging for platform health and incident response.
  • Manage release processes — deployments, rollbacks, and release strategies.
  • Maintain infrastructure-as-code using Helm charts.
  • Support security hardening and compliance efforts (SOC 2, GDPR).



Requirements

  • 3+ years in a DevOps, SRE, or Infrastructure Engineering role.
  • Strong hands-on experience with Kubernetes and Helm charts.
  • Deep familiarity with Docker for containerisation and local dev workflows.
  • Production experience with AWS and/or Google Cloud.
  • Proficiency in Python and Bash scripting for automation and tooling.
  • Solid grasp of DevOps principles: infrastructure-as-code, GitOps, observability, continuous delivery.
  • Experience with CI/CD platforms (GitHub Actions, GitLab CI, or similar).



Nice to Have

  • Experience supporting multi-tenant SaaS platforms or data infrastructure at scale.
  • Knowledge of PostgreSQL, MySQL, or cloud-managed database administration.
  • Exposure to security compliance frameworks (SOC 2, ISO 27001, GDPR).


Read more
Peliqan

at Peliqan

3 recruiters
Bharath Kumar
Posted by Bharath Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
jest
pytest
Playwright
RestAPI
skill iconPython
+1 more

QA Tester

Location: Bangalore office


About Peliqan

Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.


The Role

Own quality end-to-end across Peliqan's platform — from the frontend UI and data apps to backend pipelines, connectors, and APIs. You'll design test strategies, build automated test suites, and work closely with developers to ship reliable software. Comfort using AI tools to accelerate test creation is essential.

Responsibilities


  • Design and maintain test plans covering manual and automated testing for features, regressions, and releases.
  • Write unit tests for the frontend (Jest) and backend (pytest).
  • Build and extend end-to-end test suites using Playwright across critical user flows — connector setup, data transformations, data app publishing, API creation.
  • Use AI tools (Copilot, Claude, etc.) to rapidly generate and refine test cases and test data.
  • Triage, document, and track defects with clear reproduction steps.
  • Integrate automated tests into CI/CD pipelines.
  • Conduct exploratory testing across data pipelines, the query engine, and user-facing interfaces.
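
The pytest work above follows a standard shape: small functions under test plus assert-based test functions that pytest discovers by the test_ prefix. A self-contained illustration; the helper under test is invented for the example:

```python
def normalise_connector_name(raw: str) -> str:
    """Hypothetical helper: canonicalise a data-source connector name."""
    return raw.strip().lower().replace(" ", "_")

# pytest collects functions prefixed with `test_` and reports each failing assert
def test_strips_whitespace():
    assert normalise_connector_name("  Google Sheets ") == "google_sheets"

def test_idempotent():
    once = normalise_connector_name("My SQL")
    assert normalise_connector_name(once) == once
```

Running `pytest` in a directory containing this file would execute both tests; the same assert style scales to fixtures and parametrized cases.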


Requirements

  • 2+ years in a QA or SDET role, ideally in SaaS or data products.
  • Hands-on experience with Jest, pytest, and Playwright.
  • Comfortable with Python for scripting and test automation.
  • Demonstrated use of AI tools to craft and scale test suites.
  • Familiarity with CI/CD pipelines and REST API testing.
  • Strong analytical mindset and clear written communication.


Nice to Have

  • Experience testing ETL/ELT pipelines or database-heavy applications.
  • Familiarity with SQL and data validation testing.
  • Exposure to Docker and containerised environments.


Read more
TalentXO
tabbasum shaikh
Posted by tabbasum shaikh
Bengaluru (Bangalore)
1 - 3 yrs
₹19L - ₹23L / yr
skill iconPython
skill iconReact.js
iim
iit
BITS

Role & Responsibilities

As a Junior Full Stack Engineer, you will work on product features end-to-end - from the data model to the pixel on the screen.

You will work alongside senior engineers, learn to bridge engineering and product, and contribute to translating complex business workflows into interfaces that enterprise teams actually use. You will own smaller features and grow into full vertical ownership over time.

  • Work on product features end-to-end - complex configuration UIs, workflow builders, and operational dashboards for enterprise users
  • Contribute to customer-facing surfaces - SDKs, embeddable flows, and APIs that directly shape the client experience
  • Learn and apply frontend architecture best practices - component structure, state management, performance optimisation, and accessibility
  • Build backend APIs and data models under senior guidance - developing end-to-end ownership over time
  • Debug issues across the full stack - learn to trace problems from symptom to root cause
  • Collaborate with design, product, and customer success - hear feedback from real users and let it shape what you build
  • Partner with the GenAI engineer to surface AI capabilities through clean, well-designed product interfaces
  • Participate in code and design reviews to learn and grow

Ideal Candidate

  • Strong Full Stack Engineer profiles
  • Mandatory (Experience 1) – Must have 1+ years of hands-on full stack engineering experience (avoid frontend-heavy only profiles, backend heavy will work)
  • Mandatory (Experience 2) – Must have strong backend engineering experience using Python, including designing and owning APIs, services, and data models in production environments
  • Mandatory (Experience 3) – Must have strong frontend development experience using React (or equivalent), including component architecture, state management, and building production-grade user interfaces
  • Mandatory (Experience 4) – Must have end-to-end ownership experience, building and shipping features across the full stack (UI + API + database) without clear handoff boundaries
  • Mandatory (Experience 5) – Must have strong web fundamentals, including understanding of browser rendering, performance optimization, and accessibility best practices
  • Mandatory (Experience 6) – Must be able to demonstrate solving complex problems on both frontend and backend, clearly articulating trade-offs, decisions, and outcomes
  • Mandatory (Experience 7) – Must have solid experience with databases (SQL/NoSQL), including schema design, query optimization, and handling performance bottlenecks
  • Mandatory (Company) - Top Product Companies, (preferred early-stage startups with Seed to Series C/D with fast-paced shipping culture)
  • Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be skipped if from top-notch product companies
  • Preferred (Skill 1) – Strong proficiency in TypeScript, with experience in type-safe architecture and scalable frontend/backend systems
  • Preferred (Skill 2) – Experience with testing frameworks such as Cypress / Playwright and strong unit testing practices
  • Preferred (Skill 3) – Experience working with cloud platforms (AWS preferred)


Read more
Bengaluru (Bangalore)
1 - 3 yrs
₹20L - ₹27L / yr
skill iconPython
skill iconReact.js

Strong Full Stack Engineer profiles

Mandatory (Experience 1) – Must have 1+ years of hands-on full stack engineering experience (avoid frontend-heavy only profiles, backend heavy will work)

Mandatory (Experience 2) – Must have strong backend engineering experience using Python, including designing and owning APIs, services, and data models in production environments

Mandatory (Experience 3) – Must have strong frontend development experience using React (or equivalent), including component architecture, state management, and building production-grade user interfaces

Mandatory (Experience 4) – Must have end-to-end ownership experience, building and shipping features across the full stack (UI + API + database) without clear handoff boundaries

Mandatory (Experience 5) – Must have strong web fundamentals, including understanding of browser rendering, performance optimization, and accessibility best practices

Mandatory (Experience 6) – Must be able to demonstrate solving complex problems on both frontend and backend, clearly articulating trade-offs, decisions, and outcomes

Mandatory (Experience 7) – Must have solid experience with databases (SQL/NoSQL), including schema design, query optimization, and handling performance bottlenecks

Mandatory (Company) - Top Product Companies, (preferred early-stage startups with Seed to Series C/D with fast-paced shipping culture)

Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be skipped if from top-notch product companies

Auxo AI
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
3 - 8 yrs
₹10L - ₹40L / yr
AWS
PySpark
Databricks
Python

Role Summary:

AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-8 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.


Responsibilities:

• Design, develop, and maintain data pipelines using Databricks (PySpark / Spark SQL)

• Build and manage data pipelines across Bronze, Silver, and Gold layers using Delta Lake

• Implement ETL/ELT workflows for batch and near real-time processing

• Work with Databricks Workflows for orchestration and job scheduling

• Leverage Unity Catalog for data governance, access control, and metadata management

• Optimize Spark jobs, cluster configurations, and cost efficiency

• Collaborate with business and analytics teams to translate requirements into scalable data models

• Integrate data from multiple sources (APIs, databases, cloud storage)

• Ensure data quality, validation, and observability across pipelines

• Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
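
The Bronze/Silver/Gold responsibilities above describe the medallion pattern. In a real Databricks pipeline each layer would be a Delta table written with PySpark; this stdlib-only toy (with hypothetical record fields) just shows the shape of the flow:

```python
# Bronze: raw ingested records, kept as-received (field names hypothetical).
bronze = [
    {"order_id": "1", "amount": "10.5", "country": "in"},
    {"order_id": "2", "amount": "bad", "country": "IN"},   # fails validation
    {"order_id": "1", "amount": "10.5", "country": "in"},  # duplicate
]

def to_silver(rows):
    """Cleanse: drop unparseable rows, normalise values, deduplicate by key."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # skip/quarantine records that fail data-quality checks
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "country": r["country"].upper()})
    return out

def to_gold(rows):
    """Aggregate to a business-ready metric: revenue per country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'IN': 10.5}
```

In Databricks the same steps map onto Delta Lake MERGE/constraint features rather than hand-rolled loops, but the layering logic is the same.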


Qualifications:

• Bachelor’s degree in Computer Science, Engineering, or a related field.

• Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines

• Hands-on experience with Databricks platform and ecosystem

• Strong proficiency in Python (PySpark) and SQL

• Experience working with Delta Lake (ACID transactions, time travel, schema evolution)

• Good understanding of data warehousing concepts and dimensional modeling

• Familiarity with Unity Catalog (data governance, RBAC, lineage basics)

• Understanding of Spark performance tuning and optimization techniques

• Experience with cloud platforms (AWS / Azure / GCP)

• Working knowledge of Git and CI/CD practices

• Familiarity with other workflow orchestration tools is a plus.

Redtring
Posted by Keshav Senthil
Bengaluru (Bangalore)
2 - 4 yrs
₹30L - ₹70L / yr
TypeScript
React Native
Vite
Tailwind CSS
TanStack
+12 more

Product Engineer (Full Stack) – AI & Healthcare

About Us

We are building a next-generation computational model of human biology to predict, prevent, and cure diseases at their source. By combining real-world biological data, diagnostics, and advanced modeling, we aim to make biology computable.

Our platform integrates at-home diagnostics, biomarker tracking, and lifestyle data (wearables, sleep, nutrition) to create a continuous, connected view of human health.

Role Overview

We are looking for a Product Engineer who combines strong engineering skills with product intuition and design sensibility. You will build high-quality, performant software that serves as the interface between individuals and their biology.

This is a high-ownership role where you will work closely with design and product teams to ship impactful features end-to-end.

Key Responsibilities

  • Build and ship scalable, reliable full-stack systems
  • Develop intuitive, high-quality user interfaces for complex biological data
  • Collaborate closely with design to deliver polished user experiences
  • Own features end-to-end: from ideation to production
  • Optimize performance, responsiveness, and usability
  • Work in a fast-paced, AI-first environment

Required Skills & Qualifications

  • 2–4 years of experience building and shipping production systems
  • Strong proficiency in full-stack development
  • Experience with modern frontend frameworks (React preferred)
  • Solid understanding of backend systems and APIs
  • Strong product sense and attention to UI/UX detail
  • Ability to work independently and make decisions quickly

Preferred Qualifications (Good to Have)

  • Experience in startup environments
  • Strong side projects or open-source contributions
  • Experience with product analytics tools
  • Exposure to AI/ML or agent-based systems

Tech Stack

  • Frontend: TypeScript, React 18, Vite, Tailwind, TanStack, Radix
  • Mobile: React Native, Nativewind
  • Backend: Python (FastAPI, Pydantic)
  • Database: Supabase, PostgreSQL
  • AI/Tools: Braintrust, LiteLLM
  • Analytics: Mixpanel, Amplitude, FullStory

Compensation & Benefits

  • CTC: ₹30L – ₹35L (Base) + ESOPs
  • Location: HSR Layout, Bengaluru (In-person)
  • Work Schedule: 6 days/week (Mon–Sat)
  • Joining: Immediate

Perks:

  • Sponsored healthy meals (lunch & dinner)
  • Gym subscription
  • Learning & development budget
  • Freedom to use tools/tech of your choice

How We Work

  • AI-first mindset
  • Founder-mode ownership
  • Speed over process
  • High trust, high autonomy
  • Focus on learning velocity

Why Join Us?

  • Work on one of the hardest problems in human history
  • Build products at the intersection of AI, healthcare, and design
  • Direct impact on improving human health outcomes
  • High-growth, high-learning environment 
Talent Pro
Bengaluru (Bangalore)
1 - 3 yrs
₹20L - ₹27L / yr
Python
Artificial Intelligence (AI)

Strong Junior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 1+ years of experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)

Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)

Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.

Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)

Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems

Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)

Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)

Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem

Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)

Mandatory (Education) – Tier-1 institutes (IITs, BITS, IIITs); may be waived for candidates from top-tier product companies

Mandatory (Exclusion) – Avoid candidates who are prompt engineers only, candidates from a pure Data Science / ML theory background without backend coding experience, and frontend-heavy engineers.
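
Experience 4 above asks for a basic grasp of RAG retrieval. Real systems use learned embedding models and a vector database; in this toy sketch, bag-of-words vectors stand in for embeddings and a plain list stands in for the vector DB, but the retrieval mechanic (embed, score by cosine similarity, take top-k) is the same:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # hypothetical document chunks
    "claims are processed within five business days",
    "premium payments can be made monthly or annually",
    "network hospitals offer cashless claims settlement",
]
index = [(d, embed(d)) for d in docs]  # stand-in for a vector store

def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(retrieve("how are claims processed"))
```

A candidate who can explain why chunking granularity and the similarity metric affect what `retrieve` returns clears the bar this requirement sets.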

Talent Pro
Bengaluru (Bangalore)
4 - 7 yrs
₹30L - ₹45L / yr
Python

Strong Senior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production

Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems

Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects

Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines

Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases

Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)

Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation

Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects

Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations

Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking

Mandatory (Company) – Product companies / startups, preferably Series A to Series D

Mandatory (Note) – Candidate's overall experience should not exceed 7 years

Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks

Mandatory (Exclusion) – Reject profiles that are prompt engineers only, data scientists, or frontend engineers without strong backend and system-building experience
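
Experience 10 above mentions fallback handling for AI system reliability. A minimal sketch of that pattern: try a primary model, retry transient failures with backoff, then fall back to a secondary model. `flaky_primary` and the fallback lambda are hypothetical stand-ins for real provider SDK calls (OpenAI, Gemini, etc.):

```python
import time

class TransientError(Exception):
    """Stand-in for a provider timeout / rate-limit error."""

def with_fallback(prompt, call_primary, call_fallback, retries=2, backoff=0.01):
    # Retry the primary model on transient errors, with exponential backoff.
    for attempt in range(retries):
        try:
            return call_primary(prompt), "primary"
        except TransientError:
            time.sleep(backoff * (2 ** attempt))
    # Primary exhausted: route to the cheaper/secondary model instead.
    return call_fallback(prompt), "fallback"

def flaky_primary(prompt):
    raise TransientError("upstream timeout")

answer, route = with_fallback(
    "summarise this claim",
    call_primary=flaky_primary,
    call_fallback=lambda p: "fallback summary",
    backoff=0.001,
)
print(route)  # fallback
```

Production versions add per-attempt latency budgets, cost tracking, and eval hooks, which is the "basic eval thinking" the requirement alludes to.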

Snabbit
Posted by Shweta Vyas
Bengaluru (Bangalore)
3 - 7 yrs
₹25L - ₹45L / yr
Mobile App Development
Backend
Python
Java
Go (Golang)
+2 more

We are looking for a strong Mobile Engineer with backend exposure who can own end-to-end feature development. This is a mobile-heavy fullstack role where you will primarily build scalable mobile applications while contributing to backend services and APIs.

Key Responsibilities

  • Design and develop high-quality mobile applications (primary focus)
  • Build and integrate RESTful APIs and backend services
  • Collaborate with product and design teams to ship features end-to-end
  • Ensure performance, scalability, and reliability of mobile apps
  • Write clean, maintainable, and testable code
  • Participate in architecture discussions and technical decision-making

Must Have Skills

  • Strong experience in mobile development (Flutter / React Native / iOS / Android)
  • Solid understanding of backend development (Node.js / Java / Python / Go)
  • Experience with API design, microservices, and databases
  • Good understanding of system design and app performance optimization
  • Familiarity with cloud platforms (AWS/GCP)

Good to Have

  • Experience working in startup environments
  • Exposure to CI/CD pipelines and DevOps practices
  • Understanding of real-time systems or scalable architectures
Wissen Technology
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
4.5 - 8 yrs
Best in industry
Python
SQL
FastAPI
REST API
Artificial Intelligence (AI)
+4 more

1️⃣ Generative AI System Design

  • Architect and implement end-to-end LLM-powered applications
  • Build scalable RAG pipelines (chunking, embeddings, hybrid search, reranking)
  • Design and implement agent-based workflows (tool calling, multi-step reasoning, orchestration)
  • Integrate LLM APIs such as OpenAI and Anthropic, along with open-source models
  • Implement structured output validation, grounding strategies, and hallucination mitigation
  • Optimize inference cost, latency, and token efficiency
  • Design evaluation pipelines for performance, accuracy, and safety

2️⃣ Backend & Microservices Engineering

  • Design scalable backend systems using Python
  • Build REST and async APIs using FastAPI / Django
  • Architect and implement microservices with clear service boundaries
  • Implement service-to-service communication (REST, gRPC, event-driven messaging)
  • Work with message brokers (Kafka / RabbitMQ)
  • Optimize database performance (PostgreSQL, MongoDB)
  • Implement caching strategies (Redis)
  • Build observability: logging, monitoring, distributed tracing

3️⃣ Cloud-Native Architecture & DevOps

  • Design and deploy containerized services using Docker
  • Orchestrate services using Kubernetes
  • Implement CI/CD pipelines
  • Ensure system scalability, resilience, and fault tolerance
  • Apply distributed systems principles:
    • Circuit breakers
    • API gateway patterns
    • Load balancing
    • Horizontal scaling
    • Saga patterns
  • Zero-downtime deployments
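
Of the distributed-systems patterns listed above, the circuit breaker is the easiest to sketch in a few lines. The thresholds and timings here are illustrative, not prescriptive: after enough consecutive failures the breaker "opens" and fails fast instead of hammering a struggling downstream service, then allows a trial call after a cooldown:

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=0.05):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

In a microservices stack this logic usually lives in a library or service mesh rather than application code, but being able to explain the closed/open/half-open states is a common interview checkpoint.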


Bengaluru (Bangalore)
4 - 8 yrs
₹30L - ₹45L / yr
Python
Go (Golang)

Strong Senior Full Stack Engineer profiles

Mandatory (Experience 1) – Must have 4+ years of hands-on full stack engineering experience (frontend-only profiles should be avoided; backend-heavy profiles are acceptable)

Mandatory (Experience 2) – Must have strong backend engineering experience using Python, including designing and owning APIs, services, and data models in production environments

Mandatory (Experience 3) – Must have strong frontend development experience using React (or equivalent), including component architecture, state management, and building production-grade user interfaces

Mandatory (Experience 4) – Must have end-to-end ownership experience, building and shipping features across the full stack (UI + API + database) without clear handoff boundaries

Mandatory (Experience 5) – Must have strong web fundamentals, including understanding of browser rendering, performance optimization, and accessibility best practices

Mandatory (Experience 6) – Must be able to demonstrate solving complex problems on both frontend and backend, clearly articulating trade-offs, decisions, and outcomes

Mandatory (Experience 7) – Must have solid experience with databases (SQL/NoSQL), including schema design, query optimization, and handling performance bottlenecks

Mandatory (Company) – Must have worked in product-based companies (early-stage startups from Seed to Series C/D with a fast-paced shipping culture preferred)

Mandatory (Education) - Strong CS fundamentals required (CS degree or equivalent). Candidates from Tier-1 institutes (IITs, BITS, IIITs) are preferred but not mandatory

Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Chennai
4 - 15 yrs
₹30L - ₹40L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Generative AI
Python
Scikit-Learn
+4 more

About the Role

We are looking for a highly skilled Data Scientist with strong expertise in Machine Learning, MLOps, and Generative AI. The ideal candidate will have hands-on experience in building scalable ML models, deploying them in production, and working with modern AI frameworks, including GenAI technologies.

Key Responsibilities

 

• Design, develop, and deploy machine learning models for real-world business problems

• Work on end-to-end ML lifecycle: data preprocessing, model building, evaluation, deployment, and monitoring

• Implement and manage MLOps pipelines for scalable and reproducible workflows

• Utilize tools like MLflow for experiment tracking, model versioning, and lifecycle management

• Develop and integrate Generative AI (GenAI) solutions such as LLM-based applications

• Collaborate with cross-functional teams (engineering, product, business) to translate requirements into AI solutions

• Optimize model performance and ensure production stability

• Stay updated with the latest advancements in AI/ML and GenAI ecosystems
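
The end-to-end lifecycle bullet above (preprocess, train, evaluate, monitor) in miniature. Real work would use Scikit-learn or PyTorch as listed below; this stdlib-only sketch with synthetic data just shows the shape of the loop:

```python
import random

# Synthetic dataset: y = 3x + 1 plus noise (a stand-in for real business data).
random.seed(0)
data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.1)) for x in [i / 100 for i in range(100)]]
random.shuffle(data)
train, test = data[:80], data[80:]  # preprocessing step: holdout split

# Model building: fit a line by gradient descent on mean squared error.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb

# Evaluation: mean squared error on the held-out test split.
mse = sum((w * x + b - y) ** 2 for x, y in test) / len(test)
print(round(w, 2), round(b, 2), round(mse, 4))
```

In production the same loop is wrapped in an MLOps pipeline: the split, the fitted parameters, and the evaluation metric are all tracked (e.g. with MLflow) so the experiment is reproducible.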

 

 

 

Required Skills & Qualifications

• 4+ years of experience in Data Science / Machine Learning

• Strong programming skills in Python

• Hands-on experience with ML modeling techniques (supervised, unsupervised, NLP, etc.)

• Solid understanding of MLOps practices and tools

• Experience with MLflow or similar model lifecycle tools

• Practical experience in Generative AI (GenAI), including working with LLMs

• Experience with libraries/frameworks like Scikit-learn, TensorFlow, PyTorch

• Strong understanding of data structures, algorithms, and statistics

• Experience with cloud platforms (AWS/GCP/Azure) is a plus


Good to Have

• Experience with LLM fine-tuning, prompt engineering, or RAG pipelines

• Exposure to Docker, Kubernetes, and CI/CD pipelines

• Knowledge of data engineering workflows


