
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 10 yrs
Up to ₹38L / yr (varies)
Python
Generative AI
Microservices
RESTful APIs
MongoDB
+3 more

We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.


The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.


Key Responsibilities:


Backend Development

  • Design and maintain high-performance backend services using Python and FastAPI
  • Implement advanced FastAPI features such as dependency injection, middleware, and async programming
  • Write comprehensive unit tests using pytest
  • Design and maintain Pydantic schemas
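As a small illustration of the pytest style mentioned above: tests are plain `test_*` functions with bare asserts. The function under test here is invented for the example, and the tests are also called directly so the snippet runs even without pytest installed:

```python
def slugify(title: str) -> str:
    """Tiny example of a unit under test: normalize a title into a URL slug."""
    return "-".join(title.lower().split())

# pytest discovers these by the test_ prefix; a plain assert is the whole API.
def test_slugify_basic():
    assert slugify("Senior Backend Developer") == "senior-backend-developer"

def test_slugify_collapses_whitespace():
    assert slugify("  FastAPI   services ") == "fastapi-services"

if __name__ == "__main__":
    # Quick check without the pytest runner.
    test_slugify_basic()
    test_slugify_collapses_whitespace()
    print("ok")
```

With pytest installed, `pytest -q` would collect and run the same functions automatically.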

High-Concurrency Systems

  • Implement asynchronous code for high-volume request processing
  • Apply concurrency patterns and atomic operations to ensure efficient system performance
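One common pattern behind the bullets above is bounding in-flight async work with a semaphore, so a burst of requests cannot exhaust connections or memory. A minimal stdlib sketch, with the handler standing in for real I/O such as a database or HTTP call:

```python
import asyncio

async def handle_request(req_id: int, limiter: asyncio.Semaphore) -> str:
    # Bound concurrent work so a burst of requests cannot exhaust resources.
    async with limiter:
        await asyncio.sleep(0.01)  # stand-in for I/O (DB query, downstream HTTP call)
        return f"done-{req_id}"

async def process_batch(n: int, max_concurrency: int = 10) -> list:
    limiter = asyncio.Semaphore(max_concurrency)
    # gather schedules all coroutines; the semaphore caps how many run at once.
    return await asyncio.gather(*(handle_request(i, limiter) for i in range(n)))

results = asyncio.run(process_batch(100))
print(len(results))  # 100
```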

Data & Storage

  • Optimize MongoDB operations
  • Implement Redis caching strategies (TTL, performance tuning, caching patterns)
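The caching bullet describes the cache-aside pattern with per-key TTLs. Redis implements this natively (`SETEX`/`EXPIRE`); the sketch below mimics the semantics with an in-process dict so the pattern is visible, with the database lookup stubbed out. All names here are illustrative:

```python
import time

class TTLCache:
    """Cache-aside with per-key TTL, mirroring Redis SETEX semantics in-process."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # lazy expiry, like Redis lazy deletion
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl_seconds: float):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def get_user(cache: TTLCache, user_id: int) -> dict:
    cached = cache.get(("user", user_id))
    if cached is not None:
        return cached                                  # cache hit
    user = {"id": user_id, "name": f"user{user_id}"}   # stand-in for a MongoDB lookup
    cache.set(("user", user_id), user, ttl_seconds=60)
    return user
```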

Distributed Systems

  • Implement rate limiting, retry logic, failover mechanisms, and region routing
  • Build microservices and event-driven architectures
  • Work with EventHub, Blob Storage, and Databricks
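Retry logic of the kind listed above is commonly implemented as capped exponential backoff with jitter. A minimal stdlib sketch; the retried exception type and the parameters are assumptions for illustration, not a prescribed interface:

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=0.05, max_delay=1.0):
    """Call fn(), retrying transient failures with capped exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retry budget exhausted: surface the failure
            # Full jitter: sleep a random duration up to the capped exponential delay.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

# Usage: a stub that fails twice before succeeding.
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(with_retries(flaky_call))  # ok
```

Jitter spreads retries from many clients over time, which avoids the synchronized "thundering herd" a fixed backoff would cause.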

AI/ML Integration

  • Integrate OpenAI API, Gemini API, and Claude API
  • Manage LLM integrations using LiteLLM
  • Optimize AI service usage within the Azure ecosystem

Security

  • Implement JWT authentication
  • Manage API keys and encryption protocols
  • Implement PII masking and data security mechanisms
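The JWT bullet can be illustrated with a minimal HS256 sign/verify sketch. This is a from-scratch, stdlib-only illustration of the token structure; production code should use a vetted library such as PyJWT (plus expiry claims), and the payload and secret shown are hypothetical:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = _b64url(json.dumps(header).encode()) + "." + _b64url(json.dumps(payload).encode())
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(signature)

def verify_jwt(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # malformed token
    signing_input = (header_b64 + "." + payload_b64).encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # Constant-time comparison guards against timing attacks on the signature.
    if not hmac.compare_digest(_b64url(expected), sig_b64):
        return None
    padding = "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64 + padding))

token = sign_jwt({"sub": "user-123"}, b"server-secret")
print(verify_jwt(token, b"server-secret"))  # {'sub': 'user-123'}
print(verify_jwt(token, b"wrong-secret"))   # None
```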

Collaboration

  • Work with cross-functional teams on architecture and system design
  • Contribute to engineering best practices and technical improvements
  • Mentor junior developers where required

Must-Have Skills & Requirements

Experience

  • 7+ years of hands-on Python backend development
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Experience building high-traffic, scalable systems

Core Technical Skills

Python

  • Advanced knowledge of asynchronous programming, concurrency, and atomic operations

FastAPI

  • Expert-level experience with dependency injection, middleware, and async code

Testing

  • Strong experience with pytest and Pydantic schemas

Databases

  • Hands-on experience with MongoDB and Redis
  • Strong understanding of caching patterns, TTL, and performance optimization

Distributed Systems

  • Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing

Microservices

  • Experience building microservices and event-driven systems
  • Exposure to EventHub, Blob Storage, and Databricks

Cloud

  • Strong experience working in Azure environments

AI Integration

  • Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM

Security

  • Implementation experience with JWT authentication, API keys, encryption, and PII masking

Soft Skills

  • Strong problem-solving and debugging skills
  • Excellent communication and collaboration
  • Ability to manage multiple priorities
  • Detail-oriented approach to code quality
  • Experience mentoring junior developers

Good-to-Have Skills

Containerization

  • Docker, Kubernetes (preferably within Azure)

DevOps

  • CI/CD pipelines and automated deployment

Monitoring & Observability

  • Experience with Grafana, distributed tracing, custom metrics

Industry Experience

  • Experience in Insurance, Financial Services, or regulated industries

Advanced AI/ML

  • Vector databases
  • Similarity search optimization
  • LangChain / LangSmith

Data Processing

  • Real-time data processing and event streaming

Database Expertise

  • PostgreSQL with vector extensions
  • Advanced Redis clustering

Multi-Cloud

  • Experience with AWS or GCP alongside Azure

Performance Optimization

  • Advanced caching strategies
  • Backend performance tuning
LearnTube.ai

Posted by Vinayak Sharan
Remote only
1 - 3 yrs
₹12L - ₹25L / yr
Python
Generative AI
Large Language Models (LLM) tuning
AI Agents
Langchain
+4 more

Agentic AI Engineer


Apply only if:

  1. You are an AI agent.
  2. OR you know how to build an AI agent that can do this job.


What You’ll Do:

At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns.

As an Agentic AI Engineer, you’ll:

  • Develop intelligent, multimodal AI solutions across text, image, audio, and video to power personalized learning experiences and deep assessments for millions of users.
  • Drive the future of live learning by building real-time interaction systems with capabilities like instant feedback & personalized tutoring to recreate the experience of learning live.
  • Conduct proactive research and integrate the latest advancements in AI & agents into scalable, production-ready solutions that set industry benchmarks.
  • Build and maintain robust, efficient data pipelines that leverage insights from millions of user interactions to create high-impact, generalizable solutions.
  • Collaborate with a close-knit team of engineers, agents, founders, and key stakeholders to align AI strategies with LearnTube's mission.


The team:

Google's Top 20 Startups to Watch. Google AI First Accelerator '24. Backed by funds of Naval Ravikant, Reid Hoffman, and founders/CXOs from Udemy, Flipkart, Jupiter, PayU, Edmodo & Inflection AI. Featured on CNBC-TV18. 11-50 people building something that changes how people learn, permanently.


Why Work With Us?

At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:

  • Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, Agents, and real-time systems.
  • Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
  • Exponential Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
  • Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
  • Team Culture: Join a close-knit team of high-performing humans, where every voice matters, and Monday morning meetings are something to look forward to.
  • Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.


Tap Invest
Posted by Anusree TP
Bengaluru (Bangalore)
1 - 2 yrs
₹3L - ₹5L / yr
SQL
Python
pandas
Data Analytics
Business Analysis

As an Analyst at Tap Invest, you’ll turn data into decisions. You’ll work with teams across Product, Ops, Marketing, and Sales to uncover insights, solve real business problems, and drive strategy.

This role is for someone who is comfortable working with data independently and can support business teams with reliable analysis and reporting.

Key Responsibilities

● Gather, organize, and clean data from various sources, including databases, spreadsheets, and external sources, to ensure accuracy and completeness.

● Write SQL queries to pull, validate, and clean data from production databases.

● Build and maintain dashboards, and generate KPI reports. Track performance against targets and identify areas for optimization.

● Analyze user funnels and investment patterns to surface actionable insights.

● Prepare and present clear, concise reports and visualizations to communicate findings and recommendations to stakeholders across teams.

● Document data definitions, metrics, and assumptions clearly for consistency and reuse.
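The pull/validate/aggregate workflow in the responsibilities above can be sketched end to end. The example uses an in-memory SQLite database as a stand-in for the production database; the table, columns, and data are invented for illustration:

```python
import sqlite3

# In-memory stand-in for a production database (table and column names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE investments (user_id INTEGER, amount REAL, created_at TEXT);
    INSERT INTO investments VALUES
        (1, 5000.0,  '2024-01-10'),
        (1, 12000.0, '2024-02-02'),
        (2, NULL,    '2024-02-05'),  -- bad row: missing amount
        (3, 8000.0,  '2024-02-07');
""")

# Validation: count rows that would break downstream KPI reports.
bad_rows = conn.execute(
    "SELECT COUNT(*) FROM investments WHERE amount IS NULL OR amount <= 0"
).fetchone()[0]

# KPI pull: total invested per user, excluding invalid rows.
totals = conn.execute("""
    SELECT user_id, SUM(amount) AS total_invested
    FROM investments
    WHERE amount IS NOT NULL AND amount > 0
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()

print(bad_rows)  # 1
print(totals)    # [(1, 17000.0), (3, 8000.0)]
```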

What We’re Looking For

● 1 to 2 years of experience in Data Analytics, Business Analytics, or a similar role.

● Comfortable writing SQL queries and validating results.

● Solid with Excel / Google Sheets (pivot tables, lookups, charts).

● Genuine curiosity about how businesses use data to make decisions.

● Experience with scripts for data automation.

● Prior projects involving production datasets.

Nice to Have

● Familiarity with pandas or another data-manipulation library for advanced automation.

● Interest in capital markets, bonds, fixed income, or FinTech.

● Exposure to AI tools.

Bengaluru (Bangalore)
5 - 8 yrs
₹38L - ₹45L / yr
Node.js
Python
Field Engineer
Forward Deployed
Docker
+1 more

Role & Responsibilities

Own the Client’s Outcome:

  • Embed with enterprise customers – on-site and remotely – to understand their supply chain operations, data estate, and what success actually looks like for their business.
  • Scope and design technical solutions for messy, real-world logistics problems – with a clear line to measurable impact: cost per delivery, SLA performance, empty kilometres.
  • Own the full deployment lifecycle: architecture through go-live through steady-state. You’re accountable for the outcome, not just the code.

Build and Ship:

  • Design, build, and maintain backend services in Node.js or Python that power routing, planning, and execution at enterprise scale.
  • Build and own the integrations connecting Locus to client ERPs, TMS, WMS, and OMS platforms – these integrations are often the riskiest part of a deployment.
  • Write production code that runs under real load. If it isn’t in production, it hasn’t shipped.

Be the Technical Interface with the Client:

  • Run architecture reviews, lead integration workshops, and represent Locus in executive steering meetings. You need to be credible at every level of the client organisation.
  • Bring field learnings back into the product and platform teams. Some of Locus’s best features started as a client workaround.
  • Push back when a client request would compromise platform integrity – and propose a better alternative.

Show Up On-Site:

  • Travel to client sites – domestic and international, up to ~30% of the time – for kick-offs, integration sprints, go-lives, and post-live reviews.
  • Build the kind of relationship where the client’s ops lead calls you directly when something goes wrong at 2am, not a support ticket.
  • Be comfortable wherever the work is: a warehouse floor, a logistics control tower, a C-suite boardroom.

Make the Next Deployment Easier:

  • Document architecture decisions, integration patterns, and deployment playbooks – every engagement should make the next one faster.
  • Work closely with Product, Customer Success, and Platform Engineering. Share what you’re seeing in the field; don’t wait to be asked.
  • Mentor junior FDEs and raise the technical bar across the team.

Ideal Candidate

  • Strong Forward Deployed / Field Engineer
  • Mandatory (Experience 1): Must have 5+ years of backend engineering experience with hands-on coding in Node.js or Python, building production-grade systems
  • Mandatory (Experience 2): Must have minimum 2+ years in client-facing / deployment-heavy roles, where they worked directly with enterprise customers
  • Mandatory (Experience 3): Must have experience shipping and owning production systems end-to-end: From design → build → deployment → post-production support
  • Mandatory (Tech Skills 1 - Backend & Systems): Strong in: Node.js or Python (must-have), Building scalable backend services
  • Mandatory (Tech Skills 2 - Integrations): Must have experience with: Enterprise integrations (APIs, third-party systems), Systems like ERP / TMS / WMS / OMS
  • Mandatory (Tech Skills 3 - Data & Messaging): Hands-on with: Relational + NoSQL databases, Event streaming / queues (Kafka / RabbitMQ or similar)
  • Mandatory (Tech Skills 4 - Cloud & Deployment): Experience with: Cloud platforms (AWS / GCP / Azure), Docker + Kubernetes (or containerised deployments)
  • Mandatory (Company): Top Product companies / Startups / SaaS / platform companies


Coimbatore
6 - 15 yrs
₹20L - ₹35L / yr
Python
React.js
Amazon Web Services (AWS)

Role: Senior Software Developer (Full Stack) - Python 

Location: Coimbatore 

YOE: 6+ years 

Mandatory Skills: Python, AWS 

Good to have: React, SQL, React Native, knowledge of Flutter/Android Native

Benefits: Learn more about our perks below

Compensation: Competitive compensation as per industry standards. 


About the Role: 

We aspire to build high-quality, innovative & robust software. If you are a hands-on platform builder with significant experience in developing scalable data platforms, look no further. Click on Apply and we will reach out to you soon. 


Responsibilities: 

● Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions. 

● Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code. 

● Prepares and installs solutions by determining and designing system specifications, standards, and programming. 

● Improves operations by conducting systems analysis; recommending changes in policies and procedures. 

● Obtains and licenses software by obtaining required information from vendors; recommending purchases; testing and approving products. 

● Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment 

● Participate in educational opportunities & read professional publications;

● Protects operations by keeping information confidential. 

● Provides information by collecting, analyzing, and summarizing development and service issues. 

● Accomplishes engineering and organization mission by completing related results as needed. 

● Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.


Requirements: 

● Proven work experience as a Full Stack Engineer or Senior Software Developer

● Strong experience designing and developing scalable and interactive applications

● Hands-on expertise in React or similar UI technologies for frontend development and Python or other modern backend languages 

● Experience in mobile app development (e.g., React Native, Flutter, or Native Android/iOS) 

● Deep understanding of relational databases (e.g., PostgreSQL/MySQL) with strong proficiency in SQL 

● Experience with ORM frameworks (e.g., TypeORM, SQLAlchemy or similar)

● Familiarity with NoSQL databases (e.g., MongoDB) and caching systems like Redis is a plus 

● Test-driven development and automated testing experience is a plus

● Proficiency with modern software engineering tools, Git-based workflows, and CI/CD pipelines

● Strong ownership mindset with ability to lead teams, mentor developers, and drive end-to-end delivery 

● Excellent communication and collaboration skills with cross-functional stakeholders

● Working knowledge of AWS or other cloud platforms is an added advantage 



ThoughtsCrest Software

Posted by Ariba Khan
Bengaluru (Bangalore)
6 - 15 yrs
Best in industry
Agentic AI
Generative AI
Large Language Models (LLM)
Python
Machine Learning (ML)

About the Role

We are looking for a hands-on AI Agentic Lead to drive Agentic AI implementations on the Lyzr platform and lead in-house Agentic AI infusion into our products. This role is ideal for someone who combines strong technical depth with product thinking and has experience taking AI solutions from concept to deployment.


What We Are Looking For

  • 6 to 15 years of overall experience
  • At least 2 years of Agentic AI experience with product deployment exposure
  • Strong experience in designing, building, and deploying AI agents/workflows for real business use cases
  • Ability to lead architecture, development, deployment, and optimization of agentic solutions
  • Strong problem-solving, ownership, and stakeholder-handling skills
  • Willing to work from office in Bengaluru (WFO only)


Key Responsibilities

  • Lead end-to-end delivery of Agentic AI solutions on the Lyzr platform
  • Drive Agentic AI adoption across in-house products
  • Design multi-agent workflows, orchestration patterns, tool usage, memory, guardrails, and evaluation approaches
  • Work closely with product, business, and engineering teams to identify high-impact AI use cases
  • Build scalable, production-ready solutions with focus on reliability, performance, and business value
  • Mentor the team and shape best practices for Agentic AI delivery
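A multi-agent workflow with tool usage, memory, and guardrails reduces, at its core, to a plan/act/observe loop. The sketch below uses a scripted planner as a stand-in for an LLM; the Lyzr platform's actual APIs are not shown, and every name here is illustrative:

```python
# A toy "plan -> call tool -> observe" loop with a scripted planner standing in for an LLM.
TOOLS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def scripted_planner(goal, history):
    """Stand-in for an LLM: decides the next tool call from the goal and prior observations."""
    if not history:
        return {"tool": "add", "args": (2, 3)}                   # step 1: 2 + 3
    if len(history) == 1:
        return {"tool": "multiply", "args": (history[-1], 10)}   # step 2: result * 10
    return {"final": history[-1]}                                # done: report the answer

def run_agent(goal, planner, max_steps=5):
    history = []
    for _ in range(max_steps):  # guardrail: hard cap on agent steps
        decision = planner(goal, history)
        if "final" in decision:
            return decision["final"]
        observation = TOOLS[decision["tool"]](*decision["args"])
        history.append(observation)  # "memory": observations fed back to the planner
    raise RuntimeError("agent exceeded step budget")

print(run_agent("compute (2+3)*10", scripted_planner))  # 50
```

In a real deployment the planner call is an LLM with tool schemas, the history is persisted memory, and evaluation harnesses score the trajectory; the control flow stays the same.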


Preferred Skills

  • Hands-on experience with LLMs, AI agents, RAG, orchestration frameworks, prompt design, tool calling, and evaluation
  • Exposure to production deployments, monitoring, debugging, and optimization of AI systems
  • Experience integrating AI into enterprise products/platforms
  • ML background is a plus, but not mandatory


Why Join Us

  • Opportunity to work on live Agentic AI implementations
  • Play a key role in building next-generation AI capabilities for both client solutions and internal products
  • High ownership, strong growth opportunity, and direct impact on product direction
Wissen Technology

Posted by Shrutika SaileshKumar
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Snowflake
Data Transformation Tool (DBT)
SQL
Snowflake schema
Python
+1 more

JD:

We are looking for a strong Data Engineer with hands-on experience building pipelines using Snowflake and DBT.

Key Responsibilities:

  • Develop, maintain, and optimize data pipelines using DBT and SQL on Snowflake DB.
  • Collaborate with data analysts, QA and business teams to build scalable data models.
  • Implement data transformations, testing, and documentation within the DBT framework.
  • Work on Snowflake for data warehousing tasks, including data ingestion, query optimization, and performance tuning.
  • Use Python (preferred) for automation, scripting, and additional data processing as needed.

Required Skills:

  • 6+ years of experience in building data engineering pipelines.
  • Strong hands-on expertise with DBT and advanced SQL.
  • Experience working with modern columnar/MPP data warehouses, preferably Snowflake.
  • Knowledge of Python for data manipulation and workflow automation (preferred).
  • Good understanding of data modeling concepts, ETL/ELT processes, and best practices.


TestMu AI (Formerly LambdaTest)
Posted by Aliya Akhtar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 3 yrs
₹12L - ₹20L / yr
Systems design
Python

Backend Developer


📍 Noida | 🕐 Full-Time | 🧭 Experience: 2–3 Years


The Mission

We aren't building traditional backend systems — we're powering the infrastructure behind Agentic Intelligence. TestMu AI is building the world's first AI-native platform where backend systems don't just serve requests, they enable autonomous decision-making, execution, and scale.

The name "TestMu" comes from our community conference. Our users and team aren't an audience — they're the heartbeat of what we build. We believe AI augments human potential. It doesn't replace it.


You'll be building the core backend systems that power AI-driven workflows — ensuring high performance, scalability, and reliability at every layer.


The Pillars of Impact

🚀 1. Core Backend & System Architecture (50%)


  • Build and scale high-performance backend services and APIs
  • Design efficient database schemas, query optimization, and data flows
  • Write clean, logical, production-grade code (Python, Golang, or similar)
  • Own system performance — latency, throughput, and reliability

⚙️ 2. Backend for AI Systems (30%)


  • Develop backend systems supporting AI agents and autonomous workflows
  • Handle large-scale data processing, async tasks, and event-driven systems
  • Integrate backend infrastructure seamlessly with AI/ML components

🧠 3. Scalability & Distributed Systems (20%)


  • Contribute to microservices architecture and service decomposition
  • Build fault-tolerant, highly available distributed systems
  • Optimize systems for high concurrency and real-time execution

Your Engineering Stack

Tech/Tools:

  • Python / Golang – Building core backend services and logic-heavy systems
  • AWS / GCP – Deploying and scaling distributed backend infrastructure
  • Kafka / RabbitMQ – Handling asynchronous processing and event-driven workflows


The Bar

  • Core Backend Experience – 2–3 years of hands-on experience building APIs, backend systems, and scalable services
  • Problem-Solving Ability – Strong fundamentals in data structures, algorithms, and logical thinking
  • System Design Understanding – Ability to design scalable backend systems with clear architectural thinking
  • Ownership & Execution – Experience owning backend features end-to-end in a fast-paced environment


The Interview Loop · Screening for the Top 1%

  • Round I · Recruiter Screen – Evaluation of backend experience, problem-solving approach, and project depth
  • Round II · Hiring Manager – Deep dive into backend projects, APIs, databases, and system design thinking
  • Round III · Domain Lead – Live coding + backend problem-solving + discussion on scalability and distributed systems
  • Final · Leadership – Culture fit, ownership mindset, and ability to operate in a high-velocity startup environment



Your Growth Trajectory

TestMu AI is a high-growth environment where we promote based on complexity solved, not years of tenure. As a Backend Developer, you have a massive runway to scale from an Individual Contributor (IC) into a core Engineering Leadership role, working alongside pioneers in agentic intelligence.


Perks of the Future

  • Health & Wellness: 100% premium covered insurance for you + family (spouse, kids, parents) with annual check-ups.
  • Fuel for Innovation: Fresh, daily gourmet lunch and dinner served at our Noida HQ.
  • Seamless Transit: Safe, GPS-enabled cab facilities for eligible shifts (home-office-home).
  • POD Culture: Dedicated quarterly budgets for team-building, offsites, and collaborative celebrations.



reodev
Posted by Richa Kukar
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹38L / yr
Go Programming (Golang)
Python

Backend Engineer at Reo.Dev : Job Description

[Disclaimer: This is a longish read. However, we felt you might be interested to read in detail, about what you could be doing for the next 5ish years 😊]

Job Function: Backend Engineer

Experience: 2 – 4 years [number of years of experience is not a filter]

Salary and Incentives: Open for discussion

Location: Bangalore, India [Hybrid work - Remote + Office]

👋 Meet Reo.Dev

  • Reo.Dev was founded in January 2023. So we are quite young 😊
  • Reo was started by Achintya, Gaurav and Piyush – All of them have successfully built companies before [more on the Founding team below]
  • We are building a Revenue Operating System for the Developer Focussed Companies (Think of us like a 6sense.com for Dev Focussed Companies).
  • What we are building is quite innovative. Currently, no other company offers the capabilities Reo.Dev is building
  • We recently closed our Seed round with top early stage investors (not disclosed yet)
ChicMic Studios
Posted by Akanksha Mittal
Mohali
3 - 6 yrs
₹6L - ₹12L / yr
Python
Django
Flask
FastAPI
PostgreSQL
+1 more

Experience Required: 3-5 Years

No. of vacancies: 2

Job Type: Full Time

Vacancy Role: WFO

Job Category: Development

Job Description

ChicMic Studios is hiring a highly skilled and experienced Sr. Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Roles & Responsibilities

  • Develop, maintain, and scale web applications using Django & DRF.
  • Implement and manage payment gateway integrations and ensure secure transaction handling.
  • Design and optimize SQL queries, transaction management, and data integrity.
  • Work with Redis and Celery for caching, task queues, and background job processing.
  • Develop and deploy applications on AWS services (EC2, S3, RDS, Lambda, CloudFormation).
  • Implement strong security practices including CSRF token generation, SQL injection prevention, JWT authentication, and other security mechanisms.
  • Build and maintain microservices architectures with scalability and modularity in mind.
  • Develop WebSocket-based solutions including real-time chat rooms and notifications.
  • Ensure robust application testing with unit testing and test automation frameworks.
  • Collaborate with cross-functional teams to analyze requirements and deliver effective solutions.
  • Monitor, debug, and optimize application performance, scalability, and reliability.
  • Stay updated with emerging technologies, frameworks, and industry best practices.
  • Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
  • Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
  • Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
  • Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.

Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 3-5 years of professional experience as a Python Developer.
  • Proficient in Python with a strong understanding of its ecosystem.
  • Extensive experience with Django and Flask frameworks. 
  • Hands-on experience with AWS services for application deployment and management.
  • Strong knowledge of Django Rest Framework (DRF) for building APIs.
  • Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
  • Experience with transformer architectures for NLP and advanced AI solutions.
  • Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
  • Familiarity with MLOps practices for managing the machine learning lifecycle.
  • Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
  • Excellent problem-solving skills and the ability to work independently and as part of a team.
  • Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.


Vivanet

Posted by Ashish Uikey
Pune, Hyderabad
8 - 12 yrs
Best in industry
Playwright
Selenium
cypress
jest
JavaScript
+12 more

Responsibilities:

  • Design and execute automated test scripts using Playwright.
  • Build and maintain test frameworks for web applications.
  • Perform regression, functional, and performance testing.
  • Collaborate with developers to resolve defects.
  • Ensure CI/CD integration of automated tests.


Skills:

  • Strong experience with Playwright automation.
  • Knowledge of testing frameworks (PyTest, Jest, Mocha).
  • Proficiency in JavaScript/TypeScript or Python.
  • Familiarity with CI/CD tools (Jenkins, GitHub Actions).
  • Understanding of Agile/Scrum methodologies.

Vivanet

Posted by Ashish Uikey
Pune, Hyderabad
8 - 12 yrs
Best in industry
Agentic AI
LangChain
CrewAI
Large Language Models (LLM)
Llama
+9 more

Responsibilities:

  • Develop and deploy agentic AI solutions for automation and decision-making.
  • Build intelligent agents capable of reasoning, planning, and interacting with environments.
  • Integrate AI models with enterprise applications.
  • Collaborate with data scientists to fine-tune models.
  • Ensure ethical AI practices and compliance.


Skills:

  • Strong knowledge of agentic AI frameworks.
  • Proficiency in Python and ML libraries (TensorFlow, PyTorch).
  • Experience with LLMs and reinforcement learning.
  • Familiarity with APIs, cloud AI services, and orchestration tools.

TIFIN FINTECH INDIA

Posted by Vrishali Mishra
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹30L / yr
Python
Go Programming (Golang)
Generative AI
Prompt engineering

About TIFIN

TIFIN is an AI-first fintech platform transforming wealth management through data science, machine learning, and intelligent automation. With strong global backing and a rapidly growing India hub, TIFIN is building scalable, next-gen financial products used by global institutions.


Role Overview

We are looking for a Senior Software Engineer with strong backend and AI integration experience to build scalable, high-performance systems. This role involves working closely with product, data science, and AI teams to develop intelligent platforms leveraging modern technologies and LLMs.


Key Responsibilities

  • Design, develop, and scale backend systems and APIs using Golang and Python
  • Build and integrate AI-driven features, including prompt-based workflows (Claude or similar LLMs)
  • Work with MongoDB and Elasticsearch for high-performance data handling and search capabilities
  • Optimize system performance, scalability, and reliability
  • Collaborate with cross-functional teams (Product, AI/ML, Data Engineering)
  • Contribute to architecture decisions and best engineering practices
  • Write clean, maintainable, and production-grade code


Required Skills & Experience

  • 3–5 years of experience in backend engineering
  • Strong proficiency in Golang and/or Python
  • Hands-on experience with MongoDB and Elasticsearch
  • Experience working with LLMs / AI tools (Claude, OpenAI, etc.) and prompt engineering
  • Good understanding of REST APIs, microservices architecture, and distributed systems
  • Strong problem-solving and debugging skills


Good to Have

  • Experience in fintech / SaaS platforms
  • Exposure to AI/ML pipelines or data platforms
  • Knowledge of cloud platforms (AWS/GCP/Azure)
  • Familiarity with CI/CD and DevOps practices



TalentXO
Bengaluru (Bangalore)
6 - 9 yrs
₹36L - ₹45L / yr
Python
TypeScript
NodeJS (Node.js)
React.js
fullstack profile
+2 more

Role & Responsibilities

We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.

The Ideal Candidate Will Be Able To-

  • Take ownership of delivering performant, scalable, high-quality cloud-based software on both the frontend and backend.
  • Mentor team members to develop in line with product requirements.
  • Collaborate with Senior Architect for design and technology choices for product development roadmap.
  • Do code reviews.

Ideal Candidate

  • Strong Software Engineer fullstack profile using NodeJS / Python and React
  • Mandatory (Experience) - Must have 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
  • Mandatory (Core Skills 1): Must have strong experience working with TypeScript
  • Mandatory (Core Skills 2): Must have experience with message-based systems like Kafka, RabbitMQ, Redis
  • Mandatory (Core Skills 3): Databases - PostgreSQL & NoSQL databases like MongoDB
  • Mandatory (Company) - Product Companies Only
  • Mandatory (Education) - B.Tech or dual degree (B.Tech and M.Tech, or integrated M.Sc/MS) from Tier 1 Engineering Institutes. Candidates from other institutions will not be considered unless they come from top-tier product companies
  • Mandatory (Note) : This role is a hybrid role (2 days WFO)
  • Preferred (Experience): Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
  • Preferred (Mentoring): Experience in mentoring, coaching the team.
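As a rough, hypothetical illustration of the message-based systems called out above (Kafka, RabbitMQ, Redis), here is a minimal in-memory producer/consumer sketch in Python; real deployments would use broker clients, partitions, and acknowledgements rather than a local queue:

```python
import queue
import threading

# Minimal in-memory stand-in for a message broker: producers enqueue
# events, a consumer thread processes them independently. Topic names
# and payloads below are invented for illustration.
events: "queue.Queue" = queue.Queue()
processed = []

def consumer() -> None:
    while True:
        msg = events.get()
        if msg is None:  # sentinel to stop the consumer
            break
        processed.append({"topic": msg["topic"], "handled": True})
        events.task_done()

t = threading.Thread(target=consumer)
t.start()

# Producer side: fire-and-forget publishing decouples services.
events.put({"topic": "payments.completed", "order_id": 42})
events.put({"topic": "payments.completed", "order_id": 43})
events.put(None)
t.join()
print(len(processed))  # 2
```

The point of the sketch is the decoupling: the producer never waits on the consumer's processing, which is the property a real broker provides at scale.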


Read more
Alpha

at Alpha

2 candid answers
Yash Makhecha
Posted by Yash Makhecha
Remote only
1 - 3 yrs
₹3L - ₹8L / yr
skill iconPostgreSQL
skill iconRedis
skill iconReact.js
skill iconDocker
skill iconNodeJS (Node.js)
+6 more

Software Development Engineer 1 (SDE1)


Location: Remote (India preferred) | Type: Full-time | Compensation: Competitive salary + early-stage stock options



🧠 About Alpha


Modern revenue teams juggle 10+ point-solutions. Alpha unifies them into an agent-powered platform that plans, executes, and optimises GTM campaigns—so every touch happens on the right channel, at the right time, with the right context.


Alpha is building the world’s most intuitive AI stack for revenue teams — to engage, convert & scale revenue with an AI-powered GTM team.

Our mission is to make AI not just accessible, but dependable and truly useful.


We’re early, funded, and building with urgency. Join us to help define what work looks like when AI works for you.



🔧 What You’ll Do


You’ll lead the development of our AI GTM platform and underlying AI agents to power seamless multi-channel GTMs.


This is a hybrid UX-engineering role: you’ll translate high-level user journeys into interfaces that feel clear, powerful, and trustworthy.


Your responsibilities:


  • Design & implement end-to-end features across React-TS/Next.js, Node.js, Postgres, Redis, and NestJS microservices for LLM agents.
  • Build & document scalable GraphQL / REST APIs that expose our data model (Company, Person, Campaign, Sequence, Asset, ActivityRecord, InferenceSnippet).
  • Integrate third-party APIs (CRM, email, ads, CMS) and maintain data sync reliability > 98%.
  • Implement the dynamic agent flow builder with configurable steps, HITL checkpoints, and audit trails.
  • Instrument product analytics, error tracking, and CI pipelines for fast feedback and safe releases.
  • Work directly with the founder on product scoping, technical roadmap, and hiring pipeline.
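The dynamic flow builder with HITL checkpoints and audit trails described above could be sketched, very loosely, as data-driven steps with an approval hook; all step names and payloads here are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

# Sketch of a configurable agent flow: ordinary steps run automatically,
# while steps marked requires_approval pause at a human-in-the-loop
# (HITL) checkpoint before the flow continues.
@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]
    requires_approval: bool = False

@dataclass
class Flow:
    steps: list
    audit_trail: list = field(default_factory=list)

    def execute(self, ctx: dict, approve: Callable[[str], bool]) -> dict:
        for step in self.steps:
            if step.requires_approval and not approve(step.name):
                self.audit_trail.append(f"{step.name}: rejected")
                break
            ctx = step.run(ctx)
            self.audit_trail.append(f"{step.name}: done")
        return ctx

flow = Flow(steps=[
    Step("draft_email", lambda c: {**c, "draft": f"Hi {c['lead']}"}),
    Step("send_email", lambda c: {**c, "sent": True}, requires_approval=True),
])
result = flow.execute({"lead": "Asha"}, approve=lambda name: True)
print(result["sent"], flow.audit_trail)
```

Swapping the `approve` callback for a UI prompt or queue is what would turn this toy into an actual HITL checkpoint.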


✅ What We’re Looking For

  • 1–3 years experience building polished web apps (React, Vue, or similar)
  • Strong eye for design fidelity, UX decisions, and motion
  • Experience integrating frontend with backend APIs and managing state
  • Experience with visual builders, workflow editors, or schema UIs is a big plus
  • You love taking complex systems and making them feel simple


💎 What You’ll Get

  • Competitive salary + high-leverage early equity
  • Ownership of user experience at the most critical phase
  • A tight feedback loop with real users from Day 1
  • Freedom to shape UI decisions, patterns, and performance for the long haul
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Shrutika SaileshKumar
Posted by Shrutika SaileshKumar
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹28L / yr
SQL
skill iconPython
Informatica
Data Transformation Tool (DBT)

Job Description:

We are looking for a skilled Database Developer with strong hands-on experience in SQL + Informatica and programming knowledge in Java or Python. The ideal candidate will design, develop, and maintain robust ETL pipelines and database solutions while collaborating with cross-functional teams to support business data needs and analytics initiatives. 

Key Responsibilities: 

  • Design, develop, and optimize SQL queries, stored procedures, triggers, and views for high performance and scalability. 
  • Develop and maintain ETL workflows using Informatica PowerCenter (or Informatica Cloud). 
  • Integrate and automate data flows between systems using Java or Python for custom scripts and applications. 
  • Perform data analysis, validation, and troubleshooting to ensure data accuracy and consistency across systems. 
  • Work closely with business analysts, data engineers, and application teams to understand data requirements and translate them into efficient database solutions. 
  • Implement performance tuning, query optimization, and indexing strategies for large datasets. 
  • Maintain data security, compliance, and documentation of ETL and database processes. 
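As a hedged sketch of the data validation and consistency work described above (the real pipeline would live in Informatica or SQL; these column names and rules are hypothetical):

```python
# Illustrative data-validation step for an ETL pipeline: rows failing
# the checks are routed to a reject list instead of the target table,
# so bad records never silently corrupt downstream data.
def validate_rows(rows):
    accepted, rejected = [], []
    for row in rows:
        errors = []
        if not row.get("customer_id"):
            errors.append("missing customer_id")
        if row.get("amount", 0) < 0:
            errors.append("negative amount")
        (rejected if errors else accepted).append({**row, "errors": errors})
    return accepted, rejected

good, bad = validate_rows([
    {"customer_id": "C1", "amount": 120.5},
    {"customer_id": "", "amount": -3},
])
print(len(good), len(bad))  # 1 1
```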

Required Skills & Experience: 

  • Bachelor’s degree in Computer Science, Information Technology, or related field. 
  • 5–8 years of hands-on experience as a SQL Developer or ETL Developer.
  • Strong proficiency in SQL (Oracle, SQL Server, or PostgreSQL). 
  • Hands-on experience with Informatica PowerCenter / Informatica Cloud
  • Programming experience in Java or Python (for automation, data integration, or API handling). 
  • Good understanding of data warehousing concepts, ETL best practices, and performance tuning
  • Experience working with version control systems (e.g., Git) and Agile/Scrum methodologies. 

Good to Have: 

  • Exposure to cloud data platforms (AWS, Azure, or GCP). 
  • Familiarity with Unix/Linux scripting
  • Experience in data modeling and data governance frameworks.

 

Read more
INI8 Labs
Shwetha K
Posted by Shwetha K
Bengaluru (Bangalore)
4.5 - 8 yrs
₹20L - ₹38L / yr
skill iconPython
Microservices
Test Automation (QA)
API
API testing

Job Title: Test Automation Engineer

Location: Bangalore

Experience: 6+ years

Immediate joiners are preferred.

We're building systems where correctness, performance, and reliability are non-negotiable. We need an engineer who treats testing as a first-class discipline - not a checklist activity.

About the role:

  • This is not a test-case writing role. You'll own the entire approach to testing — architecture, tooling, and outcomes.
  • You'll work across backend-heavy, distributed systems where failures are nuanced and the stakes are real.
  • You'll have direct access to leadership, no layers, and genuine influence over engineering quality standards.

What we're looking for:

  • Ownership mindset. You own outcomes, not just test cases. You identify gaps without being asked.
  • Engineering depth. You design test systems, not just scripts. You think in architectures.
  • Systems intuition. You understand how distributed systems fail at scale — not just on the happy path.
  • Observability fluency. You're comfortable with logs, metrics, and tracing to debug failures in production-like environments.
  • Self-direction. You figure things out and move. You don't wait for instructions.
  • AI-augmented workflow. You use AI tools intelligently to accelerate your work — not as a substitute for thinking.

What you'll work on:

  • End-to-end test automation for backend-heavy, distributed systems
  • Building test frameworks for APIs, microservices, and event-driven architectures
  • Load testing, failure scenario simulation, and edge-case validation
  • Deep CI/CD pipeline integration — testing as a continuous engineering activity
  • Kernel, firmware, and hardware-level validation (advantageous, not required)

Tech Stack & Expectations:

  • Languages: Python / Go (or strong scripting expertise)
  • Frameworks: PyTest, custom frameworks, or similar
  • Infrastructure: Docker, Kubernetes
  • Systems: Distributed systems, APIs, async/event-driven architectures
  • Databases: PostgreSQL, Redis
  • Messaging: Kafka, NATS, RabbitMQ
  • Observability: Logging, metrics, tracing
  • Bonus: Experience with kernel-level or firmware-level testing

What you will get:

  • Full ownership over how testing is designed and implemented — your decisions stick
  • Hard, interesting problems on systems where quality genuinely matters
  • Direct access to leadership with no bureaucratic layers
  • Early-stage influence — your work defines engineering quality standards here
  • Compensation calibrated to your level of expertise
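The retry and failure-scenario testing this role describes can be sketched, under invented names, as a small pytest-style example; the flaky dependency is simulated so plain assertions can exercise the recovery path:

```python
# Pytest-style sketch: simulate a flaky dependency and assert that
# retry logic recovers from transient failures. In a real suite these
# test functions would live in a module discovered by pytest.
def call_with_retry(fn, attempts=3):
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError as exc:  # transient failure class
            last_exc = exc
    raise last_exc

def make_flaky(fail_times):
    state = {"calls": 0}
    def fn():
        state["calls"] += 1
        if state["calls"] <= fail_times:
            raise ConnectionError("transient")
        return "ok"
    return fn

def test_retry_recovers_from_transient_failures():
    assert call_with_retry(make_flaky(fail_times=2)) == "ok"

def test_retry_gives_up_after_max_attempts():
    try:
        call_with_retry(make_flaky(fail_times=5))
    except ConnectionError:
        pass  # expected: all 3 attempts exhausted
    else:
        raise AssertionError("expected ConnectionError")

test_retry_recovers_from_transient_failures()
test_retry_gives_up_after_max_attempts()
print("all checks passed")
```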

Read more
Navitas Business Consulting
Solomon Yericherla
Posted by Solomon Yericherla
Hyderabad
4 - 5 yrs
₹12L - ₹14L / yr
MS-Excel
skill iconPython

Responsibilities:

  • Build automated pipelines from TikTok Seller Center — live dashboards, no manual exports.
  • Script and configure API-based automation flows (Make, Zapier, Python) from scratch.
  • Use Claude AI daily to accelerate analysis, write code, and generate report narratives.
  • Own the Growth team's full reporting cadence — weekly, monthly, ad-hoc.
  • Support TikTok Shop daily operations: monitor KPIs, flag anomalies, coordinate data needs.
  • Present insights and strategy to C-level stakeholders — concise, visual, confident.

Skills required:

  • Excel / Sheets — Power Query, macros, dynamic dashboards
  • Scripting — Python, JS, or Apps Script (write & debug code)
  • Automation — Make / Zapier / API integrations
  • BI Tools — Looker Studio, Power BI, or Tableau
  • Claude AI — required, daily co-pilot (non-negotiable)
  • E-commerce fluency — CAC, LTV, ROAS, funnel metrics
  • Advanced English — C1/C2, written & spoken
  • Proactive ownership — no hand-holding required

Read more
Techjays

at Techjays

4 candid answers
1 product
SREEHARIVASU S
Posted by SREEHARIVASU S
Coimbatore
6 - 10 yrs
Best in industry
Retrieval Augmented Generation (RAG)
skill iconPython
Generative AI
Agentic AI
Data Structures
+10 more

About Techjays

At Techjays, we build production-grade AI platforms for global clients. We operate at the intersection of backend engineering, distributed systems, and applied AI — delivering secure, scalable, and enterprise-ready intelligent systems. Our team has built and scaled products at Google, Akamai, NetApp, ADP, Cognizant, and Capgemini.

About the Role

This is not a feature-delivery role. We are looking for an AI Lead who can architect, own, and scale intelligent backend systems end-to-end. You will drive both technical direction and execution — working across LLM integrations, RAG pipelines, agentic AI workflows, and cloud-native backend systems for global clients.

What You'll Do

  • Architect and scale backend systems powering AI-driven applications
  • Design and implement RAG pipelines, AI agents, and LLM integrations
  • Own systems end-to-end — from architecture to deployment and scaling
  • Integrate and optimize LLMs (Claude, GPT, Gemini) for real-world production use cases
  • Build high-performance distributed systems with observability and cost efficiency
  • Lead backend and AI initiatives with strong technical ownership
  • Mentor engineers and raise the technical bar across teams
  • Collaborate with product and AI teams to deliver AI-native solutions

What We're Looking For

  • 6–10 years of strong backend engineering experience
  • Hands-on expertise in Python (FastAPI / Django / Flask)
  • Deep understanding of Generative AI and LLM-based systems
  • Strong experience with RAG pipelines and Vector Databases (Pinecone, FAISS, ChromaDB, Weaviate)
  • Solid knowledge of Agentic AI — building autonomous agents and multi-agent workflows
  • Proficiency in AWS or GCP in production environments
  • Experience with distributed systems, microservices, and system design
  • Strong grasp of Data Structures, Algorithms, and Design Patterns
  • Familiarity with WebSockets, Git, Linux/Unix, and CI/CD

Nice to Have

  • Experience with Anthropic Claude API and Claude Code
  • Familiarity with real-time data systems or streaming (Kafka, etc.)
  • MLOps and AI system lifecycle experience
  • Optimizing AI systems for latency, cost, and scalability

Who You Are

  • You think in systems, not just features
  • You take full ownership of what you build
  • You are comfortable navigating fast-moving, ambiguous environments
  • You stay updated with the latest in Generative AI and backend technologies
  • Strong communicator who can collaborate across teams and global clients

What We Offer

  • Competitive compensation (Best in Industry)
  • Work on production-grade AI systems used by global clients
  • Exposure to cutting-edge AI tools and frameworks
  • A culture that values clarity, integrity, and continuous growth
Read more
Leading provider of Capital Market solutions in India

Leading provider of Capital Market solutions in India

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
4 - 7 yrs
₹12L - ₹18L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconDocker
skill iconKubernetes
Object Oriented Programming (OOPs)
+2 more

Core Responsibilities:

  • Design & Development: Architect and implement scalable backend services and APIs using Python or Golang, ensuring high performance, resilience, and extensibility.
  • System Ownership: Take end-to-end ownership of critical modules, from design and development to deployment and support.
  • Technical Leadership: Conduct design and code reviews, enforce best practices, and mentor junior engineers to raise the team’s technical bar.
  • Collaboration: Work closely with product managers, architects, and other engineers to translate business requirements into technical solutions.
  • Performance & Reliability: Troubleshoot complex issues in production systems, identify root causes, and design sustainable long-term solutions.
  • Innovation: Evaluate new technologies, contribute to proof-of-concepts, and recommend tools that can improve developer productivity.
  • Process Improvement: Drive initiatives to improve coding standards, CI/CD pipelines, and automated testing practices.
  • Knowledge Sharing: Document designs, create technical guides, and share insights with the broader engineering team.


Experience and Expertise:

  • 4–7 years of backend development experience with Python or Golang.
  • Strong expertise in designing, developing, and scaling microservices and distributed systems.
  • Solid understanding of concurrency, multi-threading, and performance optimization.
  • Proficiency with databases (SQL/NoSQL), caching systems (Redis, Memcached), and messaging systems (Kafka, RabbitMQ, etc.).
  • Hands-on experience with Linux development, Docker, and Kubernetes.
  • Familiarity with cloud platforms (AWS/GCP/Azure) and related services.
  • Strong debugging, profiling, and optimization skills for production-grade systems.
  • Experience with AI-powered development tools is a strong plus; familiarity with concepts like 'agentic coding' for workflow automation or 'context engineering' for leveraging LLMs in system design is highly desirable.
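As a loose illustration of the caching-systems item above, here is a minimal in-process TTL cache; production systems would rely on Redis or Memcached, and the injected clock exists only to make expiry deterministic in the example:

```python
import time

# Minimal TTL cache sketching the expiry semantics that Redis or
# Memcached provide in production. Entries are lazily evicted on read.
class TTLCache:
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if self.clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value

# A fake clock makes the expiry behaviour deterministic here.
now = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])
cache.set("quote:INFY", 1450.25)
print(cache.get("quote:INFY"))  # 1450.25
now[0] = 31.0
print(cache.get("quote:INFY"))  # None
```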


Skills:

  • Strong problem-solving ability, with experience handling complex technical challenges.
  • Ability to lead technical initiatives and mentor junior engineers.
  • Excellent communication skills to collaborate with cross-functional teams and articulate trade-offs.
  • Self-motivated, proactive, and able to operate independently while aligning with team goals.
  • Passionate about engineering culture, quality, and developer productivity.


Read more
Leading provider of Capital Market solutions in India

Leading provider of Capital Market solutions in India

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹12L / yr
skill iconPython
skill iconGo Programming (Golang)
Linux/Unix
skill iconDocker
skill iconKubernetes
+3 more

Core Responsibilities:

  • Design, develop, and maintain backend services and APIs using Python or Golang.
  • Write high-quality, testable, and maintainable code with a focus on performance and scalability.
  • Implement automated tests and contribute to CI/CD pipelines.
  • Collaborate with product, QA, and DevOps teams for end-to-end feature delivery.
  • Troubleshoot production issues and provide timely resolutions.
  • Participate in design and architecture discussions to improve system efficiency.
  • Contribute to improving development processes, coding standards, and best practices.


Experience and Expertise:

  • 2–4 years of experience in backend development with Python or Golang.
  • Solid understanding of RESTful APIs, microservices, and distributed systems.
  • Strong knowledge of data structures, algorithms, and OOPS principles.
  • Hands-on experience with relational and/or NoSQL databases.
  • Familiarity with Linux development, Docker, and basic cloud concepts (AWS/GCP/Azure).
  • Proficiency with Git and version control workflows.
  • Familiarity with AI-powered development tools or exposure to projects involving large language models (LLMs) is a plus.


Skills:

  • Strong analytical and debugging skills with the ability to solve complex problems.
  • Good communication and collaboration skills across teams.
  • Ability to work independently with minimal supervision while being a strong team player.
  • Growth mindset – eagerness to learn new technologies and improve continuously.


Read more
Leading provider of Capital Market solutions in India

Leading provider of Capital Market solutions in India

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
1 - 2 yrs
₹2L - ₹7L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconDocker
skill iconKubernetes
Linux/Unix
+3 more

Core Responsibilities:

  • Design, develop, and maintain backend services using Python or Golang.
  • Write clean, efficient, and well-documented code following best practices.
  • Build and consume RESTful APIs and microservices.
  • Collaborate with QA, DevOps, and product teams for smooth feature delivery.
  • Participate in peer code reviews and technical discussions.
  • Debug and fix issues, ensuring system stability and performance.
  • Continuously learn and apply new technologies and tools in backend development.


Experience and Expertise:

  • 0–2 years of software development experience (internships or projects acceptable).
  • Proficiency in at least one backend programming language (Python or Golang).
  • Strong understanding of object-oriented programming and software fundamentals.
  • Knowledge of data structures, algorithms, and database concepts.
  • Familiarity with Linux-based development environments.
  • Exposure to Git and version control workflows.


Skills:

  • Strong analytical and problem-solving ability.
  • Willingness to learn, adapt, and take ownership.
  • Effective communication and teamwork skills.
  • Curiosity for emerging technologies, including AI-driven development, backend technologies, distributed systems, and modern engineering practices.
Read more
Bengaluru (Bangalore), Pune, Chennai
1 - 3 yrs
₹3L - ₹4L / yr
skill iconPython
Shell Scripting
IP Networking
Application Deployment

Application Deployment Engineers / Deployment Engineer – Video Analytics / CCTV Solutions / Application Implementation Engineer


Company Name

Paralaxiom Technologies Private Limited

Company Website

https://www.vast.vision/

https://www.linkedin.com/company/paralaxiom


Company details

Paralaxiom Technologies develops deep learning algorithms for video analytics-based security and compliance applications. We offer OCR products and image classification tools enhanced by machine learning algorithms and robust statistical analysis. We are among the earliest practitioners of AI software, with world-class credentials in these technologies. Our products include Paralaxiom VAST (Video Analytics and Surveillance Toolkit) and Paralaxiom AMPLE (our natural language processing platform).


In today's world, all premises, be they manufacturing plants, hospitals, offices, hotels, cities, airports, shops, or warehouses, are covered by CCTV cameras. Continuous monitoring through a dedicated command center or e-surveillance is proving to be both ineffective and costly to manage.

We have pioneered the use of AI / ML technologies to headlessly live monitor CCTV cameras to generate very accurate alerts, alarms & insights and deliver them directly to the right stakeholders for quick, proactive action.

We have worked closely with hundreds of customers from diverse industries, AI Hardware Partners, CCTV OEMs, VMSs, System Integrators & Consultants to bring to the world VAST, an enterprise-ready Video Surveillance as a Service (VSaaS) solution.


Location: Pune / Bangalore / Chennai

Mode of Working: Work From Office 

Days of Working: 5 Days a week


Responsibilities

Position Overview:

Paralaxiom is a video analytics and machine vision company; its VAST product line is a path-breaking product for safety, security, and operations in CCTV deployments.

We are looking for application engineers for this product line.

Experience: 1–2 years

Key Responsibilities:

1) Gathering information from customers on their needs and understanding how the VAST software matches their requirements

2) Designing the solution and installing the software

3) Making sure the VAST software continues to work properly after maintenance and testing

4) Taking notes on all aspects of the application for future upgrades and maintenance

5) Troubleshooting the software

6) Training the end users

7) Excellent knowledge of Python and shell scripting

8) Working knowledge of IP networking and troubleshooting
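A hypothetical sketch of combining the Python and IP-networking skills listed above: parse a camera stream URL and test TCP reachability (the address below is made up, and a real deployment check would go further, e.g. verifying the RTSP handshake):

```python
import socket
from urllib.parse import urlparse

# Reachability check of the kind a deployment engineer might script
# when troubleshooting cameras or NVRs on a site network.
def parse_stream(url, default_port=554):
    """Split an RTSP URL into (host, port), defaulting to RTSP's 554."""
    parts = urlparse(url)
    return parts.hostname, parts.port or default_port

def is_reachable(host, port, timeout=2.0):
    """Attempt a TCP connection; True if the device answers in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host, port = parse_stream("rtsp://10.0.0.21/stream1")
print(host, port)  # 10.0.0.21 554
# is_reachable(host, port) would probe the camera on a live network.
```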

We need someone with 1-2 years of experience with the following skillset:

Great communication skills

Debugging and analytical skills

Knowledge of hardware and software integration will be a plus

Knowledge of Camera NVR will be a plus

Hands-on system and functional testing will be a plus


Interview process: 3 video rounds + final discussion (F2F)


Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
8 - 15 yrs
Best in industry
skill iconJavascript
skill iconReact.js
skill iconNodeJS (Node.js)
TypeScript
skill iconAmazon Web Services (AWS)
+5 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in product development and have a decade's worth of experience building web and mobile applications across various domains. We follow core principles that guide our operations and believe in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy: provide value in order to seek value. We are committed to building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.


Roles and Responsibilities:

● Team Management: Lead, coach, and grow a team of 30-50 software engineers, tech leads, and QA engineers

● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies

● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals

● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration

● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans

● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement

● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.


Requirements:

● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role

● Proven experience in architecting and building web and mobile applications at scale

● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks

● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices

● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams

● Excellent problem-solving, communication, and organizational skills

● Nice to have:

  • Prior experience in working with startups or product-based companies
  • Experience mentoring tech leads and helping shape engineering culture
  • Exposure to AI/ML, data engineering, or platform thinking


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethics and culture.



If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Best in industry
skill iconPython
SQL
ETL
Google Cloud Platform (GCP)
Windows Azure
+1 more

We are seeking a skilled Data Engineer to join the AI Platform Capabilities team supporting the UDP Uplift program.

In this role, you will design, build, and test standardized data and AI platform capabilities across a multi-cloud environment (Azure & GCP).

You will collaborate closely with AI use case teams to develop:

  • Scalable data pipelines
  • Reusable data products
  • Foundational data infrastructure

Your work will support advanced AI solutions such as:

  • GenAI
  • RAG (Retrieval-Augmented Generation)
  • Document Intelligence

Key Responsibilities

  • Design and develop scalable ETL/ELT pipelines for AI workloads
  • Build and optimize data pipelines for structured & unstructured data
  • Enable context processing & vector store integrations
  • Support streaming data workflows and batch processing
  • Ensure adherence to enterprise data models, governance, and security standards
  • Collaborate with DataOps, MLOps, Security, and business teams (LBUs)
  • Contribute to data lifecycle management for AI platforms
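As a rough sketch of the vector store integrations mentioned above, the retrieval step behind RAG can be reduced to a cosine-similarity lookup; the documents and vectors here are invented, and a real system would use a managed vector database and an embedding model:

```python
import math

# Sketch of the retrieval step behind RAG context processing: store
# document embeddings, then return the document most similar to a
# query vector by cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "vector store": document name -> made-up embedding.
store = {
    "invoice_policy.md": [0.9, 0.1, 0.0],
    "travel_policy.md": [0.1, 0.8, 0.3],
}

def top_match(query_vec):
    return max(store, key=lambda doc: cosine(query_vec, store[doc]))

print(top_match([0.85, 0.2, 0.05]))  # invoice_policy.md
```

The retrieved document's text would then be injected into the LLM prompt as context, which is the "augmented" part of retrieval-augmented generation.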

Required Skills

  • 5–7 years of hands-on experience in Data Engineering
  • Strong expertise in Python and advanced SQL
  • Experience with GCP and/or Azure cloud-native data services
  • Hands-on experience with PySpark / Spark SQL
  • Experience building data pipelines for ML/AI workloads
  • Understanding of CI/CD, Git, and Agile methodologies
  • Knowledge of data quality, governance, and security practices
  • Strong collaboration and stakeholder management skills

Nice-to-Have Skills

  • Experience with Vector Databases / Vector Stores (for RAG pipelines)
  • Familiarity with MLOps / GenAIOps concepts (feature stores, model registries, prompt management)
  • Exposure to Knowledge Graphs / Context Stores / Document Intelligence workflows
  • Experience with DBT (Data Build Tool)
  • Knowledge of Infrastructure-as-Code (Terraform)
  • Experience in multi-cloud deployments (Azure + GCP)
  • Familiarity with event-driven systems (Kafka, Pub/Sub) & API integrations

Ideal Candidate Profile

  • Strong data engineering foundation with AI/ML exposure
  • Experience working in multi-cloud environments
  • Ability to build production-grade, scalable data systems
  • Comfortable working in cross-functional, fast-paced environments
Read more
Service based company

Service based company

Agency job
via Codemind Staffing Solutions by Krishna kumar
Chennai
9 - 14 yrs
₹20L - ₹30L / yr
databricks
Spark
Apache Spark
skill iconPython
ETL

Key Responsibilities

  • Architect and implement enterprise-grade Lakehouse solutions using Databricks
  • Design and deliver scalable batch and real-time data pipelines using Apache Spark (PySpark/SQL)
  • Build ETL/ELT pipelines, incremental data loads, and metadata-driven ingestion frameworks
  • Implement and optimize Databricks components: Delta Lake, Delta Live Tables, Autoloader, Structured Streaming, and Workflows
  • Design large-scale data warehousing solutions with 3NF and dimensional modeling
  • Establish data governance, security, and data quality frameworks, including Unity Catalog
  • Lead ML lifecycle management using MLflow and drive AI use cases (RAG, AI/BI)
  • Manage cloud-native deployments on Microsoft Azure and integrate with enterprise systems (e.g., ServiceNow)
  • Drive CI/CD, DevOps practices, and performance optimization of Spark workloads
  • Provide technical leadership, mentor teams, and ensure successful delivery
  • Collaborate with stakeholders to translate business requirements into scalable solutions
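The incremental data loads listed above can be illustrated, very roughly, in pure Python; this mimics the upsert semantics that Delta Lake's MERGE INTO provides on Databricks, with invented table and column names:

```python
# Pure-Python sketch of incremental-load (upsert/merge) semantics:
# rows from the new batch update existing keys in the target and
# insert keys the target has not seen, so each run is idempotent.
def merge_incremental(target: dict, batch: list) -> dict:
    for row in batch:
        target[row["id"]] = row  # update if present, insert otherwise
    return target

target_table = {1: {"id": 1, "status": "open"}}
incoming = [
    {"id": 1, "status": "closed"},   # matched key -> update
    {"id": 2, "status": "open"},     # new key -> insert
]
merge_incremental(target_table, incoming)
print(sorted(target_table), target_table[1]["status"])  # [1, 2] closed
```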


Required Skills & Experience

  • 10+ years in Data Engineering / Analytics / AI with strong delivery ownership
  • Deep expertise in the Databricks ecosystem (Notebooks, Delta Lake, Workflows, AI/BI, Apps, Genie)
  • Strong hands-on experience with:
      a. Apache Spark (performance tuning & scalability)
      b. Python and SQL
  • Proven experience in:
      a. Solution architecture and large-scale data platforms
      b. Data warehousing and advanced data modeling
      c. Batch and real-time processing systems
  • Experience with:
      a. Azure Databricks and Azure data services
      b. MLflow and MLOps practices
      c. ServiceNow or enterprise integrations
  • Exposure to AI technologies (RAG, LLM-based solutions)
  • Strong stakeholder management and leadership skills


Certifications (Preferred)

  • Databricks certifications aligned to data engineering and AI tracks, such as:
      a. Databricks Certified Data Engineer Associate (validates foundational ETL, Spark, and Lakehouse capabilities)
      b. Databricks Certified Data Engineer Professional (advanced expertise in pipeline design, optimization, and governance)
  • Certifications in Databricks Machine Learning or Generative AI tracks (e.g., ML Associate / Professional) for AI-driven use cases
  • Relevant cloud certifications in Microsoft Azure or Amazon Web Services for platform deployment and architecture


Read more
Bengaluru (Bangalore)
2 - 5 yrs
₹20.4L - ₹24L / yr
skill iconPython
API
SQL
Systems design
Software deployment

Location: Bangalore

Experience: 2–5 years

Type: Full-time | On-site

Open Roles: 2

Start: Immediate

Why this role exists

Most systems work at a low scale.

Very few survive real production load, complex workflows, and enterprise edge cases.

We are building a platform that must:

  • Scale from 500K → 20M+ interactions/month
  • Handle complex insurance workflows reliably
  • Become easier to deploy as it grows, not harder

This role exists to build the backend foundation that makes this possible.

What you’ll do

You will not just write services.

You will design and own core platform systems.

1. Scale the platform without breaking architecture

  • Scale from 50K → 2M+ interactions/month
  • Ensure:
      • High availability
      • Low latency
      • Fault tolerance
  • Avoid large rewrites — build systems that evolve cleanly

2. Build the workflow automation (WA) engine

  • Design a flexible system with:
      • States
      • Stages
      • Cohorts
      • Dynamic workflows
  • Ensure workflows:
      • Handle edge cases reliably
      • Can be configured easily
  • Move from:
      • Hardcoded flows → configurable execution engine
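A minimal, hypothetical sketch of the move from hardcoded flows to a configurable execution engine: allowed transitions are defined as data, with an audit-friendly history (the state names are illustrative, not a real policy model):

```python
# Configuration-driven workflow engine sketch: legal transitions live
# in data, so new insurance flows are configured rather than hardcoded.
CLAIM_FLOW = {
    "filed": ["under_review"],
    "under_review": ["approved", "rejected", "needs_documents"],
    "needs_documents": ["under_review"],
    "approved": [],
    "rejected": [],
}

class Workflow:
    def __init__(self, transitions, initial):
        self.transitions = transitions
        self.state = initial
        self.history = [initial]  # audit trail of every state visited

    def advance(self, next_state):
        if next_state not in self.transitions[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {next_state}")
        self.state = next_state
        self.history.append(next_state)

claim = Workflow(CLAIM_FLOW, "filed")
claim.advance("under_review")
claim.advance("needs_documents")
claim.advance("under_review")
claim.advance("approved")
print(claim.state, len(claim.history))  # approved 5
```

Because the flow is plain data, supporting a new insurer or use case means shipping a new transition table, not new code paths.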

3. Build the insurance-specific data layer

  • Design data models for:
      • Policy states
      • Claim workflows
      • Consent tracking
  • Ensure the system works across:
      • Multiple insurers
      • Multiple use cases
  • Build a platform-first data layer, not use-case-specific hacks
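One minimal, hypothetical way to picture such a data layer: policy state and consent modelled as explicit, insurer-agnostic types. All names here are invented for illustration, not taken from any real schema:

```python
from dataclasses import dataclass
from enum import Enum

class PolicyState(Enum):
    ACTIVE = "active"
    LAPSED = "lapsed"
    SURRENDERED = "surrendered"

@dataclass(frozen=True)
class Consent:
    customer_id: str
    channel: str        # e.g. "whatsapp", "sms"
    granted: bool

@dataclass
class Policy:
    policy_id: str
    insurer: str        # the same model serves multiple insurers
    state: PolicyState

    def can_contact(self, consent: Consent) -> bool:
        # Only reach out when the policy is live and consent was granted.
        return self.state is PolicyState.ACTIVE and consent.granted

p = Policy("P-1", "insurer-a", PolicyState.ACTIVE)
c = Consent("C-1", "whatsapp", True)
p.can_contact(c)  # -> True
```

Keeping state and consent as first-class types (rather than ad-hoc flags per use case) is what makes the layer platform-first.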

4. Make deployment and setup simple

  • Ensure workflows and data models are:
      • Easy to configure
      • Easy to launch
  • Reduce friction for:
      • Product teams
      • Deployment teams

5. Create a compounding data advantage

  • Build a data layer that:
      • Improves with every deployment
      • Captures structured signals
  • Ensure data becomes a long-term edge, not just storage

6. Own production reliability

  • Participate in on-call rotation across 3 engineers
  • Ensure:
      • Incidents are handled quickly
      • Root causes are fixed permanently
  • Build systems where reliability is shared, not individual

What success looks like

  • Platform scales to 2M+ interactions/month smoothly
  • Workflow engine supports complex, dynamic use cases
  • Data layer enables fast deployment across accounts
  • Edge cases are handled without constant firefighting
  • System becomes easier to use as it grows
  • Production issues are rare and predictable

Who you are

  • You have 2-5 years of backend engineering experience
  • You have built:
      • Scalable systems
      • Distributed services
  • You think in:
      • Systems
      • Data models
      • Trade-offs
  • You are comfortable owning:
      • Architecture
      • Production systems

What will make you stand out

  • Experience building:
      • Workflow engines
      • State machines
      • Data-heavy platforms
  • Strong understanding of:
      • System design
      • Distributed systems
      • Failure handling
  • Experience working in high-scale production environments

Why join

  • You will build the core backend of an AI platform
  • Your work directly impacts:
      • Scale
      • Reliability
      • Product capability
  • You will design systems that move from use-case specific → platform-level infrastructure

What this role is not

  • Not just API development
  • Not limited to feature-level work
  • Not disconnected from production realities

What this role is

  • A system architect
  • A builder of scalable platforms
  • A driver of long-term technical advantage

One question to self-evaluate

Can you design backend systems that scale, handle edge cases, and become easier to use as they grow?


Read more
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Selenium
skill iconJava
skill iconPython
Test Automation (QA)
Mobile App Testing (QA)
+2 more

Role Overview


As a Senior QA Engineer (Automation), you will drive product quality across all stages of development and deployment. You’ll take complete ownership of defining QA strategy, implementing robust automation frameworks, and ensuring every release meets our high standards of reliability, performance, and user delight.


This role is ideal for someone who thrives in a fast-paced startup environment, loves solving problems, and is passionate about building scalable and flawless user experiences.


Key Responsibilities

Define & Execute QA Strategy:

Develop and implement test strategies covering functional, regression, integration, and exploratory testing.


Automation Leadership:

Build and maintain scalable automation frameworks integrated into CI/CD pipelines to improve speed, reliability, and test coverage.


Collaborate Early:

Partner closely with Product and Engineering teams to ensure testable requirements and early QA involvement in the development cycle.


Release Readiness:

Own end-to-end release validation, including regression testing, defect triage, and final sign-off on product quality.


Quality Metrics & Reporting:

Define, track, and communicate key QA metrics (defect leakage, build health, test coverage) to drive data-backed improvements.


Performance & Security Testing:

Conduct basic performance and security validation to ensure system robustness.


Mentorship & Best Practices:

Guide junior QA engineers, promoting test design excellence, automation best practices, and continuous improvement.


Process Optimization:

Continuously enhance QA processes through retrospectives, automation expansion, and shift-left testing principles.


Documentation:

Maintain comprehensive documentation of test cases, strategies, bug reports, and quality incident postmortems.


What We’re Looking For

  • 5 - 10 years of QA experience in product-based startups, ideally in B2C environments.
  • Proven expertise in test automation (e.g., Selenium, Appium, Cypress, Playwright, etc.).
  • Strong understanding of CI/CD pipelines, API testing, and test design principles.
  • Hands-on experience with manual and exploratory testing.
  • Ability to handle multiple projects independently and drive them to completion.
  • High sense of ownership, accountability, and attention to detail.
  • Excellent communication and collaboration skills.
  • Willingness to work from the office (HSR Layout, Bangalore).


Why Join Us

  • Opportunity to impact millions of users in India’s devotional and spiritual space.
  • Work with a talented, passionate, and mission-driven team.
  • High ownership role with end-to-end accountability.
  • Fast-paced, collaborative, and growth-oriented culture.


Build seamless, trusted experiences that bring faith and technology together.


Read more
TestMu AI (Formerly LambdaTest)
Himanshi Tomer
Posted by Himanshi Tomer
Noida
1 - 3 yrs
₹10L - ₹25L / yr
skill iconGo Programming (Golang)
skill iconPython

Backend Developer


📍 Noida | 🕐 Full-Time | 🧭 Experience: 2–3 Years


The Mission

We aren't building traditional backend systems — we're powering the infrastructure behind Agentic Intelligence. TestMu AI is building the world's first AI-native platform where backend systems don't just serve requests, they enable autonomous decision-making, execution, and scale.

The name "TestMu" comes from our community conference. Our users and team aren't an audience — they're the heartbeat of what we build. We believe AI augments human potential. It doesn't replace it.


You'll be building the core backend systems that power AI-driven workflows — ensuring high performance, scalability, and reliability at every layer.


The Pillars of Impact

🚀 1. Core Backend & System Architecture (50%)


  • Build and scale high-performance backend services and APIs
  • Design efficient database schemas, query optimization, and data flows
  • Write clean, logical, production-grade code (Python, Golang, or similar)
  • Own system performance — latency, throughput, and reliability

⚙️ 2. Backend for AI Systems (30%)


  • Develop backend systems supporting AI agents and autonomous workflows
  • Handle large-scale data processing, async tasks, and event-driven systems
  • Integrate backend infrastructure seamlessly with AI/ML components

🧠 3. Scalability & Distributed Systems (20%)


  • Contribute to microservices architecture and service decomposition
  • Build fault-tolerant, highly available distributed systems
  • Optimize systems for high concurrency and real-time execution

Your Engineering Stack

  • Python / Golang: Building core backend services and logic-heavy systems
  • AWS / GCP: Deploying and scaling distributed backend infrastructure
  • Kafka / RabbitMQ: Handling asynchronous processing and event-driven workflows


The Bar

  • Core Backend Experience: 2–3 years of hands-on experience building APIs, backend systems, and scalable services
  • Problem-Solving Ability: Strong fundamentals in data structures, algorithms, and logical thinking
  • System Design Understanding: Ability to design scalable backend systems with clear architectural thinking
  • Ownership & Execution: Experience owning backend features end-to-end in a fast-paced environment


The Interview Loop · Screening for the Top 1%

  • Round I · Recruiter Screen: Evaluation of backend experience, problem-solving approach, and project depth
  • Round II · Hiring Manager: Deep dive into backend projects, APIs, databases, and system design thinking
  • Round III · Domain Lead: Live coding + backend problem-solving + discussion on scalability and distributed systems
  • Final · Leadership: Culture fit, ownership mindset, and ability to operate in a high-velocity startup environment



Your Growth Trajectory

TestMu AI is a high-growth environment where we promote based on complexity solved, not years of tenure. As a Backend Developer, you have a massive runway to scale from an Individual Contributor (IC) into a core Engineering Leadership role, working alongside pioneers in agentic intelligence.


Perks of the Future

  • Health & Wellness: 100% premium covered insurance for you + family (spouse, kids, parents) with annual check-ups.
  • Fuel for Innovation: Fresh, daily gourmet lunch and dinner served at our Noida HQ.
  • Seamless Transit: Safe, GPS-enabled cab facilities for eligible shifts (home-office-home).
  • POD Culture: Dedicated quarterly budgets for team-building, offsites, and collaborative celebrations.


Read more
LearnTube.ai

at LearnTube.ai

2 candid answers
Vinayak Sharan
Posted by Vinayak Sharan
Remote, Mumbai
3 - 6 yrs
₹14L - ₹32L / yr
skill iconPython
FastAPI
skill iconDocker
skill iconAmazon Web Services (AWS)
SQL
+3 more

Role Overview:


As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.


Skip the wait and get noticed faster by completing our AI-powered screening. Click this link to start your quick interview. It only takes a few minutes and could be your shortcut to landing the job! https://bit.ly/LT_Python


What You'll Do:


At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As a Backend Engineer, your roles and responsibilities will include:

  • Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
  • Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
  • Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
  • Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
  • Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
  • Own Reliability – Instrument with Prometheus / Grafana, chase 99.9% uptime, trim infra spend.
  • Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
  • Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.
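To make the quiz-scoring idea concrete, here is a deliberately simplified, hypothetical scoring function. The production engine is asynchronous and event-driven; none of these names come from LearnTube's actual codebase:

```python
def score_quiz(answers: dict, key: dict) -> float:
    """Return the fraction of questions answered correctly.

    `answers` maps question id -> submitted answer; `key` maps
    question id -> correct answer. Unanswered questions score zero.
    """
    if not key:
        return 0.0
    correct = sum(1 for q, a in key.items() if answers.get(q) == a)
    return correct / len(key)

score_quiz({"q1": "b", "q2": "c"}, {"q1": "b", "q2": "d"})  # -> 0.5
```

In a real service this pure function would sit behind an async endpoint so scoring stays trivially testable while the I/O around it scales.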


What makes you a great fit?


Must-Haves:

  • 3+ yrs Python back-end experience (FastAPI)
  • Strong with Docker & container orchestration
  • Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
  • SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals

Nice-to-Haves

  • Kubernetes (k8s) at scale, Terraform
  • Experience with AI/ML inference services (LLMs, vector DBs)
  • Go / Rust for high-perf services
  • Observability: Prometheus, Grafana, OpenTelemetry


About Us: 


At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:

  • AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
  • Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.


Meet the Founders: 


LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.


Why Work With Us? 


At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:

  • Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
  • Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
  • Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
  • Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
  • Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
  • Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
Read more
Optimo Capital

at Optimo Capital

2 candid answers
Ajinkya Pokharkar
Posted by Ajinkya Pokharkar
Bengaluru (Bangalore)
2 - 4 yrs
₹5.5L - ₹12L / yr
skill iconPython
skill iconReact.js
skill iconJavascript
RESTful APIs
skill iconPostgreSQL
+7 more

About us:

Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).

Our mission is to serve the underserved MSME businesses in India with their credit needs. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap by employing a phygital model (physical branches + digital decision-making).

As a technology and data-first company, tech and data enthusiasts play a crucial role in building the infrastructure at Optimo, and help the company thrive.


What we offer:

Join our dynamic startup team as a Full Stack Developer and play a crucial role in web application & API developments, customer journeys, tech integrations, building robust credit risk and underwriting decision engines, cloud infrastructure, and more.

This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in technology, software, system architecture, and other design aspects bring out the best in you and help us build the best for the company.

This environment will not only offer you a steep learning curve but also allow you to experience the direct impact of your technological contributions. In addition, we offer industry-standard compensation.


What we look for:

We are looking for individuals with strong proficiency in Python, React, and Django. Any experience in a startup, front-end/back-end development, tech-integrations, or open-source contributions will be highly valued.

We focus not only on your skills but also on your attitude and your hunger to learn, grow, lead, and thrive—both as an individual and as part of a team. We encourage taking on challenges, learning new technologies, understanding, building, and implementing them within a short period of time. Your willingness to put in the extra effort to build the best systems will be highly appreciated.


Skills:

Excellent proficiency with the ability to write clean, robust, production-level code. Experience in designing, developing, and maintaining web apps and rule engines is required. At least one year of experience as a developer in any engineering / software-based role is required.


1) Frontend Development

  • JavaScript: Strong proficiency in JavaScript, including ES6+ features
  • React: Experience building complex user interfaces using React and its ecosystem (e.g., Redux, Context API)
  • HTML/CSS: Solid understanding of HTML5 and CSS3 for creating responsive and accessible web pages


2) Backend Development

  • Python: Proficiency in Python for server-side development
  • Django: Working knowledge in Django, Django Rest Framework
  • Flask (or FastAPI): Experience building RESTful APIs using Flask or FastAPI is a plus


3) REST APIs: A strong understanding of APIs is required, along with prior experience in API development or integration. Writing REST APIs from scratch is highly desirable.


4) Databases: A basic understanding of both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases is required. Basic knowledge of database management, optimization, and query design is expected.


5) Git: Proficiency in Git is essential, with experience in branching, merging, pull requests, and conflict resolution. Experience in collaborative projects using Git is highly valued.


6) Good to have: 

  • Basic understanding of data pipelines/ETLs, dashboarding, and AWS is beneficial but not required.
  • Experience in building WhatsApp chat/flow journeys, working with maps, and creating data layers (e.g., Google Maps API, Mapbox) is highly valued (not mandatory).


What you'll be working on:

  1. Design and build systems focused on creating straight-through processes for lending (specifically property loans), from customer onboarding to disbursement, with an emphasis on accurate and efficient credit and risk assessment.
  2. Take projects from ideation to production, including web applications, rule engines, third-party API integrations, and other technology developments.
  3. Take initiative and ownership of engineering projects, ensuring a seamless user experience.
  4. Manage and coordinate the cloud infrastructure and application setup, including source code repositories, CI/CD pipelines, servers, and deployments.


Other Requirements:

  1. Availability for full-time work in Bangalore. Advantage for immediate joiners.
  2. Strong passion for technology and problem-solving.
  3. Ability to translate requirements into intuitive interfaces is highly appreciated.
  4. At least 1 year of industry experience in a technical role specifically as a developer is a must.
  5. Self-motivated and capable of working both independently and collaboratively.



If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.


Read more
Incubyte

at Incubyte

4 recruiters
Sandli Srivastava
Posted by Sandli Srivastava
Remote only
6 - 9 yrs
Best in industry
skill iconPython
skill iconReact.js
Artificial Intelligence (AI)
Generative AI

About Us

We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.


We thrive to: 

  • Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
  • Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
  • Empower clients to deliver value quickly and frequently to their end users.
  • Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
  • Raise the bar of software craft by setting a new standard for the community.


Job Description

This is a remote position.


Our Core Values

  • Quality with Pragmatism: We aim for excellence with a focus on practical solutions.  
  • Extreme Ownership: We own our work and its outcomes fully.  
  • Proactive Collaboration: Teamwork elevates us all.  
  • Pursuit of Mastery: Continuous growth drives us.  
  • Effective Feedback: Honest, constructive feedback fosters improvement.  
  • Client Success: Our clients’ success is our success. 


Experience Level

This role is ideal for engineers with 6+ years of hands-on software development experience, particularly in Python and ReactJs at scale. 

 

Role Overview 

If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!   


What You'll Do

  • Write Tests First: Start by writing tests to ensure code quality 
  • Clean Code: Produce self-explanatory, clean code with predictable results 
  • Frequent Releases: Make frequent, small releases 
  • Pair Programming: Work in pairs for better results 
  • Peer Reviews: Conduct peer code reviews for continuous improvement 
  • Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes 
  • Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines 
  • Never Stop Learning: Commit to continuous learning and improvement


  • AI-First Development Focus:
      • Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
      • Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
      • Apply prompt engineering techniques to get the best results from AI coding assistants.
      • Evaluate AI-generated code/tools for correctness, performance, and security before merging.
      • Continuously explore and stay ahead by experimenting with and integrating new AI-powered tools and workflows as they emerge.
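A tiny sketch of the test-first habit listed above: the test exists before the implementation it exercises, and the implementation is written just to make it pass. The slugify function is an invented example, not part of any client codebase:

```python
# Written first: the test pins down the behaviour we want.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Written second: the minimal implementation that makes the test pass.
import re

def slugify(text: str) -> str:
    # Lowercase, keep alphanumeric runs, join with hyphens.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # in practice, pytest would collect and run this
```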



Requirements


What We're Looking For

  • Proficiency in some or all of the following: ReactJS,  JavaScript, Object Oriented Programming in JS
  • 6+ years of Object-Oriented Programming with Python or equivalent
  • 6+ years of experience working with relational (SQL) databases
  • 6+ years of experience using Git to contribute code as part of a team of Software Craftspeople


  • AI Skills & Mindset:
      • Power user of AI-assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
      • Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
      • Ability to critically evaluate AI-generated code for logic, maintainability, performance, and security.
      • Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
      • An AI evaluation mindset, balancing AI speed with human judgment for robust solutions.



Benefits


What We Offer

  • Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
  • Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
  • Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
  • Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
  • Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
  • And More: Extra perks to support your well-being and professional growth.


Work Environment 

  • Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
  • Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
  • Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.

 

Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Read more
Jaipur
5 - 10 yrs
₹1L - ₹11L / yr
Software Testing (QA)
Manual testing
Test Automation (QA)
pytest
skill iconPython
+1 more

POSITION: Sr. QA Engineer


We are looking for a seasoned and results-driven Senior QA Engineer with 5 to 7 years of experience in Manual and Automation Testing. The candidate should have deep expertise in QA processes, strong automation skills using Python or equivalent, and the ability to lead quality initiatives for our core product suite. You won't just be finding bugs — you will be building a resilient quality ecosystem that leverages modern tools.


What You’ll Be Doing:

● Understand business requirements and convert them into test scenarios and test cases

● Perform Manual Testing including Functional, Regression, Integration, & System Testing

● Develop, maintain, and execute Automation Scripts using Python

● Work closely with Developers, Product Managers, and QA team members

● Lead requirement analysis, test planning, and test case reviews

● Contribute to improving QA processes and automation coverage

● Participate in sprint planning, retrospectives, and cross-functional reviews

● Identify, report, and track defects using defect management tools; manage triage and resolution with development teams

● Catch edge cases before they become production issues

● Coordination of release processes

Automation Skills:

● Maintain and extend robust Automation Frameworks (PyTest / Selenium) for UI and backend services, including design patterns and CI/CD integration

● Monitor nightly automation runs and troubleshoot defects

● Ability to design, extend, debug, and maintain test frameworks independently
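To illustrate the framework shape (page objects plus plain pytest-style test functions), here is a sketch with a fake driver standing in for Selenium so it runs without a browser. All class names and the URL are hypothetical:

```python
class FakeDriver:
    """Stand-in for selenium.webdriver, so the pattern runs anywhere."""
    def __init__(self):
        self.fields = {}
        self.url = None

    def get(self, url):
        self.url = url

class LoginPage:
    URL = "https://example.test/login"  # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, user, password):
        self.driver.fields["user"] = user
        self.driver.fields["password"] = password
        return bool(user and password)

# pytest-style test: a plain function with assertions, no framework glue.
def test_login_succeeds_with_credentials():
    page = LoginPage(FakeDriver()).open()
    assert page.login("qa-user", "secret")
    assert page.driver.url == LoginPage.URL

test_login_succeeds_with_credentials()  # pytest would collect this automatically
```

Swapping FakeDriver for a real Selenium WebDriver is the only change needed to run the same tests against a browser, which is what keeps the framework maintainable.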

API and Database Testing:

● Perform contract testing and functional validation of REST APIs using Postman or similar tools.

● Write complex SQL queries to validate data pipelines and migrations
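As an example of the kind of SQL validation described, the checks below compare row counts and key coverage between a source and target table after a migration. sqlite3 stands in for the real warehouse, and the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE dst (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO dst VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

# Check 1: row counts must match after the migration.
src_count = conn.execute("SELECT COUNT(*) FROM src").fetchone()[0]
dst_count = conn.execute("SELECT COUNT(*) FROM dst").fetchone()[0]
assert src_count == dst_count

# Check 2: no source row may be missing from the target (anti-join).
missing = conn.execute(
    "SELECT s.id FROM src s LEFT JOIN dst d ON s.id = d.id WHERE d.id IS NULL"
).fetchall()
assert missing == []
```

The same count-and-anti-join pattern scales up to warehouse SQL; only the connection and dialect change.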

Qualifications:

● Strong understanding of Software Testing concepts

● Experience in writing Test Cases and Test Scenarios

● Experience in Defect Tracking tools (JIRA, etc.)

● Experience in CI/CD tools (Jenkins, GitHub Actions, GitLab CI)

● Experience in Agile / Scrum methodology

● Strong analytical and problem-solving skills with a 'break-it' mentality

● Good communication skills — ability to articulate quality risks to non-technical stakeholders

● Self-motivated, quick learner, and proactive in driving quality culture

● Strong team player with empathy to help developers 'fix-it'

● Exposure to Agentic AI testing frameworks

REPORTING: This position will report to Sr. Project Manager or as assigned by Management.

EMPLOYMENT TYPE: Full-Time, Permanent

LOCATION: Jaipur (Work from Office)

SHIFT TIMINGS: 10:00 AM - 07:00 PM IST

WHO WE ARE:

SalesIntel is an agentic pipeline generation platform that helps go-to-market teams focus on accounts that are ready to buy and the buyers who matter most. We enable thousands of users by turning buying signals into actionable insights that drive revenue.

For more information, please visit – www.salesintel.io

WHAT WE OFFER:

SalesIntel’s workplace is all about diversity. Different countries and cultures are represented in our workforce. We are growing at a fast pace and our work environment is constantly evolving with changing times. We motivate our team to better themselves by offering all the good stuff you’d expect like Holidays, Paid Leaves, Bonuses, Incentives, Medical Policy and company-paid Training Programs.

SalesIntel is an Equal Opportunity Employer. We prohibit discrimination and harassment of any type and offer equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Read more
Remote, Noida, Gurugram, Pune, Nagpur, Jaipur, Gandhinagar
8 - 14 yrs
₹12L - ₹18L / yr
skill iconPython
SQL
PySpark
databricks
Snow flake schema
+6 more

Senior Data Engineer (Databricks, BigQuery, Snowflake)

Experience: 8+ Years in Data Engineering

Location: Remote | Onsite (Noida, Gurgaon, Pune, Nagpur, Jaipur, Gandhinagar)

Budget: Open / Competitive


Job Summary:

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data solutions that support advanced analytics and machine learning initiatives. You will lead the development of reliable, high-performance data systems and collaborate closely with data scientists to enable data-driven decision-making.

In this role, we expect a forward-thinking professional who utilizes AI-augmented development tools (such as Cursor, Windsurf, or GitHub Copilot) to increase engineering velocity and maintain high code standards in a modern enterprise environment.


Key Responsibilities:

  • Scalable Pipelines: Design, develop, and optimize end-to-end data pipelines using SQL, Python, and PySpark.
  • ETL/ELT Workflows: Build and maintain workflows to transform raw data into structured, analytics-ready datasets.
  • ML Integration: Partner with data scientists to deploy and integrate machine learning models into production environments.
  • Cloud Infrastructure: Manage and scale data infrastructure within AWS and Azure ecosystems.
  • Data Warehousing: Utilize Databricks and Snowflake for big data processing and enterprise warehousing.
  • Automation & IaC: Implement workflow orchestration using Apache Airflow and manage infrastructure as code via Terraform.
  • Performance Tuning: Optimize data storage, retrieval, and system performance across data warehouse platforms.
  • Governance & Compliance: Ensure data quality and security using tools like Unity Catalog or Hive Metastore.
  • AI-Augmented Development: Integrate AI tools and LLM APIs into data pipelines and use AI IDEs to streamline debugging and documentation.
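To sketch the raw-to-analytics transformation in the ETL/ELT bullet, here is a plain-Python stand-in for what would be a groupBy/agg on a PySpark DataFrame in the real pipeline; the event fields are invented for illustration:

```python
from collections import defaultdict

raw_events = [
    {"user": "a", "event": "click", "ms": 120},
    {"user": "a", "event": "click", "ms": 80},
    {"user": "b", "event": "view",  "ms": 40},
]

def to_analytics_rows(events):
    """Aggregate raw events into one analytics-ready row per (user, event)."""
    agg = defaultdict(lambda: {"count": 0, "total_ms": 0})
    for e in events:
        key = (e["user"], e["event"])
        agg[key]["count"] += 1
        agg[key]["total_ms"] += e["ms"]
    return [
        {"user": u, "event": ev, "count": v["count"], "avg_ms": v["total_ms"] / v["count"]}
        for (u, ev), v in sorted(agg.items())
    ]

rows = to_analytics_rows(raw_events)
# rows[0] -> {"user": "a", "event": "click", "count": 2, "avg_ms": 100.0}
```

In Spark the same step would be roughly `df.groupBy("user", "event").agg(count("*"), avg("ms"))`, with partitioning and storage handled by the platform.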


Technical Requirements:

  • Experience: 8+ years of core Data Engineering experience in large-scale enterprise or consulting environments.
  • Languages: Expert proficiency in SQL and Python for complex data processing.
  • Big Data: Hands-on experience with PySpark and large-scale distributed computing.
  • Architecture: Strong understanding of ETL frameworks, data pipeline architecture, and data warehousing best practices.
  • Cloud Platforms: Deep working knowledge of AWS and Azure.
  • Modern Tooling: Proven experience with Databricks, Snowflake, and Apache Airflow.
  • Infrastructure: Experience with Terraform or similar IaC tools for scalable deployments.
  • AI Competency: Proficiency in using AI IDEs (Cursor/Windsurf) and integrating AI/ML models into production data flows.


Preferred Qualifications:

  • Exposure to data governance and cataloging tools (e.g., Unity Catalog).
  • Knowledge of performance tuning for massive-scale big data systems.
  • Familiarity with real-time data processing frameworks.
  • Experience in digital transformation and sustainability-focused data projects.
Metron Security Private Limited
Chanchal Kale
Posted by Chanchal Kale
Pune, Bengaluru (Bangalore)
2.5 - 6 yrs
₹3L - ₹10L / yr
skill iconNodeJS (Node.js)
skill iconGo Programming (Golang)
skill iconPython
Data Structures
CI/CD
+1 more

Job Summary:


We are looking for a highly motivated and skilled Software Engineer to join our team.

This role requires a strong understanding of the software development lifecycle, proficiency in coding, and excellent communication skills.

The ideal candidate will be responsible for production monitoring, resolving minor technical issues, collecting client information, providing effective client interactions, and supporting our development team in resolving challenges.



Key Responsibilities:


Client Interaction: Serve as the primary point of contact for client queries, provide excellent communication, and ensure timely issue resolution.

Issue Resolution: Troubleshoot and resolve minor issues related to software applications in a timely manner.

Information Collection: Gather detailed technical information from clients, understand the problem context, and relay the information to the development leads for further action.

Collaboration: Work closely with development leads and cross-functional teams to provide timely support and resolution for customer issues.

Documentation: Document client issues, actions taken, and resolutions for future reference and continuous improvement.

Software Development Lifecycle: Be involved in maintaining, supporting, and optimizing software through its lifecycle, including bug fixes and enhancements.

Automating Redundant Support Tasks (good to have): Ability to automate redundant, repetitive support tasks.

Required Skills and Qualifications:



Mandatory Skills:


Expertise in at least one object-oriented programming language or mainstream stack (Python, Java, C#, C++, React.js, Node.js).

Good knowledge of data structures and their correct usage.

Open to learn any new software development skill if needed for the project.

Ability to align with and utilize the core enterprise technology stacks and integration capabilities throughout transition states.

Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.

Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.

Good understanding of the implications of technical and design decisions.

Experience architecting & estimating deep technical custom solutions & integrations.



Added advantage:


You have developed software using web technologies.

You have handled a project from start to end.

You have worked on an Agile development project and have experience writing and estimating User Stories.

Communication Skills: Excellent verbal and written communication skills, with the ability to clearly explain technical issues to non-technical clients.

Client-Facing Experience: Strong ability to interact with clients, gather necessary information, and ensure a high level of customer satisfaction.

Problem-Solving: Quick-thinking and proactive in resolving minor issues, with a focus on providing excellent user experience.

Team Collaboration: Ability to collaborate with development leads, engineering teams, and other stakeholders to escalate complex issues or gather additional technical support when required.



Preferred Skills:


Familiarity with Cloud Platforms and Cyber Security tools: Knowledge of cloud computing platforms and services (AWS, Azure, Google Cloud) and Cortex XSOAR, SIEM, SOAR, XDR tools is a plus.

Automation and Scripting: Experience with automating processes or writing scripts to support issue resolution is an advantage.



TalentXO
Bengaluru (Bangalore)
4 - 8 yrs
₹27L - ₹30L / yr
Camunda Developer
skill iconPython
Backend Development
Microservices
REST API
+2 more

Role & Responsibilities

We are looking for a hands-on Camunda Developer with strong experience in workflow orchestration and backend development. The ideal candidate should be able to design, build, and optimize end-to-end business processes using Camunda (preferably Camunda 8) and work closely with engineering and business teams to implement scalable and resilient workflows.

Key Responsibilities:

  • Translate business requirements into BPMN workflows using Camunda (preferably Camunda 8)
  • Design and implement end-to-end process orchestration across systems
  • Build and manage service integrations (REST APIs, event-driven systems)
  • Develop and maintain Zeebe workers / microservices (Python)
  • Collaborate with stakeholders to refine workflows and handle edge cases
  • Implement error handling, retries, and compensation mechanisms
  • Analyse and improve workflows for scalability, reliability, and performance
  • Ensure data consistency and idempotent process execution
  • Work with cross-functional teams including data and analytics for process observability
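
The error-handling, retry, and idempotency responsibilities above can be sketched in plain Python. This is a framework-free illustration: `completed` is a hypothetical stand-in for durable state (for example a database table keyed by business key), and a real Camunda 8 worker would get job keys and retry counts from the Zeebe broker.

```python
import time

def run_idempotent(task_fn, key, completed, retries=3, base_delay=0.5):
    """Run task_fn at most once per business key, retrying transient failures."""
    if key in completed:
        return completed[key]  # idempotent: re-delivery of the same key is a no-op
    delay = base_delay
    for attempt in range(1, retries + 1):
        try:
            result = task_fn()
            completed[key] = result  # record completion so retries do not re-run it
            return result
        except Exception:
            if attempt == retries:
                raise  # exhausted: surface the failure for incident handling
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
```

In Camunda terms, the final re-raise is roughly where a worker would let the broker create an incident or trigger a BPMN compensation path.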

Ideal Candidate

  • Strong Senior Camunda Developer / Workflow Orchestration Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of hands-on experience in backend development and workflow systems, demonstrable through production-grade work on business process automation or backend service development.
  • Mandatory (Experience 2) – Must have strong hands-on experience with Camunda and BPMN 2.0, including designing, building, and deploying end-to-end business process workflows in production.
  • Mandatory (Experience 3) – Must have hands-on experience with Zeebe workers and the Camunda 8 stack, built and maintained as part of real orchestration systems.
  • Mandatory (Experience 4) – Must have strong production-level coding skills in Python, used for building and maintaining Zeebe workers and microservices.
  • Mandatory (Experience 5) – Must have experience designing and working within microservices architecture and distributed systems, with clear understanding of service decomposition, inter-service communication, and distributed system failure modes.
  • Mandatory (Experience 6) – Must have hands-on experience building and consuming REST APIs and working with event-driven systems (message brokers, pub/sub, event streams).
  • Mandatory (Skills) – Must have strong debugging and problem-solving skills in production workflow environments, with specific examples of resolving complex issues such as stuck processes, race conditions, or data inconsistency bugs.
  • Preferred (Experience 1) – Exposure to cloud platforms (AWS / GCP / Azure) and experience with data platforms (e.g., Snowflake).
  • Preferred (Experience 2) – Understanding of finance-related workflows (billing, reconciliation, etc.).


Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 8 yrs
Best in industry
skill iconNodeJS (Node.js)
skill iconPython
Dialog Flow
rasa
yellow.ai
+1 more

Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.

You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.


Key Responsibilities

  • Design, develop, test, debug, and maintain chatbot and virtual agent applications
  • Collaborate with business stakeholders to define and translate requirements into technical solutions
  • Analyze large volumes of conversational data to improve chatbot accuracy and performance
  • Develop automation workflows for data handling and refinement
  • Train and optimize chatbots using historical chat logs and user-generated content
  • Ensure solutions align with enterprise architecture and best practices
  • Document solutions, workflows, and technical designs clearly

Required Skills

  • Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
  • Experience with one or more AI/NLP platforms such as Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI, Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, or Converse.ai
  • Strong programming knowledge in Python, JavaScript, or Node.js
  • Experience training chatbots using historical conversations or large-scale text datasets
  • Practical knowledge of formal syntax and semantics, corpus analysis, and dialogue management
  • Strong written communication skills
  • Strong problem-solving ability and willingness to learn emerging technologies

Nice-to-Have Skills

  • Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
  • Experience building voice apps for Amazon Alexa or Google Home
  • Experience with Test-Driven Development (TDD) and Agile methodologies
  • Ability to design and implement end-to-end pipelines for AI-based conversational applications
  • Experience in text mining, hypothesis generation, and historical data analysis
  • Strong knowledge of regular expressions for data cleaning and preprocessing
  • Understanding of API integrations, SSO, and token-based authentication
  • Experience writing unit test cases as per project standards
  • Knowledge of HTTP, REST APIs, sockets, and web services
  • Ability to perform keyword and topic extraction from chat logs
  • Experience training and tuning topic modeling algorithms such as LDA and NMF
  • Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
  • Experience with NLP frameworks such as NLTK and spaCy
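
Several of the items above (regex-based cleaning, keyword extraction from chat logs) can be sketched in a few lines of standard-library Python. The stopword list and sample messages are illustrative only.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "to", "is", "my", "i", "and", "see"}

def clean_message(text):
    """Normalize a chat message: strip URLs and punctuation, collapse spaces."""
    text = re.sub(r"https?://\S+", " ", text)         # remove URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())  # keep lowercase alphanumerics
    return re.sub(r"\s+", " ", text).strip()

def top_keywords(messages, k=3):
    """Count cleaned tokens across messages and return the k most frequent."""
    counts = Counter(
        word
        for message in messages
        for word in clean_message(message).split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(k)]
```

In practice, such counts would feed topic models (LDA/NMF) or intent-training data rather than be used directly.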
TIFIN FINTECH INDIA

at TIFIN FINTECH INDIA

1 candid answer
1 recruiter
Vrishali Mishra
Posted by Vrishali Mishra
Mumbai
5 - 8 yrs
Best in industry
skill iconPython
skill iconReact.js
skill iconFlutter


About TIFIN

TIFIN is an AI-powered fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, and Motive Partners.

We are building engaging wealth experiences that help improve financial lives through AI and investment intelligence-powered personalization. Our mission is to transform the world of wealth by combining technology, behavioral design, and investment intelligence to deliver better outcomes for investors.

At TIFIN, we use software and APIs to create personalized financial experiences, and algorithmic intelligence to power smarter financial decisions.


Our Values: Go with your GUT

  • Grow at the Edge – We are driven by personal growth, step outside our comfort zones, and strive to be the best version of ourselves with self-awareness and integrity.
  • Understanding through Listening and Speaking the Truth – We value transparency, authenticity, and radical candor to create shared understanding and strong decision-making.
  • Win for TeamWin – We take ownership, stay in our genius zones, and work together with energy and commitment to succeed as a team.


Role Overview

We are looking for a talented and hands-on Fullstack Engineer to join our growing engineering team in Mumbai. This role is ideal for someone who enjoys building across the stack — from scalable backend systems and intuitive web applications to mobile-first experiences.

You will work closely with cross-functional teams to build high-impact products that power modern wealth and financial experiences. This is a high-ownership role where your work will directly influence product quality, user experience, and platform scalability.

Key Responsibilities

  • Design, build, and maintain scalable full-stack applications using Python on the backend and React.js on the frontend.
  • Develop and ship mobile features using Flutter, ensuring a seamless cross-platform experience across iOS and Android.
  • Collaborate with Product, Design, Data Science, and Engineering teams to translate business requirements into robust technical solutions.
  • Build and optimize RESTful APIs, backend services, and reusable frontend components.
  • Ensure high standards of performance, security, scalability, and reliability across applications.
  • Take end-to-end ownership of features from design and development to testing, deployment, and monitoring.
  • Participate in code reviews, architecture discussions, sprint planning, and engineering best practices.
  • Contribute to internal documentation, development processes, and knowledge sharing across teams.

Required Skills & Qualifications

  • 5+ years of experience in fullstack software development.
  • Strong hands-on experience with Python and backend frameworks such as FastAPI, Django, or Flask.
  • Solid experience with React.js, including hooks, component-based architecture, state management, and performance optimization.
  • Practical hands-on experience building mobile applications using Flutter.
  • Experience working with REST APIs, microservices, and modern frontend/backend integration patterns.
  • Good understanding of relational and/or NoSQL databases such as PostgreSQL, MySQL, MongoDB, or Redis.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Experience with Docker, CI/CD pipelines, Git, and agile development workflows.
  • Strong grasp of software engineering fundamentals, system design, and clean coding practices.

Good to Have

  • Experience in fintech, wealth management, or financial services.
  • Exposure to GraphQL, WebSockets, or real-time applications.
  • Familiarity with AI/ML integrations or experience working closely with Data Science teams.
  • Published apps or prior experience delivering production-grade mobile applications.
  • Startup experience or experience working in high-growth product environments.

What We Offer

  • Opportunity to work at the intersection of AI, fintech, and investment intelligence.
  • A collaborative and high-ownership work culture.
  • Competitive compensation and performance-linked incentives.
  • Exposure to a global fintech ecosystem backed by top financial institutions and investors.
  • The chance to build products that meaningfully improve financial lives.


TIFIN is an equal opportunity employer and values diverse perspectives, experiences, and backgrounds.

TIFIN FINTECH INDIA

at TIFIN FINTECH INDIA

1 candid answer
1 recruiter
Vrishali Mishra
Posted by Vrishali Mishra
Remote only
10 - 15 yrs
Best in industry
skill iconPython
PyTorch
Agentic AI
Chatbot
Fine-tuning LLMs
+2 more

About TIFIN:


TIFIN is a cutting-edge fintech platform transforming financial lives through AI and investment intelligence. Backed by industry leaders like JP Morgan and Morningstar, we're dedicated to personalizing wealth experiences, akin to how AI has revolutionized entertainment, but with the critical responsibility of delivering superior financial outcomes. We blend design and behavioral science with investment intelligence to create engaging software and APIs that empower better investor outcomes. Our mission is to recognize each individual's unique needs and goals, matching them to tailored financial advice and investments across our marketplace and various divisions.


Our Values: Go with your GUT

  • Grow at the Edge: We embrace personal growth, stepping out of comfort zones, and putting ego aside to unlock genius. We operate with self-awareness and integrity, striving for excellence without excuses.
  • Understanding through Listening and Speaking the Truth: Transparency is key. We communicate with radical candor, authenticity, and precision to foster shared understanding. We challenge ideas, but once a decision is made, we commit fully.
  • Win for Teamwin: We thrive in our genius zones and take full ownership of our work. We inspire each other with energy and attitude, collaborating seamlessly to achieve collective success.

The Opportunity:

TIFIN is seeking a highly skilled and experienced LLM Engineer to join our innovative, remote-first team. This is a unique opportunity to shape the future of personalized financial experiences by leveraging your expertise in Large Language Models (LLMs) and Generative AI. As an early-stage startup, we're looking for an independent contributor and leader who is ready to build systems from the ground up and own outcomes.

What You'll Do:

  • Collaborate closely with design and product teams to craft intuitive and engaging conversational AI experiences for our users.
  • Work autonomously to deliver high-quality features, taking full ownership of project outcomes.
  • Analyze and leverage our extensive data to create highly personalized experiences.
  • Fine-tune LLMs with proprietary data to enhance model performance and relevance for our specific use cases.
  • Implement various RAG (Retrieval-Augmented Generation) approaches to augment LLMs with relevant, up-to-date, and domain-specific information.
  • Act as both a technical leader and an individual contributor, embodying the startup mentality of doing "whatever it takes" to succeed.
  • Design and set up new workflows, systems, and tools from scratch, with support from the wider team.
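
The RAG approach mentioned above can be illustrated with a deliberately tiny sketch. Token overlap stands in for real embedding similarity, and the prompt format is hypothetical; a production system would use a vector store and an LLM API.

```python
def retrieve(query, docs, k=2):
    """Rank documents by token overlap with the query (embedding stand-in)."""
    query_tokens = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(query_tokens & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, docs, k=2):
    """Assemble a grounded prompt from the top-k retrieved documents."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping the overlap score for cosine similarity over stored embeddings turns this into the standard retrieve-then-generate RAG loop.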

What You'll Bring:

  • 8+ years of professional experience in software engineering or a related field.
  • Proven experience working with Large Language Models (LLMs) and Generative AI technologies.
  • Demonstrated experience in building and deploying conversational bots.
  • Hands-on experience with fine-tuning machine learning models, specifically LLMs.
  • Proficiency in utilizing RAG-based approaches for LLM augmentation.
  • A strong understanding of financial concepts and investing is a significant plus, though not strictly required.
  • Ability to thrive in a fast-paced, startup environment, with a proactive and problem-solving mindset.
  • Excellent communication skills and the ability to articulate complex technical concepts clearly.

Our Benefits Package Includes:

  • Competitive salary with performance-linked variable compensation.
  • Comprehensive medical insurance.
  • Tax-saving benefits.
  • Flexible Paid Time Off (PTO) policy and company-paid holidays.
  • Generous Parental Leave: 6 months paid maternity leave, 2 weeks paid paternity leave.


TIFIN is an equal-opportunity employer, valuing diverse talents and perspectives. We encourage all qualified applicants to apply, regardless of background.

Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹10L / yr
databricks
PySpark
Apache Spark
ETL
CI/CD
+10 more

Profile - Databricks Developer

Experience- 5+ years

Location- Bangalore (On site)

PF & BGV are mandatory


Job Description:

  • Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).
  • Develop scalable, high-performance data solutions using Spark distributed processing.
  • Lead engineering initiatives focused on automation, performance tuning, and platform modernization.
  • Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.
  • Collaborate with cross-functional teams to translate business needs into technical solutions.
  • Ensure data quality, governance, and security across all processes.
  • Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.
  • Participate in code reviews and develop reusable engineering frameworks.
  • Knowledge of utilizing AI tools to improve productivity and support daily engineering activities.
  • Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation.

Required Skills & Experience:

  • 5+ years of experience in Data Engineering or related fields.
  • Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).
  • Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).
  • Strong proficiency in Python for data processing, automation, and framework development.
  • Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.
  • Strong knowledge of Databricks Genie and leveraging it for engineering workflows.
  • Strong experience with CI/CD and Git-based development workflows.
  • Proficiency in data modeling and ETL/ELT pipeline design.
  • Experience with automation frameworks and scheduling tools.
  • Solid understanding of distributed systems and big data concepts.

Bootlabs Technologies Private Limited

at Bootlabs Technologies Private Limited

2 candid answers
1 recruiter
Aakanksha Soni
Posted by Aakanksha Soni
Mumbai, Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
skill iconAmazon Web Services (AWS)
skill iconPython
ECS
AWS IAM
Amazon S3
+3 more

Job Title: AWS DevOps Engineer (MLOps)

We are looking for a highly skilled AWS + MLOps Engineer to design, build, and maintain scalable machine learning infrastructure and pipelines on AWS. The ideal candidate will have strong expertise in DevOps practices, cloud architecture, and MLOps frameworks, along with solid Python programming skills.

Job Description:

We are looking for an experienced AWS DevOps Engineer to join our team. You will be responsible for building and optimising CI/CD pipelines, managing AWS infrastructure, and automating tasks using AWS services.

Key Responsibilities:

  • CI/CD Pipelines: Develop CI/CD pipelines with AWS CodePipeline, build ECR images, and update services on ECS.
  • Automation: Create Python Lambda functions for automation and AWS Batch jobs for GPU processing.
  • Infrastructure Management: Manage AWS infrastructure using Terraform (IAM roles, RDS, Lambda, etc.) and deploy microservices on EKS with ALB Ingress.
  • Data Processing: Work with AWS Step Functions and EMR for data workflows; troubleshoot Spark jobs.
  • Microservices: Deploy ATLAS on ECS, and create AWS Glue crawlers for data integration.
  • Strong Experience with MLOps is an added advantage.
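
As a hedged sketch of the "Python Lambda functions for automation" item, here is a minimal handler. The event shape and the image-tagging behavior are made up for illustration; a real function would call AWS services through boto3 and be wired to CodePipeline or EventBridge.

```python
def handler(event, context=None):
    """Hypothetical automation Lambda: derive fully tagged image references
    from a pipeline event before an ECS service update."""
    images = event.get("images", [])
    tag = event.get("tag", "latest")  # default tag if the event omits one
    tagged = [f"{image}:{tag}" for image in images]
    return {"statusCode": 200, "tagged": tagged}
```

Keeping handlers small and pure like this makes them trivially unit-testable before deployment.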

Required Skills:

  • Experience with AWS services (ECS, ECR, Lambda, Step Functions, EMR, Glue, etc.).
  • Proficient in CI/CD, Terraform, and Python scripting.
  • Experience deploying EKS clusters and using AWS ALB for routing.
  • Strong troubleshooting skills with EMR and Spark.
  • Understanding of or experience with AWS EMR, SageMaker, and Databricks would be an added advantage

Preferred:

  • AWS Certification (DevOps, Solutions Architect, etc.).
  • Experience with microservices and GPU-intensive processes.
Gradera AI Technologies
Sirisha Jonnada
Posted by Sirisha Jonnada
Hyderabad
4 - 7 yrs
₹20L - ₹50L / yr
skill iconPython
SQL
databricks

Role & Responsibilities

  • Collect, clean, and analyze large structured and unstructured datasets from multiple internal and external sources
  • Conduct thorough exploratory data analysis (EDA) to understand data distributions, relationships, outliers, and missing value patterns
  • Profile and audit datasets to assess data quality, completeness, consistency, and fitness for modeling
  • Investigate and document data lineage — understanding where data originates, how it flows, and how it transforms across systems
  • Identify and resolve data anomalies, inconsistencies, and integrity issues in collaboration with data engineering teams
  • Develop a deep understanding of the business domain and the underlying data that represents it — including what each field means, how it is captured, and what its limitations are
  • Translate raw, messy, real-world data into clean, well-understood analytical datasets ready for modeling and reporting
  • Apply statistical techniques such as correlation analysis, hypothesis testing, variance analysis, and distribution fitting to extract meaningful signals from noise
  • Build and deploy machine learning models including regression, classification, clustering, NLP, and time-series analysis
  • Design, evaluate, and analyze A/B experiments and controlled tests using causal inference techniques
  • Develop data-driven recommendations backed by rigorous statistical reasoning
  • Write clean, production-ready code in Python or R
  • Collaborate with data engineers to build reliable data pipelines and feature stores
  • Deploy and monitor ML models using MLOps best practices on cloud infrastructure
  • Build dashboards and self-serve analytics tools to support stakeholder decision-making
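
The data-profiling work described above can be sketched with a small pure-Python helper. In practice this would be pandas or Databricks over real tables; the sample values are illustrative.

```python
def profile_column(values):
    """Summarize completeness and basic distribution for one column."""
    present = [v for v in values if v is not None]
    missing = len(values) - len(present)
    summary = {
        "count": len(values),
        "missing": missing,
        "missing_pct": round(100 * missing / len(values), 1) if values else 0.0,
    }
    numbers = [v for v in present if isinstance(v, (int, float))]
    if numbers:  # numeric columns also get range and mean
        summary["min"] = min(numbers)
        summary["max"] = max(numbers)
        summary["mean"] = sum(numbers) / len(numbers)
    return summary
```

Running this per column gives exactly the "shape and health of a dataset" report a data quality audit starts from.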

 

Data Understanding & Analysis Skills

  • Strong ability to interrogate unfamiliar datasets and quickly develop a working understanding of their structure, semantics, and quirks
  • Experience working with messy, incomplete, or poorly documented real-world data
  • Skilled in identifying hidden patterns, trends, seasonality, and anomalies through visual and statistical exploration
  • Ability to ask the right questions about data — challenging assumptions, validating sources, and understanding the context in which data was collected
  • Proficiency in data profiling, descriptive statistics, and summary reporting to communicate the shape and health of a dataset
  • Experience creating data dictionaries, documentation, and data quality reports to support team-wide data understanding
  • Comfort working across structured (relational tables), semi-structured (JSON, XML), and unstructured (text, logs, sensor streams) data formats

Technical Skills Required

  • Proficiency in Python (pandas, NumPy, scikit-learn, PyTorch or TensorFlow) and/or R
  • Strong SQL skills with hands-on experience in DB2 and SQL Server
  • Experience with Databricks for large-scale data processing, feature engineering, and model training
  • Familiarity with cloud platforms: Azure or AWS
  • Experience with data warehouses and big data platforms (Databricks, Snowflake, or Redshift)
  • Knowledge of MLOps tools such as MLflow, Kubeflow, or Airflow
  • Experience with streaming data technologies such as Kafka or Spark
  • Solid foundation in probability, statistics, linear algebra, and experimental design

Nice to Have

  • Experience with deep learning, NLP, computer vision, or Bayesian methods
  • Familiarity with real-time or streaming data pipelines
  • Open-source contributions or published research

Global MNC serving 40+ Fortune 500 Companies

Global MNC serving 40+ Fortune 500 Companies

Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹26L / yr
Generative AI
Retrieval Augmented Generation (RAG)
skill iconMachine Learning (ML)
LangGraph
langchain
+11 more

Want to work on exciting GenAI projects for Fortune 500 companies across multiple sectors? Then read on..


About Company:

CSG is a multi-national company with a presence in 20 countries and 1600+ engineers. The company works with more than 40 Fortune 500 customers such as Sony, Samsung, ABB, Thyssenkrupp, Toyota, and Mitsubishi.


Job Description:

We are looking for a talented Generative AI Developer to join our dynamic AI/ML team. This position offers an exciting opportunity to leverage cutting-edge Generative AI (GenAI) technologies to solve real-world problems. You will be responsible for developing and optimizing GenAI-based applications, implementing advanced techniques such as Retrieval-Augmented Generation (RAG), Retrieval-Interleaved Generation (RIG), agentic frameworks, and vector databases. This is a collaborative role where you will work directly with customers and cross-functional teams to design, implement, and optimize AI-driven solutions. Exposure to cloud-native AI platforms such as Amazon Bedrock and Microsoft Azure OpenAI is highly desirable.


Key Responsibilities

Generative AI Application Development:

Design, develop, and deploy GenAI-driven applications to address complex industrial challenges.

Implement Retrieval-Augmented Generation (RAG) and Agentic frameworks


Data Management & Optimization:

Design and optimize document chunking strategies tailored to specific datasets and use cases.

Build, manage, and optimize data embeddings for high-performance similarity searches across vector databases.
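
One common chunking strategy for the work described above is fixed-size windows with overlap, so content at a boundary appears in two adjacent chunks. A minimal sketch (character-based for brevity; window sizes are illustrative and should be tuned per dataset and embedding model):

```python
def chunk_text(text, size=40, overlap=10):
    """Split text into overlapping fixed-size windows for embedding."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

Each chunk would then be embedded and indexed in a vector database for similarity search; token- or sentence-based windows are the usual production refinement.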


Collaboration & Integration:

Work closely with data engineers and scientists to integrate AI solutions into existing pipelines.

Collaborate with cross-functional teams to ensure seamless AI implementation.


Cloud & AI Platform Utilization:

Explore and implement best practices for utilizing cloud-native AI platforms, such as Amazon Bedrock and Azure OpenAI, to enhance solution delivery.

Continuous Learning & Innovation:

Stay updated with the latest trends and emerging technologies in the GenAI and AI/ML fields, ensuring our solutions remain cutting-edge.


Requirements:

The ideal candidate will have strong experience in Generative AI technologies, particularly in the areas of RAG, document chunking, and vector database management. They will be able to quickly adapt to evolving AI frameworks and leverage cloud-native platforms to create efficient, scalable solutions. You will be working in a fast-paced and collaborative environment, where innovation and the ability to learn and grow are key to success.

- 3 to 5 years of overall experience in software development, with 3 years focused on AI/ML.

- Minimum 2 years of experience specifically working with Generative AI (GenAI) technologies.

- Python, PySpark, and SQL knowledge is necessary for these tasks.

- Proven ability to work in a collaborative, fast-paced, and innovative environment.


Technical Skills:

- Generative AI Frameworks & Technologies:

- Expertise in Generative AI frameworks, including prompt engineering, fine-tuning, and few-shot learning.

- Familiarity with frameworks and tools such as T5 (Text-to-Text Transfer Transformer), LangChain, LangGraph, and open-source stacks like Ollama, Mistral, and DeepSeek.

- Strong knowledge of Retrieval-Augmented Generation (RAG) for combining LLMs with external data retrieval systems.


Data Management:

- Experience in designing chunking strategies for different datasets.

- Expertise in data embedding techniques and experience with vector databases such as Pinecone and ChromaDB.

- Programming & AI/ML Libraries:

- Strong programming skills in Python.

- Experience with AI/ML libraries such as TensorFlow, PyTorch, and Hugging Face Transformers.


Cloud Platforms & Integration:

- Familiarity with cloud services for AI/ML workloads (AWS, Azure).

- Experience with API integration for AI services and building scalable applications.

- Certifications (Optional but Desirable):

- Certification in AI/ML (e.g., TensorFlow, AWS Certified Machine Learning Specialty).

- Certification or coursework in Generative AI or related technologies.

Product based company

Product based company

Agency job
Bengaluru (Bangalore)
4 - 9 yrs
₹12L - ₹13L / yr
skill icon.NET
ASP.NET
ASP.NET MVC
Microservices
FastAPI
+6 more

Technical Lead – Full Stack 

Work Location (WFO):

Nagar, Bengaluru, Karnataka

Interview Process:

L1 Interview – Face-to-Face at Office

Experience Required:

4-6 Years (minimum 1+ year in a Technical Leadership role)

Budget:

Up to 13 LPA

Role Overview:

The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.

Key Responsibilities:

  • Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
  • Lead full-stack development using .NET and modern open-source technologies
  • Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
  • Design and implement AI Agents, SSO, and unified UI experiences
  • Manage sprint planning, backlogs, and collaborate with Product Owners
  • Implement CI/CD pipelines using Jenkins, GitHub Actions
  • Drive containerization and orchestration using Docker & Kubernetes
  • Ensure secure deployments and cloud infrastructure management
  • Establish engineering best practices, code reviews, and architecture governance
  • Mentor teams on Clean Architecture, SOLID principles, and DevOps practices


Required Skills:

  • ReactJS, FastAPI, Python, REST/GraphQL
  • ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
  • Strong experience in Microservices Architecture
  • DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
  • Cloud Platforms: AWS / Azure / GCP
  • AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
  • Security: RBAC, API security, secrets management

Qualifications:

  • BE / BTech in Computer Science


Travel Tech - IPO company


Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
12 - 16 yrs
₹80L - ₹130L / yr
Distributed Systems
Search systems
Pricing & Fare Engine
Booking & Ticketing
Airline Integrations

Director of Engineering — Flights Platform

AI-First Travel Commerce · High-Scale Distributed Systems · Marketplace Infrastructure


🌏 The Problem Space

A flight search looks trivially simple. It is anything but.


Every query you fire triggers a choreography of distributed systems operating in real-time — integrating with a dozen airline GDS/NDC providers, computing dynamic fares across inventory buckets and fare rules, ranking thousands of itineraries by relevance and business intent, and returning a ranked, priced, bookable result set — all in under 100ms.


→ Millions of search queries per minute

→ <100ms end-to-end SLA with external API dependencies

→ High-value transactions — a bug here means a missed booking, not a failed render

→ Pricing errors erode trust faster than any other failure mode
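The choreography described above, fanning out to many providers and keeping whatever answers arrive within the latency budget, can be sketched with asyncio; the provider names, delays, and budget are invented for illustration:

```python
import asyncio

async def query_provider(name: str, delay: float) -> dict:
    # Stand-in for a real GDS/NDC call; `delay` simulates network latency.
    await asyncio.sleep(delay)
    return {"provider": name, "fares": [f"{name}-fare"]}

async def search(providers: dict[str, float], budget: float) -> list[dict]:
    # Fire all provider calls concurrently; keep whatever completes
    # within the latency budget and drop the rest.
    tasks = [asyncio.create_task(query_provider(n, d)) for n, d in providers.items()]
    done, pending = await asyncio.wait(tasks, timeout=budget)
    for t in pending:
        t.cancel()  # slow providers miss this search, not the SLA
    return [t.result() for t in done]

results = asyncio.run(search({"fast": 0.01, "slow": 5.0}, budget=0.1))
print([r["provider"] for r in results])
```

A production system would layer caching, hedged requests, and per-provider timeouts on top of this basic fan-out.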


We are rebuilding the Flights platform as a real-time commerce engine for Bharat — AI-native from day zero, built to power both B2C consumer journeys and high-stakes B2B enterprise corridors.


This is a once-in-a-decade opportunity to build national-scale flight infrastructure from first principles.

🧠 What You Will Own

You will own the full Flights platform — systems, architecture, and the teams that build them.


Core System Domains:

  • Search Systems — high-throughput, low-latency query pipelines returning ranked, bookable options

  • Pricing & Fare Engine — dynamic pricing logic, fare rules, promotional overlays, and real-time validation

  • Booking & Ticketing — transaction-critical flows requiring strict consistency, idempotency, and zero data loss

  • Airline Integrations — managing unreliable external GDS/NDC APIs with retries, circuit-breakers, and reconciliation

  • Post-Booking Flows — cancellations, modifications, refunds — correctness at the margin is non-negotiable
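A minimal circuit breaker of the kind used to guard those unreliable GDS/NDC APIs might look like the sketch below; the thresholds, names, and simulated failure are all illustrative:

```python
import time

class CircuitBreaker:
    """Fail fast after `max_failures` consecutive errors; probe again
    after `reset_after` seconds (the classic closed/open/half-open cycle)."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)

def flaky_gds_call():
    raise ConnectionError("provider timeout")  # simulated flaky airline API

for _ in range(2):
    try:
        breaker.call(flaky_gds_call)
    except ConnectionError:
        pass

try:
    breaker.call(flaky_gds_call)  # third call never reaches the provider
except RuntimeError as e:
    print(e)  # circuit open: failing fast
```

Failing fast here is what keeps one misbehaving provider from consuming the whole search's latency budget.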


Platform Scope:

  • High-scale APIs serving consumer apps, B2B enterprise clients, and third-party partners

  • Event-driven state machines managing booking workflows across async boundaries

  • Observability and reliability infrastructure across all mission-critical flows


Team Scope:

  • Lead 15–30+ engineers across multiple product and platform teams

  • Manage Engineering Managers and Principal/Staff engineers

  • Own hiring, org design, and technical direction


⚙️ Core Engineering Challenges

This role is fundamentally about making the right trade-offs under uncertainty — at scale.


Latency vs. Accuracy — when do you serve a cached fare vs. call a live airline API?

Availability vs. Consistency — graceful degradation at booking time vs. strict price validation

Cost vs. Performance — when is an external API call worth it vs. a cache hit?

Scalability vs. Simplicity — the best system is the one your team can reason about during an incident
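The first trade-off above often reduces to a TTL rule: serve the cached fare while it is fresh, otherwise pay for a live call. A sketch, with an invented TTL, route, and fetch function:

```python
import time

fare_cache: dict[str, tuple[float, float]] = {}  # route -> (fare, fetched_at)
FARE_TTL = 300.0  # seconds; a tighter TTL buys fresher prices for more API spend

def get_fare(route: str, fetch_live, now=time.monotonic) -> float:
    cached = fare_cache.get(route)
    if cached and now() - cached[1] < FARE_TTL:
        return cached[0]          # fresh enough: no provider call
    fare = fetch_live(route)      # stale or missing: pay for accuracy
    fare_cache[route] = (fare, now())
    return fare

calls = []
def live(route):
    calls.append(route)           # record each (simulated) provider hit
    return 129.0

get_fare("BLR-DEL", live)
get_fare("BLR-DEL", live)
print(len(calls))  # 1 — the second lookup was served from cache
```

Real fare caches also revalidate at booking time, since a stale price shown at search must never become a stale price charged.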


🤖 AI-First Engineering

AI is not an afterthought. It is load-bearing architecture.

  • LLM-powered pricing intelligence — dynamic fare prediction and demand signals

  • RAG pipelines for fare rules, refund policy, and support automation

  • Agentic booking resolution workflows — autonomous exception handling at scale

  • MCP-based orchestration layers for multi-provider integration


⚖️ Key Responsibilities

Architecture & Distributed Systems

  • Design and evolve sub-100ms distributed query systems serving millions of concurrent searches

  • Build fault-tolerant booking pipelines with strong consistency and durability guarantees

  • Drive Kafka-based event architectures for booking state management
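The Kafka-driven booking workflows above ultimately enforce a state machine; a table-driven sketch, with states and events invented for illustration:

```python
# Legal transitions only; anything else is rejected rather than
# silently corrupting booking state.
TRANSITIONS = {
    "created":   {"payment_ok": "confirmed", "payment_failed": "failed"},
    "confirmed": {"ticketed": "ticketed", "cancel_requested": "cancelled"},
}

def apply_event(state: str, event: str) -> str:
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"illegal transition: {state!r} + {event!r}")

state = "created"
for event in ("payment_ok", "ticketed"):
    state = apply_event(state, event)
print(state)  # ticketed
```

Rejecting out-of-order events this way is what makes replayed or duplicated Kafka messages safe to process.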


Reliability & Observability

  • Own 99.99%+ availability for booking and pricing systems

  • Build deep observability — metrics, distributed tracing, structured logging, SLOs/SLAs

  • Lead post-incident reviews and drive systemic reliability improvements


Business Partnership

  • Partner with Product, Revenue, and Partnerships to translate commercial goals into architecture

  • Influence platform roadmap, supplier strategy, and long-term technical investment


🛠️ Technology Stack

Backend: Java · Kotlin · Go · Python

Architecture: Microservices · Event-Driven (Kafka) · gRPC

Data: Redis · Aerospike · DynamoDB · Elasticsearch

Cloud: AWS (EKS, EC2, S3)

Observability: Prometheus · Grafana · OpenTelemetry


👤 Who You Are

  • 12–16 years in backend/distributed systems; 5+ years in engineering leadership, leading teams of 15–50 engineers

  • Built and scaled large B2C + B2B platforms — Travel Tech, FinTech, or high-scale consumer

  • Deep expertise in real-time systems, marketplace dynamics, and external API integration

  • Tier-I institute background strongly preferred (IIT / IIIT / NIT / IISc / BITS / VIT / SRM — CSE/ISE)


🚀 Why This Matters

Build national-scale infrastructure for 1.4 billion people

Sit at the intersection of AI · distributed systems · marketplace economics

Define the future of travel commerce in India — from architecture to product



Thingularity


Agency job
via Thomasmount Consulting by Shirin Shahana
Bengaluru (Bangalore)
4 - 8 yrs
₹18L - ₹20L / yr
Python
SQL
ETL

Job Summary

We are seeking a skilled Data Engineer with 4+ years of experience in building scalable data pipelines and working with modern data platforms. The ideal candidate should have strong expertise in Python, SQL, and cloud-based data solutions, with hands-on experience in ETL/ELT processes and data warehousing.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Python
  • Develop and optimize ETL/ELT workflows for data ingestion and transformation
  • Work with structured and unstructured data from multiple sources
  • Build and manage data warehouses/data lakes
  • Perform data validation, cleansing, and quality checks
  • Optimize SQL queries and improve data processing performance
  • Collaborate with data analysts, data scientists, and business teams
  • Implement data governance, security, and best practices
  • Monitor pipelines and troubleshoot production issues

Required Skills

  • Strong programming experience in Python (Pandas, NumPy, PySpark preferred)
  • Excellent SQL skills (joins, window functions, performance tuning)
  • Experience with ETL tools such as Informatica, Talend, or dbt
  • Hands-on experience with cloud platforms (Azure / AWS / GCP)
  • Experience in data warehousing solutions like Snowflake, Redshift, BigQuery
  • Knowledge of workflow orchestration tools like Apache Airflow
  • Familiarity with version control tools like Git
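As a small, runnable illustration of the window-function skill listed above (SQLite in-memory; the table and values are made up, and a SQLite build with window-function support, 3.25+, is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 100), ("south", 300), ("north", 200)],
)

# RANK() over a partition: per-region ranking without a self-join.
rows = conn.execute(
    """
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rk
    FROM sales
    ORDER BY region, rk
    """
).fetchall()
print(rows)  # [('north', 200, 1), ('south', 300, 1), ('south', 100, 2)]
```

The same pattern (RANK, ROW_NUMBER, LAG/LEAD over partitions) carries over directly to Snowflake, Redshift, and BigQuery.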

Preferred Skills

  • Experience with Big Data technologies (Spark, Hadoop)
  • Knowledge of streaming tools like Kafka
  • Exposure to CI/CD pipelines and DevOps practices
  • Experience in data modeling (Star/Snowflake schema)
  • Understanding of APIs and data integration


AI-Powered Platform


Agency job
via Peak Hire Solutions by Dharati Thakkar
Remote only
5 - 10 yrs
₹35L - ₹45L / yr
Machine Learning (ML)
Python
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Scikit-Learn

Budget: 35 LPA to 45 LPA

Work schedule is Mon to Fri, 3:30am to 12:30pm IST


Key Responsibilities:

  • Design, develop, and deploy computer vision and machine learning models for analyzing visual and document-based data.
  • Build pipelines that convert unstructured visual inputs into structured and usable information.
  • Develop and evaluate models for tasks such as object detection, segmentation, document parsing, and image understanding.
  • Apply OCR and related techniques to extract meaningful information from complex documents and imagery.
  • Work with large datasets and build efficient training and evaluation pipelines.
  • Handle real-world visual datasets that may contain noise, inconsistencies, incomplete information, or varying formats.
  • Experiment with different approaches to solve challenging computer vision problems and evaluate tradeoffs between accuracy, performance, and complexity.
  • Collaborate with product and engineering teams to integrate machine learning models into scalable production systems.
  • Continuously improve model performance, accuracy, and robustness in real-world environments.
  • Stay up to date with the latest developments in AI and computer vision and apply relevant techniques where appropriate.
  • Actively leverage modern AI tools and frameworks to accelerate experimentation, development, and engineering workflows.


Requirements:

  • 5+ years of hands-on experience building and deploying machine learning models, particularly in Computer Vision or document understanding.
  • Strong proficiency in Python for machine learning and data processing.
  • Hands-on experience with modern ML frameworks such as PyTorch and libraries in the Hugging Face ecosystem.
  • Experience with computer vision tooling such as OpenCV.
  • Experience with common ML and data science libraries such as scikit-learn, NumPy, and Pandas.
  • Experience developing models for tasks such as segmentation, object detection, or document analysis.
  • Experience working with large image datasets and building training pipelines.
  • Solid understanding of model evaluation, data preprocessing, and performance optimization.
  • Strong problem-solving skills and ability to work in a fast-paced product environment.
  • Ability to collaborate effectively with cross-functional engineering and product teams.
  • The candidate should be based in India
  • Willing to work remotely full-time
  • Work schedule is Mon to Fri, 3:30am to 12:30pm IST


Preferred Qualifications:

  • Experience with TensorFlow or other deep learning frameworks.
  • Experience working with OCR pipelines or document analysis systems.
  • Experience deploying machine learning models in production environments.
  • Experience with containerized deployments such as Docker or Kubernetes.
  • Experience working with complex technical documents, diagrams, or structured visual data.
  • Familiarity with spatial or geometry-related data problems.
  • Experience with libraries such as Detectron2, MMDetection, or similar.
  • Familiarity with frameworks used to integrate modern AI models into applications (e.g., LangChain or similar tooling).
  • Contributions to open-source ML or computer vision projects are a plus.


Additional Information:

  • The problems we work on involve complex visual and document-based data, so we value engineers who enjoy tackling challenging technical problems and experimenting with different approaches to reach practical solutions.
  • Candidates are required to include links to relevant projects, GitHub repositories, research work, or examples of machine learning systems they have built.


Benefits:

  • Flexible remote work opportunities with career development opportunities
  • Engagement with a supportive and collaborative global team
  • Competitive market based salary
Honeybee Digital
Remote only
0 - 1 yrs
₹0.5L - ₹1L / yr
Python
FastAPI

Job Title: Python Development Intern

Company: Honeybee Digital

Location: Remote

Internship Duration: 3 Months

Job Type: Internship

Working Hours

  • Full-time: 9:00 AM – 6:00 PM
  • Part-time: 9:00 AM – 1:00 PM / 1:00 PM – 6:00 PM

Note: Internship certificate will be provided only after successful completion of the internship duration.

About the Role

We are looking for a passionate and motivated Python Development Intern who is eager to gain hands-on experience in real-world projects. This role is ideal for candidates interested in backend development, automation, data handling, and API integration.

Key Responsibilities

  • Assist in developing applications using Python
  • Work on data handling, automation scripts, and backend logic
  • Support API development and integration
  • Assist in web scraping and data processing tasks
  • Debug, test, and optimize existing code
  • Collaborate with development and data teams
  • Document code and maintain project updates

Requirements

  • Basic knowledge of Python programming
  • Understanding of data structures and logic building
  • Familiarity with libraries such as Pandas, NumPy (preferred)
  • Basic understanding of APIs and web frameworks (Flask/Django is a plus)
  • Problem-solving mindset and willingness to learn
  • Ability to work independently and meet deadlines

Skills You Will Gain

  • Hands-on experience in Python development and real projects
  • Exposure to automation, FastAPI, and backend systems
  • Practical knowledge of data processing and scripting
  • Debugging and optimization techniques
  • Experience working in a professional development environment

Who Can Apply

  • Students pursuing Computer Science, IT, Data Science, or related fields
  • Freshers interested in Python development and backend roles
  • Candidates looking to build a strong technical portfolio


Bengaluru (Bangalore)
4 - 10 yrs
₹1L - ₹10L / yr
.NET
SSO
ASP.NET
ASP.NET MVC
MySQL

Dear Candidates,


We have an urgent requirement for a Technical Lead – Full Stack role based in Bangalore. Please find the details below:


Work Location (WFO):

Nagar, Bengaluru, Karnataka


Interview Process:

L1 Interview – Face-to-Face at Office


Experience Required:

4–6 years (minimum 1 year in a technical leadership role)


Role Overview:

The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Sign-On (SSO), and a One-UI experience.

Key Responsibilities:

  • Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
  • Lead full-stack development using .NET and modern open-source technologies
  • Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
  • Design and implement AI Agents, SSO, and unified UI experiences
  • Manage sprint planning, backlogs, and collaborate with Product Owners
  • Implement CI/CD pipelines using Jenkins, GitHub Actions
  • Drive containerization and orchestration using Docker & Kubernetes
  • Ensure secure deployments and cloud infrastructure management
  • Establish engineering best practices, code reviews, and architecture governance
  • Mentor teams on Clean Architecture, SOLID principles, and DevOps practices

Required Skills:

  • ReactJS, FastAPI, Python, REST/GraphQL
  • ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
  • Strong experience in Microservices Architecture
  • DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
  • Cloud Platforms: AWS / Azure / GCP
  • AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
  • Security: RBAC, API security, secrets management

Qualifications:

  • BE / BTech in Computer Science
Euphoric Thought Technologies
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹10L / yr
Python
Django
PostgreSQL
RESTful APIs

Job Summary

We are looking for a skilled Python Developer with 3+ years of experience to join our team in Bangalore. The ideal candidate should have strong expertise in Python, Django, and PostgreSQL, along with a good understanding of backend development. Knowledge of Java will be an added advantage.


Key Responsibilities

Develop, test, and maintain scalable backend applications using Python and Django

Design and manage databases using PostgreSQL

Write clean, efficient, and reusable code

Collaborate with cross-functional teams to define, design, and ship new features

Debug and resolve technical issues and optimize application performance

Participate in code reviews and ensure best coding practices


Required Skills

Strong experience in Python

Hands-on experience with Django framework

Good knowledge of PostgreSQL database

Understanding of REST APIs and web services

Familiarity with version control systems (e.g., Git)


Good to Have

Basic knowledge of Java

Experience with cloud platforms or deployment processes

Understanding of front-end technologies is a plus


Qualifications

Bachelor’s degree in Computer Science, Engineering, or related field


Additional Requirements

Immediate joiners or candidates with short notice period preferred

Strong problem-solving and analytical skills

Good communication and teamwork abilities
