
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

CloudThat
Posted by Shubhangi Shrivastava
Bengaluru (Bangalore)
3 - 6 yrs
₹7L - ₹10L / yr
HTML/CSS
Python
Java
SQL
C++

About CloudThat:-

At CloudThat, we are driven by our mission to empower professionals and businesses to harness the full potential of cloud technologies. As a leader in cloud training and consulting services in India, our core values guide every decision we make and every customer interaction we have.


Role Overview:-

We are looking for a passionate and experienced Technical Trainer to join our expert team and help drive knowledge adoption across our customers, partners, and internal teams.


Key Responsibilities:

• Deliver high-quality, engaging technical training sessions both in-person and virtually to customers, partners, and internal teams.

• Design and develop training content, labs, and assessments based on business and technology requirements.

• Collaborate with internal and external SMEs to draft course proposals aligned with customer needs and current market trends.

• Assist in training and onboarding of other trainers and subject matter experts to ensure quality delivery of training programs.

• Create immersive lab-based sessions using diagrams, real-world scenarios, videos, and interactive exercises.

• Develop instructor guides, certification frameworks, learner assessments, and delivery aids to support end-to-end training delivery.

• Integrate hands-on project-based learning into courses to simulate practical environments and deepen understanding.

• Support the interpersonal and facilitation aspects of training fostering an inclusive, engaging, and productive learning environment


Skills & Qualifications:

• Experience developing content for professional certifications or enterprise skilling programs.

• Familiarity with emerging technology areas such as cloud computing, AI/ML, DevOps, or data engineering.


Technical Competencies:

  • Expertise in languages like C, C++, Python, and Java
  • Understanding of algorithms and data structures
  • Expertise in SQL
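For context on the level expected, a lab exercise for the algorithms-and-data-structures topic above might look like this minimal binary search drill (an illustrative sketch, not part of the employer's description):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # midpoint of the current window
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1

print(binary_search([2, 5, 8, 12, 16], 12))  # -> 3
```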

Or apply directly: https://cloudthat.keka.com/careers/jobdetails/95441


Deqode
Posted by Samiksha Agrawal
Remote only
9 - 15 yrs
₹9L - ₹16L / yr
Machine Learning (ML)
MLOps
CI/CD
Python
Generative AI

Job Description -

Profile: Senior ML Lead

Experience Required: 10+ Years

Work Mode: Remote

Key Responsibilities:

  • Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
  • Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
  • Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
  • Ensure AI/ML solutions align with business goals, performance, and compliance requirements
  • Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap

Required Skills:

  • Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
  • Proficiency in Python with ML libraries and frameworks
  • MLOps: CI/CD/CT pipelines for ML deployment with Azure
  • Experience with OpenAI/Generative AI solutions
  • Cloud-native services: Azure ML, Snowflake
  • 8+ years in data science with at least 2 years in solution architecture role
  • Experience with large-scale model deployment and performance tuning
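The modeling fundamentals above (linear regression in particular) can be shown in a few lines; a sketch of the closed-form least-squares fit in pure Python, purely as illustration:

```python
def fit_simple_linear_regression(xs, ys):
    """Closed-form least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = cov(x, y) / var(x)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    b = cov_xy / var_x
    a = mean_y - b * mean_x           # intercept recovered from the means
    return a, b

a, b = fit_simple_linear_regression([1, 2, 3, 4], [3, 5, 7, 9])  # data from y = 1 + 2x
print(a, b)  # -> 1.0 2.0
```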

Good-to-Have:

  • Strong background in Computer Science or Data Science
  • Azure certifications
  • Experience in data governance and compliance


Newmi Care
Posted by Parnika Sangwar
Gurugram
4 - 7 yrs
₹10L - ₹19L / yr
Python
Django
React.js
JavaScript

Domain: Digital Health | EHR & Care Management Platforms


Role Summary

We are looking for a Full Stack Software Engineer with hands-on experience in building and scaling healthcare technology platforms: strong backend expertise in Python and Django, mandatory proficiency in JavaScript, and working knowledge of React for frontend development. You will develop APIs, manage databases, and collaborate with cross-functional teams to deliver reliable, production-grade solutions for patient care and clinical operations.


Technical Skills

Backend:

• Python, Django, Django-based frameworks (Zango or similar meta-frameworks)

• RESTful API development

• PostgreSQL and relational database design

Frontend (Mandatory):

• JavaScript (ES6+)

• HTML, CSS

• React.js (working knowledge)

• API integration with frontend components

Tools & Platforms:

• Git and version control

• CI/CD fundamentals

• Docker (basic exposure)

• Cloud platforms (AWS/GCP – exposure)


Professional Experience:

• Designed, developed, and maintained backend services using Python and Django

• Built and optimized RESTful APIs for patient management, scheduling, and care workflows

• Developed frontend components using JavaScript and integrated APIs with React-based interfaces

• Collaborated with product, clinical, and operations teams

• Integrated external systems such as labs and communication services

• Contributed to data modeling aligned with clinical workflows

• Debugged production issues and improved platform performance

• Maintained internal documentation


Key Contributions:

• Enabled end-to-end patient journeys through backend and frontend integrations

• Improved operational efficiency via workflow automation

• Delivered production-ready features in a regulated healthcare environment

MindInventory
Posted by Uzer Khan
Ahmedabad
5 - 8 yrs
₹4L - ₹12L / yr
Python
FastAPI
Django
Flask

Requirements:

  • 5+ years experience in Python with strong core concepts (OOP, data structures, exception handling, problem-solving)
  • Experience with FastAPI (preferred) or Flask/Django
  • Strong REST API development & authentication (JWT/OAuth)
  • SQL (MySQL/PostgreSQL) & NoSQL (MongoDB/Firestore) experience
  • Basic cloud knowledge (GCP or AWS)
  • Git, code reviews, clean coding & performance optimization
  • Good communication and teamwork skills
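The JWT authentication mentioned above boils down to HMAC signing at its core. A toy sketch using only the standard library and a made-up secret; a real service should use a vetted library such as PyJWT instead:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Build a JWT-style HS256 token: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = sign_jwt({"sub": "user-123"}, "demo-secret")  # hypothetical subject and secret
print(token.count("."))  # three dot-separated segments -> 2
```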

Responsibilities:

  • Build and maintain scalable backend apps and REST APIs
  • Work with databases, third-party integrations, and cloud deployments
  • Write clean, testable, optimized code
  • Debug, troubleshoot, and improve performance
  • Collaborate with team on technical solutions

Good to have:

  • GCP (BigQuery, Dataflow, Cloud Functions)
  • Microservices, Redis/Kafka/RabbitMQ
  • Docker, CI/CD
  • Basic Pandas/NumPy for data handling


Perks & Benefits

  • 5 Days Working
  • Family Health Insurance
  • Relaxation Area
  • Affordable Lunch
  • Free Snacks & Drinks
  • Open Work Culture
  • Competitive Salary & Benefits
  • Festival Celebrations
  • International Exposure Opportunities
  • 20 Paid Leaves per Year
  • Marriage Leave & Parental Leave Policy


Healthcare Industry

Agency job via Peak Hire Solutions, posted by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹25L - ₹30L / yr
MLOps
Generative AI
Python
Natural Language Processing (NLP)
Machine Learning (ML)

JOB DETAILS:

* Job Title: Principal Data Scientist

* Industry: Healthcare

* Salary: Best in Industry

* Experience: 6-10 years

* Location: Bengaluru

 

Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps

 

Criteria:

  1. Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
  2. Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
  3. Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
  4. Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
  5. Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.

 

Job Description

Principal Data Scientist

(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)

 

Job Details

  • Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
  • Location: Hebbal Ring Road, Bengaluru
  • Work Mode: Work from Office
  • Shift: Day Shift
  • Reporting To: SVP
  • Compensation: Best in the industry (for suitable candidates)

 

Educational Qualifications

  • Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
  • Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage

 

Experience Required

  • 7+ years of experience solving real-world problems using:
      • Natural Language Processing (NLP)
      • Automatic Speech Recognition (ASR)
      • Large Language Models (LLMs)
      • Machine Learning (ML)
    preferably within the healthcare domain
  • Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable

Role Overview

This position is part of a healthcare division of Focus Group specializing in medical coding and scribing.

We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:

  • Reduce administrative burden in EMR data entry
  • Improve provider satisfaction and productivity
  • Enhance quality of care and patient outcomes

Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.

The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.

 

Key Responsibilities

AI Strategy & Solution Development

  • Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
  • Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
  • Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
  • Design scalable, reusable, and production-ready AI frameworks for speech and text analytics

Model Development & Optimization

  • Fine-tune, train, and optimize large-scale NLP and ASR models
  • Develop and optimize ML algorithms for speech, text, and structured healthcare data
  • Conduct rigorous testing and validation to ensure high clinical accuracy and performance
  • Continuously evaluate and enhance model efficiency and reliability

Cloud & MLOps Implementation

  • Architect and deploy AI models on AWS, Azure, or GCP
  • Deploy and manage models using containerization, Kubernetes, and serverless architectures
  • Design and implement robust MLOps strategies for lifecycle management

Integration & Compliance

  • Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
  • Integrate AI systems with EHR/EMR platforms
  • Implement ethical AI practices, regulatory compliance, and bias mitigation techniques

Collaboration & Leadership

  • Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
  • Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
  • Mentor and lead junior data scientists and engineers
  • Contribute to AI research, publications, patents, and long-term AI strategy

 

Required Skills & Competencies

  • Expertise in Machine Learning, Deep Learning, and Generative AI
  • Strong Python programming skills
  • Hands-on experience with PyTorch and TensorFlow
  • Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
  • Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
  • Experience with text embeddings and vector databases
  • Proficiency in cloud platforms (AWS, Azure, GCP)
  • Experience with LangChain, OpenAI APIs, and RAG architectures
  • Knowledge of agentic AI frameworks and reinforcement learning
  • Familiarity with Docker, Kubernetes, and MLOps best practices
  • Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
  • Strong communication, collaboration, and mentoring skills
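As a toy illustration of the retrieval step behind the RAG architectures listed above: in production the vectors would come from an embedding model and live in a vector database (Pinecone, FAISS, Weaviate); here they are hard-coded 3-d stand-ins:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def retrieve(query_vec, index, k=1):
    """Rank stored documents by similarity to the query and return the top k texts."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, doc["vec"]), reverse=True)
    return [doc["text"] for doc in ranked[:k]]

# Hard-coded "embeddings" standing in for a real embedding model's output.
index = [
    {"text": "discharge summary", "vec": [0.9, 0.1, 0.0]},
    {"text": "lab results",       "vec": [0.1, 0.9, 0.2]},
]
print(retrieve([1.0, 0.0, 0.0], index))  # -> ['discharge summary']
```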

 

 

Peliqan
Posted by Bharath Kumar
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹15L / yr
Python
RESTful APIs
API
Data integration
JSON

Python API Connector Developer


Peliqan is a highly scalable and secure cloud solution for data collaboration in the modern data stack. We are on a mission to reinvent how data is shared in companies.


We are looking for a Python Developer to join our team and help us build robust, reliable connectors for existing REST APIs. The ideal candidate has a strong background in consuming REST and GraphQL APIs, working with Postman, and general development skills in Python.


In this role, you will be responsible for developing data connectors: Python wrappers that consume existing APIs from various data sources such as SaaS CRM systems, accounting software, ERP systems, and eCommerce platforms. You will become an expert in handling APIs to perform ETL data extraction from these sources into the Peliqan data warehouse.


You will also maintain documentation and provide technical support to our internal and external clients. Additionally, you will collaborate with other teams such as Product, Engineering, and Design to ensure the successful implementation of our connectors. If you have an eye for detail and a passion for technology, we want to hear from you!


Your responsibilities


  • As a Python API Connector Developer at Peliqan.io, you are responsible for developing high-quality ETL connectors to extract data from SaaS data sources by consuming REST APIs and GraphQL APIs.
  • Develop and maintain Connector documentation, including code samples and usage guidelines
  • Troubleshoot and debug complex technical problems related to APIs and connectors
  • Collaborate with other developers to ensure the quality and performance of our connectors

What makes you a great candidate


  • Expert knowledge of technologies such as RESTful APIs, GraphQL, JSON, XML, OAuth2 flows, HTTP, SSL/TLS, Webhooks, API authentication methods, rate limiting, paging strategies in APIs, headers, response codes
  • Basic knowledge of web services technologies, including SOAP and WSDL
  • Proficiency in database technologies such as MySQL, MongoDB, etc.
  • Experience with designing, building, and maintaining public and private APIs
  • Excellent understanding of REST APIs (consuming APIs in Postman)
  • Coding in Python
  • Experienced in working with JSON, JSON parsing in Python and JSON path
  • Good understanding of SaaS software, CRM, Marketing Automation, Accounting, and ERP systems (as they will be the main source of data)
  • Analytical mindset: you are capable of discussing technical requirements with customers and implementing these in the Peliqan platform
  • Customer-driven, great communication skills
  • You are motivated and proactive, with an eye for detail
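The paging strategies listed above follow a common loop. A minimal sketch with the HTTP layer replaced by a hypothetical fetch_page stub (so the pattern stays self-contained; a real connector would issue GET requests and also handle rate limits):

```python
def fetch_page(offset, limit=2):
    """Stand-in for a GET /records?offset=..&limit=.. call (hypothetical API)."""
    records = [{"id": i} for i in range(5)]        # pretend remote dataset
    page = records[offset:offset + limit]
    return {"data": page, "has_more": offset + limit < len(records)}

def extract_all(limit=2):
    """Walk every page until the API reports no more data (offset paging)."""
    offset, out = 0, []
    while True:
        resp = fetch_page(offset, limit)
        out.extend(resp["data"])
        if not resp["has_more"]:
            return out
        offset += limit                             # advance to the next page

print(len(extract_all()))  # -> 5
```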


RapidClaims

Agency job via Mergenomincs, posted by Recruitment Team2
Bengaluru (Bangalore)
5 - 12 yrs
₹30L - ₹55L / yr
Java
Python
Artificial Intelligence (AI)
Engineering Management

About RapidClaims

RapidClaims is a leader in AI-driven revenue cycle management, transforming medical coding and revenue operations with cutting-edge technology. The company has raised $11 million in total funding from top investors, including Accel and Together Fund.

Join us as we scale a cloud-native platform that runs transformer-based Large Language Models rigorously fine-tuned on millions of clinical notes and claims every month. You'll engineer autonomous coding pipelines that parse ICD-10-CM, CPT® and HCPCS at lightning speed, deliver reimbursement insights with sub-second latency and >99% accuracy, and tackle the deep-domain challenges that make clinical AI one of the hardest, and most rewarding, problems in tech.


Engineering Manager – Job Overview

We are looking for an Engineering Manager who can lead a team of engineers while staying deeply involved in technical decisions. This role requires a strong mix of people leadership, system design expertise, and execution focus to deliver high-quality product features at speed. You will work closely with Product and Leadership to translate requirements into scalable technical solutions and build a high-performance engineering culture.

Key Responsibilities:

  • Lead, mentor, and grow a team of software engineers
  • Drive end-to-end ownership of product features from design to deployment
  • Work closely with Product to translate requirements into technical solutions
  • Define architecture and ensure scalability, reliability, and performance
  • Establish engineering best practices, code quality, and review standards
  • Improve development velocity, sprint planning, and execution discipline
  • Hire strong engineering talent and build a solid team
  • Create a culture of accountability, ownership, and problem-solving
  • Ensure timely releases without compromising quality
  • Stay hands-on with critical technical decisions and reviews


Requirements:

  • 5+ years of software engineering experience, with 2+ years in team leadership
  • Strong experience in building and scaling backend systems and APIs
  • Experience working in a product/startup environment
  • Good understanding of system design, architecture, and databases
  • Ability to manage engineers while remaining technically credible
  • Strong problem-solving and decision-making skills
  • Experience working closely with Product teams
  • High ownership mindset and bias for action


Good to Have

  • Experience in healthcare tech / automation / RPA / AI tools
  • Experience building internal tools and workflow systems
  • Exposure to cloud infrastructure (AWS/GCP/Azure)

CAW.Tech
Posted by Ranjana Singh
Hyderabad
8 - 12 yrs
Best in industry
Python
Java
Microservices
Distributed Systems
PostgreSQL

Role Overview

We are hiring a Principal Datacenter Backend Developer to architect and build highly scalable, reliable backend platforms for modern data centers. This role owns control-plane and data-plane services powering orchestration, monitoring, automation, and operational intelligence across large-scale on-prem, hybrid, and cloud data center environments.

This is a hands-on principal IC role with strong architectural ownership and technical leadership responsibilities.


Key Responsibilities

  • Own end-to-end backend architecture for datacenter platforms (orchestration, monitoring, DCIM, automation).
  • Design and build high-availability distributed systems at scale.
  • Develop backend services using Java (Spring Boot / Micronaut / Quarkus) and/or Python (FastAPI / Flask / Django).
  • Build microservices for resource orchestration, telemetry ingestion, capacity and asset management.
  • Design REST/gRPC APIs and event-driven systems.
  • Drive performance optimization, scalability, and reliability best practices.
  • Embed SRE principles, observability, and security-by-design.
  • Mentor senior engineers and influence technical roadmap decisions.
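The event-driven systems named above can be reduced to an in-process publish/subscribe sketch; a real deployment would use a broker such as Kafka or Pulsar, and the topic name here is made up:

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process stand-in for a message broker's publish/subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for this topic
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
seen = []
bus.subscribe("telemetry.cpu", seen.append)     # a consumer of CPU metrics
bus.publish("telemetry.cpu", {"host": "dc1-r3", "util": 0.82})
print(seen)  # -> [{'host': 'dc1-r3', 'util': 0.82}]
```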


Required Skills

  • Strong hands-on experience in Java and/or Python.
  • Deep understanding of distributed systems and microservices.
  • Experience with Kubernetes, Docker, CI/CD, and cloud-native deployments.
  • Databases: PostgreSQL/MySQL, NoSQL, time-series data.
  • Messaging systems: Kafka / Pulsar / RabbitMQ.
  • Observability tools: Prometheus, Grafana, ELK/OpenSearch.
  • Secure backend design (OAuth2, RBAC, audit logging).


Nice to Have

  • Experience with DCIM, NMS, or infrastructure automation platforms.
  • Exposure to hyperscale or colocation data centers.
  • AI/ML-based monitoring or capacity planning experience.


Why Join

  • Architect mission-critical platforms for large-scale data centers.
  • High-impact principal role with deep technical ownership.
  • Work on complex, real-world distributed systems problems.


VirtuesTech
Posted by Budime Haripriya
Hyderabad
8 - 10 yrs
₹10L - ₹30L / yr
.NET Compact Framework
Python
Software design
agile methodology

Title: Team Lead – Software Development

(Lead a team of developers to deliver applications in line with product strategy and growth)

Experience: 8–10 years

Department: Information Technology

Classification: Full-Time

Location: Hybrid in Hyderabad, India (3 days onsite and 2 days remote)


Job Description:

Looking for a full-time Software Development Team Lead to lead our high-performing Information Technology team. This person will play a key role in Clarity's business by overseeing a development team, focusing on existing systems and long-term growth. This person will serve as the technical leader, able to discuss data structures, new technologies, and methods of achieving system goals. This person will be crucial in facilitating collaboration among team members and providing mentoring.

Reporting to the Director, Software Development, this person will be responsible for the day-to-day operations of their team and be the first point of escalation and technical contact for the team.


Job Responsibilities:

  • Manages all activities of their software development team and sets goals for each team member to ensure timely project delivery.
  • Performs code reviews and writes code if needed.
  • Collaborates with the Information Technology department and business management team to establish priorities for the team's plan and manage team performance.
  • Provides guidance on project requirements, developer processes, and end-user documentation.
  • Supports an excellent customer experience by being proactive in assessing escalations and working with the team to respond appropriately.
  • Uses technical expertise to contribute towards building best-in-class products. Analyzes business needs and develops a mix of internal and external software systems that work well together.
  • Using Clarity platforms, writes, reviews, and revises product requirements and specifications. Analyzes software requirements, implements design plans, and reviews unit tests. Participates in other areas of the software development process.


Required Skills:

  • A Bachelor's degree in Computer Science, Information Technology, Engineering, or a related discipline.
  • Excellent written and verbal communication skills.
  • Experience with .NET Framework, web applications, Windows applications, and web services.
  • Experience in developing and maintaining applications using C#, .NET Core, ASP.NET MVC, and Entity Framework.
  • Experience in building responsive front ends using React.js, Angular.js, HTML5, CSS3, and JavaScript.
  • Experience in creating and managing databases, stored procedures, and complex queries with SQL Server.
  • Experience with Azure cloud infrastructure.
  • 8+ years of experience in designing and coding software in the above technology stack.
  • 3+ years of managing a team within a development organization.
  • 3+ years of experience in Agile methodologies.


Preferred Skills:

  • Experience in Python, WordPress, PHP
  • Experience in using Azure DevOps
  • Experience working with Salesforce or any other comparable ticketing system
  • Experience in insurance/consumer benefits/file processing (EDI)

One2n
Posted by Krunali Lole
Remote, Pune
9 - 12 yrs
₹30L - ₹45L / yr
SRE
Monitoring
DevOps
Terraform
OpenTelemetry


About the role:

We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.

At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.


Key responsibilities:

  • Own and drive reliability and infrastructure strategy across multiple products or client engagements
  • Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
  • Lead architecture discussions around observability, scalability, availability, and cost efficiency.
  • Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
  • Build and review production-grade CI/CD and IaC systems used across teams
  • Act as an escalation point for complex production issues and incident retrospectives.
  • Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
  • Mentor junior engineers through design reviews, technical guidance, and best practices.
  • Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
  • Help teams mature their on-call processes, reliability culture, and operational ownership.
  • Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice
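The SLO/SLI standardization above rests on simple error-budget arithmetic; a hedged sketch of the calculation (window length and targets are illustrative):

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

# A 99.9% availability SLO over a 30-day window leaves ~43 minutes of budget:
print(round(error_budget_minutes(0.999)))  # -> 43
```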


About you:

  • 9+ years of experience in SRE, DevOps, or software engineering roles
  • Strong experience designing and operating Kubernetes-based systems on AWS at scale
  • Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
  • Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
  • Strong understanding of distributed systems, microservices, and containerized workloads.
  • Ability to write and review production-quality code (Golang, Python, Java, or similar)
  • Solid Linux fundamentals and experience debugging complex system-level issues
  • Experience driving cross-team technical initiatives.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.


Nice to have:

  • Experience working in consulting or multi-client environments.
  • Exposure to cost optimization, or large-scale AWS account management
  • Experience building internal platforms or shared infrastructure used by multiple teams.
  • Prior experience influencing or defining engineering standards across organizations.


Talent Pro
Remote only
8 - 10 yrs
₹40L - ₹90L / yr
Python

  • 8+ years backend engineering experience in production systems
  • Proven experience architecting large-scale distributed systems (high throughput, low latency, high availability)
  • Deep expertise in system design, including scalability, fault tolerance, and performance optimization
  • Experience leading cross-team technical initiatives in complex systems
  • Strong understanding of security, privacy, compliance, and secure coding practices

NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Pune
3 - 5 yrs
₹8L - ₹15L / yr
Python
MongoDB
RESTful APIs
Flask
Django

Python Software Engineer (3–5 Years Experience)

Location: Pune



Role Overview

We are seeking skilled Python engineers to join our core product team. You will work on backend services, API development, and system integrations, contributing to a codebase of over 250,000 Python lines and collaborating with frontend, DevOps, and native code teams.


Key Responsibilities

·    Design, develop, and maintain scalable Python backend services and APIs

·    Optimize performance and reliability of large, distributed systems

·    Collaborate with frontend (JS/HTML/CSS) and native (C/C++/C#) teams

·    Write unit/integration tests and participate in code reviews

·    Troubleshoot production issues and implement robust solutions


Required Skills

·    3–5 years of professional Python development experience

·    Strong understanding of OOP, design patterns, and modular code structure

·    Experience with MongoDB (PyMongo), Mako, RESTful APIs, and asynchronous programming

·    Familiarity with code quality tools (flake8, pylint) and test frameworks (pytest, unittest)

·    Experience with Git and collaborative development workflows

·    Ability to read and refactor large, multi-module codebases
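The test frameworks mentioned above (pytest, unittest) favor small, pure functions; a minimal pytest-style sketch (the function and test are illustrative, not from the employer):

```python
def normalize_email(raw: str) -> str:
    """Lowercase and strip whitespace: a typical small unit under test."""
    return raw.strip().lower()

def test_normalize_email():
    # pytest discovers test_* functions automatically; plain assert is enough
    assert normalize_email("  User@Example.COM ") == "user@example.com"

test_normalize_email()  # also runnable directly, without pytest installed
```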


Nice to Have

·    Experience with web frameworks (web.py, Flask, Django)

·    Knowledge of C/C++ or C# for cross-platform integrations

·    Familiarity with CI/CD, Docker, and cloud deployment

·    Exposure to security, encryption, or enterprise SaaS products


What We Offer

·    Opportunity to work on a mission-critical, enterprise-scale product

·    Collaborative, growth-oriented engineering culture

·    Flexible work arrangements (remote/hybrid)

·    Competitive compensation and benefits

Resources Valley
Posted by Manind Gupta
Jaipur, Indore
4 - 8 yrs
₹14L - ₹26L / yr
Python
React.js


We are seeking a mature, proactive, and highly capable Senior Full Stack Engineer with over 5 years of experience in Python, React, Cloud Services, and Generative AI (LLM, RAG, Agentic AI). The ideal candidate can handle multiple challenges independently, think smartly, and build scalable end-to-end applications while also owning architecture and deployment.


Must Have Skills

  • Python (FastAPI, Django REST Framework, Flask)
  • React JS
  • Cloud Services (VMs, Storage, Authentication & Authorization, Functions, and Deployments)
  • Microservices, Serverless Architecture
  • Docker, container orchestration (Kubernetes)
  • API Development & Integration
  • Bitbucket or Git-based version control
  • Agile/Kanban working model
  • Familiarity with AI-powered coding assistants such as GitHub Copilot, Cursor AI, or Lovable AI
  • Basic understanding of Generative AI concepts and prompt engineering



Good to Have Skills

  • Experience with AI orchestration tools (LangChain, LlamaIndex, Semantic Kernel)
  • Generative AI (LLMs, RAG Framework, Vector DB, AI Chatbots, Agentic AI)
  • API Testing Tools (Postman)
  • CI/CD Pipelines
  • Advanced Cloud Networking & Security
  • Automation Testing (Playwright, Selenium) 


Preferred Personal Attributes

  • Highly proactive and self-driven
  • Smart problem solver with strong analytical ability
  • Ability to work independently in ambiguous and complex scenarios
  • Strong communication & stakeholder management skills
  • Ownership mindset and willingness to handle multiple challenges at once



Key Responsibilities


Full Stack Development

  • Build and maintain production-grade applications using Python (FastAPI/Django/Flask) and React / Next.js.
  • Develop reusable frontend components and optimized backend services/microservices.
  • Ensure clean architecture, maintainability, and code quality.
  • Own development across the lifecycle—design, build, testing, deployment.
  • Develop AI-driven applications using LLMs (OpenAI, LLaMA, Claude, Gemini, etc.).
  • Build and optimize RAG pipelines, vector searches, embeddings, and agent workflows.
  • Integrate vector databases: Pinecone, FAISS, Chroma, MongoDB Atlas Vector Search.
  • Build AI chatbots, automation agents, and intelligent assistants.
  • Apply prompt engineering, fine-tuning, and model evaluation best practices.
  • Deploy, manage, and monitor cloud workloads on AWS/Azure/GCP.
  • Design and implement serverless architectures, microservices, event-driven flows.
  • Use Docker, CI/CD, and best DevOps practices.
  • Ensure scalability, security, cost optimization, and reliability.
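The RAG-related bullets above can be sketched in miniature: a retriever that ranks documents by cosine similarity between a query embedding and document embeddings. The character-frequency "embedding" below is a toy stand-in for a real embedding model, and the documents are invented examples; a production pipeline would query a vector database (Pinecone, FAISS, Chroma) instead.

```python
import math

def embed(text: str) -> list[float]:
    # Toy embedding: a 26-dim character-frequency vector.
    # A real system would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k;
    # the retrieved context is then passed to the LLM prompt.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Invoices are processed within five business days.",
    "The office cafeteria menu changes weekly.",
    "Refunds for invoices require manager approval.",
]
context = retrieve("How are invoices handled?", docs)
print(context)
```

The same shape — embed, rank, select top-k, stuff into the prompt — carries over when the toy pieces are swapped for real embeddings and a vector store.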

Collaboration & Leadership

  • Comfortably handle ambiguity, break down problems, and deliver with ownership.
  • Lead technical initiatives and mentor junior team members.
  • Work closely with cross-functional teams in Agile/Kanban environments.




Read more
Resources Valley

at Resources Valley

1 recruiter
Manind Gupta
Posted by Manind Gupta
Pune
8 - 16 yrs
₹25L - ₹38L / yr
skill iconPython
FastAPI
API

Role Overview:

We are seeking a backend-focused Software Engineer with deep expertise in REST APIs, real-time integrations, and cloud-based application services. The ideal candidate will build scalable backend systems, integrate real-time data flows, and contribute to system design and documentation. This is a hands-on role working with global teams in a fast-paced, Agile environment.

Key Responsibilities:

• Design, develop, and maintain REST APIs and backend services using Python, FastAPI, and SQLAlchemy.

• Build and support real-time integrations using AWS Lambda, API Gateway, and EventBridge.

• Develop and maintain Operational Data Stores (ODS) for real-time data access.

• Write performant SQL queries and work with dimensional data models in PostgreSQL.

• Contribute to cloud-based application logic and data orchestration.

• Containerize services using Docker and deploy via CI/CD pipelines.

• Implement automated testing using pytest, pydantic, and related tools.

• Collaborate with cross-functional Agile teams using tools like Jira.

• Document technical workflows, APIs, and system integrations with clarity and consistency.

• Should have experience in team management.
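The real-time integration work described here can be sketched with a minimal AWS Lambda handler for an EventBridge event. The fields under `detail` are invented for illustration, not a real event schema; an actual integration would validate against the producer's contract and upsert the record into the ODS (PostgreSQL).

```python
import json

def handler(event: dict, context=None) -> dict:
    # Minimal Lambda handler shape: EventBridge delivers the payload
    # under "detail"; return an API-Gateway-style response.
    detail = event.get("detail", {})
    order_id = detail.get("order_id")  # hypothetical field name
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing order_id"})}
    # A real handler would write this record into the ODS here.
    record = {"order_id": order_id,
              "status": detail.get("status", "unknown")}
    return {"statusCode": 200, "body": json.dumps(record)}

resp = handler({"detail": {"order_id": "A-42", "status": "shipped"}})
print(resp["statusCode"])
```

The same handler function can be exercised locally with pytest by calling it with sample event dictionaries, which is how automated testing of these integrations typically starts.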

Required Skills & Experience:

• 8+ years of backend or integrations engineering experience.

• Expert-level knowledge of REST API development and real-time system design.

• Strong experience with Python (FastAPI preferred) and SQLAlchemy.

• PostgreSQL and advanced SQL.

• AWS Lambda, API Gateway, EventBridge.

• Operational Data Stores (ODS) and distributed system integration.

• Experience with Docker, Git, CI/CD tools, and automated testing frameworks.

• Experience working in Agile environments and collaborating with cross-functional teams.

• Comfortable producing and maintaining clear technical documentation.

• Working knowledge of React is acceptable but not a focus.

• Hands-on experience working with Databricks or similar data platforms.

Education & Certifications:

• Bachelor’s degree in Computer Science, Engineering, or a related field (required).

• Master’s degree is a plus.

• Certifications in AWS (e.g., Developer Associate, Solutions Architect) or Python frameworks are highly preferred.

Read more
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
skill iconGit
skill iconGitHub
skill iconPython
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
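One concrete preparation step named above — finding files over GitHub's 100 MB limit so they can be moved to Git LFS before migration — can be sketched as a small script. The function name and the idea of emitting `git lfs track` commands are illustrative; a real migration would also rewrite history so the large blobs never reach GitHub.

```python
import os

LIMIT = 100 * 1024 * 1024  # GitHub's hard per-file size limit

def find_oversized(root: str, limit: int = LIMIT) -> list[str]:
    # Walk a checked-out Perforce workspace and list files that must
    # be handled via Git LFS before pushing to GitHub.
    oversized = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > limit:
                    oversized.append(os.path.relpath(path, root))
            except OSError:
                continue  # broken symlink, permission issue, etc.
    return sorted(oversized)

# Emit the LFS tracking commands for review before running them.
for path in find_oversized("."):
    print(f"git lfs track '{path}'")
```

Running this against the workspace before the p4-to-git conversion gives the team a reviewable list of candidates, rather than discovering oversized files when the push is rejected.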

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Read more
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram, Pune
3 - 5 yrs
₹15L - ₹25L / yr
Terraform
Splunk
DevOps
Windows Azure
SQL Azure
+12 more

JOB DETAILS:

* Job Title: Lead I - Azure, Terraform, GitLab CI 

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 3-5 years

* Location: Trivandrum/Pune

 

Job Description

Job Title: DevOps Engineer

Experience: 4–8 Years 

Location: Trivandrum & Pune 

Job Type: Full-Time

Mandatory skills: Azure, Terraform, GitLab CI, Splunk

 

Job Description

We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.

 

Key Responsibilities

  • Design, manage, and automate Azure cloud infrastructure using Terraform.
  • Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
  • Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
  • Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
  • Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
  • Analyze system logs and performance metrics to troubleshoot and optimize performance.
  • Ensure infrastructure security, compliance, and scalability best practices are followed.

 

Mandatory Skills

Candidates must have hands-on experience with the following technologies:

  • Azure – Cloud infrastructure management and deployment
  • Terraform – Infrastructure as Code for scalable provisioning
  • GitLab CI – Pipeline development, automation, and integration
  • Splunk – Monitoring, logging, and troubleshooting production systems

 

Preferred Skills

  • Experience with Harness (for CI/CD)
  • Familiarity with Azure Monitor and Dynatrace
  • Scripting proficiency in Python, Bash, or PowerShell
  • Understanding of DevOps best practices, containerization, and microservices architecture
  • Exposure to Agile and collaborative development environments

 

Skills Summary

Azure, Terraform, GitLab CI, Splunk (Mandatory) Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell

 

Skills: Azure, Splunk, Terraform, Gitlab Ci

 

******

Notice period - 0 to 15days only

Job stability is mandatory

Location: Trivandrum/Pune

Read more
A robotics company working on industrial robots

Agency job
via RedString by Kaushik Reddyshetty
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹20L / yr
OrCAD
EAGLE
Raspberry Pi
Embedded C
skill iconPython
+15 more

Principal Electrical & Electronics Engineer – Robotics

About the Role

Octobotics Tech is seeking a Principal Electrical & Electronics Engineer to lead the design, development, and validation of high-reliability electronic systems for next-generation robotic platforms. This is a core engineering leadership position for professionals with strong multidisciplinary experience in power electronics, embedded systems, EMI/EMC compliance, and safety-critical design, ideally within marine, industrial, or hazardous environments.

Key Responsibilities

·       System Architecture & Leadership

·       Architect and supervise electrical and electronic systems for autonomous and remotely operated robotic platforms, ensuring reliability under challenging industrial conditions.

·       Lead end-to-end product development — from design, prototyping, and integration to validation and deployment.

·       Develop modular, scalable architectures enabling payload integration, sensor fusion, and AI-based control.

·       Collaborate with firmware, AI/ML, and mechanical teams to achieve seamless system-level integration.

·       Power & Safety Systems

·       Design robust, stable, and redundant power supply architectures for FPGA-based controllers, sensors, and high-value electronics.

·       Engineer surge-protected, isolated, and fail-safe electrical systems compliant with MIL-STD, ISO, and IEC safety standards.

·       Implement redundancy, grounding, and safety strategies for operations in Oil & Gas, Offshore, and Hazardous Zones.

·       Compliance & Validation

·       Ensure adherence to EMI/EMC, ISO 13485, IEC 60601, and industrial safety standards.

·       Conduct design simulations, PCB design reviews, and validation testing using tools such as KiCAD, OrCAD, MATLAB/Simulink, and LabVIEW.

·       Lead certification, quality, and documentation processes for mission-critical subsystems.

·       Innovation & Mentorship

·       Apply AI-enhanced signal processing, fuzzy control, and advanced filtering methods to embedded hardware.

·       Mentor and guide junior engineers and technicians to foster a knowledge-driven, hands-on culture.

·       Contribute to R&D strategy, product roadmaps, and technical proposals supporting innovation and fundraising.

Required Technical Skills

·       Hardware & PCB Design: KiCAD, OrCAD, EAGLE; 2–6 layer mixed-signal and power boards.

·       Embedded Systems: ARM Cortex A/M, STM32, ESP32, Nordic nRF52, NVIDIA Jetson, Raspberry Pi.

·       Programming: Embedded C, Python, MATLAB, PyQt, C/C++.

·       Simulation & Control: MATLAB/Simulink, LabVIEW; PID, fuzzy logic, and ML-based control.

·       Compliance Expertise: EMI/EMC, MIL-STD, ISO 13485, IEC 60601, and industrial safety standards.

·       Hands-On Skills: Soldering, circuit debugging, power testing, and system integration.

Qualifications

·       Education: B.Tech/M.S. in Electrical & Electronics Engineering (NIT/IIT preferred).

·       Experience: Minimum 5+ years in electronics system design, hardware-firmware integration, or robotics/industrial systems.

·       Preferred Background: Experience in R&D leadership, system compliance, and end-to-end hardware development.

Compensation & Growth

CTC: Competitive and flexible for exceptional candidates (aligned with ₹12–20 LPA range).

Engagement: Full-time, 5.5-day work week with fast-paced project timelines.

Rewards: Accelerated growth opportunities in a deep-tech robotics environment driving innovation in inspection and NDT automation.

Read more
Remote only
6 - 12 yrs
₹45L - ₹50L / yr
skill iconPython
skill iconReact.js
skill iconJavascript
API management
RESTful APIs

About the Role

We are seeking a hands-on Tech Lead to design, build, and integrate AI-driven systems that automate and enhance real-world business workflows. This is a high-impact role for someone who enjoys full-stack ownership — from backend AI architecture to frontend user experiences — and can align engineering decisions with measurable product outcomes.

You will begin as a strong individual contributor, independently architecting and deploying AI-powered solutions. As the product portfolio scales, you will lead a distributed team across India and Australia, acting as a System Integrator to align engineering, data, and AI contributions into cohesive production systems.

Example Project

Design and deploy a multi-agent AI system to automate critical stages of a company’s sales cycle, including:

  • Generating client proposals using historical SharePoint data and CRM insights
  • Summarizing meeting transcripts
  • Drafting follow-up communications
  • Feeding structured insights into dashboards and workflow tools

The solution will combine RAG pipelines, LLM reasoning, and React-based interfaces to deliver measurable productivity gains.

Key Responsibilities

  • Architect and implement AI workflows using LLMs, vector databases, and automation frameworks
  • Act as a System Integrator, coordinating deliverables across distributed engineering and AI teams
  • Develop frontend interfaces using React/JavaScript to enable seamless human-AI collaboration
  • Design APIs and microservices integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure)
  • Drive architecture decisions balancing scalability, performance, and security
  • Collaborate with product managers, clients, and data teams to translate business use cases into production-ready systems
  • Mentor junior engineers and evolve into a broader leadership role as the team grows

Ideal Candidate Profile

Experience Requirements

  • 5+ years in full-stack development (Python backend + React/JavaScript frontend)
  • Strong experience in API and microservice integration
  • 2+ years leading technical teams and coordinating distributed engineering efforts
  • 1+ year of hands-on AI project experience (LLMs, Transformers, LangChain, OpenAI/Azure AI frameworks)
  • Prior experience in B2B SaaS environments, particularly in AI, automation, or enterprise productivity solutions

Technical Expertise

  • Designing and implementing AI workflows including RAG pipelines, vector databases, and prompt orchestration
  • Ensuring backend and AI systems are scalable, reliable, observable, and secure
  • Familiarity with enterprise integrations (SharePoint, Teams, Databricks, Azure)
  • Experience building production-grade AI systems within enterprise SaaS ecosystems




Read more
Mango Sciences
Remote only
5 - 7 yrs
₹10L - ₹15L / yr
skill iconPython
SQL
SQL queries

Database Programmer / Developer (SQL, Python, Healthcare)

Job Summary

We are seeking a skilled and experienced Database Programmer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining our database systems, with a strong focus on data integrity, performance, and security. The role requires expertise in SQL, strong programming skills in Python, and prior experience working within the healthcare domain to handle sensitive data and complex regulatory requirements.

Key Responsibilities

  • Design, implement, and maintain scalable and efficient database schemas and systems.
  • Develop and optimize complex SQL queries, stored procedures, and triggers for data manipulation and reporting.
  • Write and maintain Python scripts to automate data pipelines, ETL processes, and database tasks.
  • Collaborate with data analysts, software developers, and other stakeholders to understand data requirements and deliver robust solutions.
  • Ensure data quality, integrity, and security, adhering to industry standards and regulations such as HIPAA.
  • Troubleshoot and resolve database performance issues, including query tuning and indexing.
  • Create and maintain technical documentation for database architecture, processes, and applications.
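The responsibilities above center on Python-scripted data handling under healthcare constraints. As one hedged illustration of the "security and HIPAA" bullet, a common early ETL step is replacing a direct patient identifier with a salted hash; the column names and tiny CSV below are invented, not a real claims schema.

```python
import csv
import hashlib
import io

def deidentify(rows: list[dict], salt: str = "demo-salt") -> list[dict]:
    # Replace the direct identifier with a salted, truncated hash so
    # downstream tables carry a stable key but no patient name.
    out = []
    for row in rows:
        row = dict(row)
        pid = row.pop("patient_name")  # hypothetical column
        digest = hashlib.sha256((salt + pid).encode()).hexdigest()
        row["patient_key"] = digest[:12]
        out.append(row)
    return out

raw = io.StringIO("patient_name,diagnosis\nJane Doe,J45.901\n")
clean = deidentify(list(csv.DictReader(raw)))
print(clean[0])
```

In practice the salt would live in a secrets manager and the transformation would run inside the pipeline (e.g., as a step before loading into the warehouse), but the shape — pop the identifier, derive a key, keep the clinical fields — is the same.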

Required Qualifications

  • Experience:
  • Proven experience as a Database Programmer, SQL Developer, or a similar role.
  • Demonstrable experience working with database systems, including data modeling and design.
  • Strong background in developing and maintaining applications and scripts using Python.
  • Direct experience within the healthcare domain is mandatory, including familiarity with medical data (e.g., patient records, claims data) and related regulatory compliance (e.g., HIPAA).
  • Technical Skills:
  • Expert-level proficiency in Structured Query Language (SQL) and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
  • Solid programming skills in Python, including experience with relevant libraries for data handling (e.g., Pandas, SQLAlchemy).
  • Experience with data warehousing concepts and ETL (Extract, Transform, Load) processes.
  • Familiarity with version control systems, such as Git.

Preferred Qualifications

  • Experience with NoSQL databases (e.g., MongoDB, Cassandra).
  • Knowledge of cloud-based data platforms (e.g., AWS, GCP, Azure).
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Familiarity with other programming languages relevant to data science or application development.

Education

  • Bachelor’s degree in computer science, Information Technology, or a related field.

 

To process your resume for the next process, please fill out the Google form with your updated resume.


https://forms.gle/f7zgYAa632ww5Teb6

Read more
Chennai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
skill iconData Science
skill iconPython
Forecasting
skill iconMachine Learning (ML)

Hi,


Greetings from Ampera!


We are looking for a Data Scientist with strong Python & Forecasting experience.


Title: Data Scientist – Python & Forecasting

Experience: 4 to 7 Yrs

Location: Chennai/Bengaluru

Type of hire: PWD and Non PWD

Employment Type: Full Time

Notice Period: Immediate Joiner

Working hours: 09:00 a.m. to 06:00 p.m.

Workdays: Mon - Fri

 

 

Job Description:

 

We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.


Key Responsibilities

  • Develop and implement forecasting models (time-series and machine learning based).
  • Perform exploratory data analysis (EDA), feature engineering, and model validation.
  • Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
  • Design, train, validate, and optimize machine learning models for real-world business use cases.
  • Apply appropriate ML algorithms based on business problems and data characteristics
  • Write clean, modular, and production-ready Python code.
  • Work extensively with Python Packages & libraries for data processing and modelling.
  • Collaborate with Data Engineers and stakeholders to deploy models into production.
  • Monitor model performance and improve accuracy through continuous tuning.
  • Document methodologies, assumptions, and results clearly for business teams.

 

Technical Skills Required:

Programming

  • Strong proficiency in Python
  • Experience with Pandas, NumPy, Scikit-learn

Forecasting & Modelling

  • Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
  • Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
  • Understanding of seasonality, trend decomposition, and statistical modeling
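The "seasonality and trend decomposition" bullet can be illustrated with a hand-rolled additive decomposition: estimate the trend with a centered moving average, then average the detrended values at each position within the season. This is a pared-down sketch of what library routines (e.g., statsmodels' seasonal decomposition) do; the synthetic series is invented for the example.

```python
def moving_average(xs: list[float], window: int) -> list:
    # Centered moving average as the trend estimate
    # (window = season length; edges have no estimate).
    half = window // 2
    trend = []
    for i in range(len(xs)):
        if i - half < 0 or i + half >= len(xs):
            trend.append(None)
        else:
            trend.append(sum(xs[i - half:i + half + 1]) / window)
    return trend

def seasonal_means(xs, trend, period: int) -> list[float]:
    # Average the detrended values at each within-season position.
    buckets = [[] for _ in range(period)]
    for i, (x, t) in enumerate(zip(xs, trend)):
        if t is not None:
            buckets[i % period].append(x - t)
    return [sum(b) / len(b) if b else 0.0 for b in buckets]

# Synthetic series: linear trend plus a period-3 seasonal pattern.
series = [i + [0.0, 5.0, -5.0][i % 3] for i in range(30)]
trend = moving_average(series, 3)
seasonal = seasonal_means(series, trend, 3)
print([round(s, 2) for s in seasonal])  # recovers [0.0, 5.0, -5.0]
```

Because the moving-average window equals the season length, the seasonal component cancels out of the trend estimate, so the detrended residuals recover the injected pattern exactly; real data would of course also contain noise.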

Data & Deployment

  • Experience handling structured and large datasets
  • SQL proficiency
  • Exposure to model deployment (API-based deployment preferred)
  • Knowledge of MLOps concepts is an added advantage

Tools (Preferred)

  • TensorFlow / PyTorch (optional)
  • Airflow / MLflow
  • Cloud platforms (AWS / Azure / GCP)


Educational Qualification

  • Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.


Key Competencies

  • Strong analytical and problem-solving skills
  • Ability to communicate insights to technical and non-technical stakeholders
  • Experience working in agile or fast-paced environments


Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Read more
Remote only
6 - 12 yrs
₹45L - ₹60L / yr
skill iconPython
skill iconReact.js

Strong AI & Full-Stack Tech Lead

Mandatory (Experience 1): Must have 5+ years of experience in full-stack development, including Python for backend development and React/JavaScript for frontend, along with API/microservice integration.

Mandatory (Experience 2): Must have 2+ years of experience in leading technical teams, coordinating engineers, and acting as a system integrator across distributed teams.

Mandatory (Experience 3): Must have 1+ year of hands-on experience in AI projects, including LLMs, Transformers, LangChain, or OpenAI/Azure AI frameworks.

Mandatory (Tech Skills 1): Must have experience in designing and implementing AI workflows, including RAG pipelines, vector databases, and prompt orchestration.

Mandatory (Tech Skills 2): Must ensure backend and AI system scalability, reliability, observability, and security best practices.

Mandatory (Company): Must have experience working in B2B SaaS companies delivering AI, automation, or enterprise productivity solutions

Tech Skills (Familiarity): Should be familiar with integrating AI systems with enterprise platforms (SharePoint, Teams, Databricks, Azure) and enterprise SaaS environments.

Mandatory (Note): Both founders are based in Australia; the design (2) and developer (4) teams are in India. Indian shift timings apply.

Read more
KG Agile
Hiring HR
Posted by Hiring HR
Coimbatore
0 - 15 yrs
₹3.6L - ₹6.8L / yr
skill iconPython
skill iconJava
skill iconC++
Data Structures
skill iconMachine Learning (ML)
+5 more

Position: Assistant Professor


Department: CSE / IT

Experience: 0 – 15 Years

Joining: Immediate / Within 1 Month

Salary: As per norms and experience


🎓 Qualification:


ME / M.Tech in Computer Science Engineering / Information Technology


Ph.D. (Preferred but not mandatory)


First Class in UG & PG as per AICTE norms


🔍 Roles & Responsibilities:


Deliver high-quality lectures for UG / PG programs


Prepare lesson plans, course materials, and academic content


Guide student projects and internships


Participate in curriculum development and academic planning


Conduct internal assessments, evaluations, and result analysis


Mentor students for academic and career growth


Participate in departmental research activities


Publish research papers in reputed journals (Scopus/SCI preferred)


Attend Faculty Development Programs (FDPs), workshops, and conferences


Contribute to NAAC / NBA accreditation processes


Support institutional administrative responsibilities


💡 Required Skills:


Strong subject knowledge in CSE / IT domains


Programming proficiency (Python, Java, C++, Data Structures, AI/ML, Cloud, etc.)


Excellent communication and presentation skills


Research orientation and academic enthusiasm


Team collaboration and mentoring ability

Read more
CLOUDSUFI

at CLOUDSUFI

3 recruiters
Ayushi Dwivedi
Posted by Ayushi Dwivedi
Noida
5 - 7 yrs
₹25L - ₹32L / yr
Large Language Models (LLM)
skill iconPython
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning

Please send your resume at ayushi.dwivedi at cloudsufi.com


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services, and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary

We are seeking a skilled and motivated AI/ML Engineer with a background in Natural Language Processing (NLP), speech technologies, and generative AI. The ideal candidate will have hands-on experience building AI projects from conception to deployment, including fine-tuning large language models (LLMs), developing conversational agents, and implementing machine learning pipelines. You will play a key role in building and enhancing our AI-powered products and services.


Key Responsibilities

Design, develop, and deploy advanced conversational AI systems, including customer onboarding agents and support chatbots for platforms like WhatsApp.

Process, transcribe, and diarize audio conversations to create high-quality datasets for fine-tuning large language models (LLMs).

Develop and maintain robust, scalable infrastructure for AI model serving, utilizing technologies like FastAPI, Docker, and cloud platforms (e.g., Google Cloud Platform).

Integrate and leverage knowledge graphs and contextual information systems to create more personalized, empathetic, and goal-oriented dialogues.

Engineer and implement retrieval-augmented generation (RAG) systems to enable natural language querying of internal company documents, optimizing for efficiency and informativeness.

Fine-tune and deploy generative models like Stable Diffusion for custom asset creation, with a focus on improving precision and reducing generative artifacts (FID score).

Collaborate with cross-functional teams, including product managers and designers, to build user-friendly interfaces and tools that enhance productivity and user experience.

Contribute to the research and publication of novel AI systems and models.


Qualifications and Skills

Education: Bachelor of Engineering (B.E.) in Computer Science or a related field.

Experience: 5+ years of relevant professional experience as a Machine Learning Engineer or in a similar role.

Programming Languages: Expertise in Python and a strong command of SQL and JavaScript.

Machine Learning Libraries: Hands-on experience with PyTorch, Scikit-learn, Hugging Face Transformers, Diffusers, Librosa, LangGraph, and the OpenAI API.

Software and Tools: Proficiency with Docker and various databases including PostgreSQL, MongoDB, Redis, and Elasticsearch.


Core Competencies:

Experience developing end-to-end AI pipelines, from data processing and model training to API deployment and integration.

Familiarity with MLOps principles and tools for building and maintaining production-level machine learning systems.

A portfolio of projects, publications, or open-source contributions in the fields of NLP, Computer Vision, or Speech Analysis is a plus.


Excellent problem-solving skills and the ability to think strategically to deliver optimized and efficient solutions.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Mumbai, Pune
3 - 6 yrs
Best in industry
skill iconPython
PySpark
pandas
SQL
ADF
+2 more

* Python (3 to 6 years): Strong expertise in data workflows and automation

* Spark (PySpark): Hands-on experience with large-scale data processing

* Pandas: For detailed data analysis and validation

* Delta Lake: Managing structured and semi-structured datasets at scale

* SQL: Querying and performing operations on Delta tables

* Azure Cloud: Compute and storage services

* Orchestrator: Good experience with either ADF or Airflow

Read more
Amoolya Capital

at Amoolya Capital

2 candid answers
Shantanu Sharma
Posted by Shantanu Sharma
Remote only
0 - 3 yrs
₹10000 - ₹40000 / mo
skill iconC++
Operating systems
Probability
Thread
Data Structures
+2 more

🚀 Hiring: C++ Content Writer Intern

📍 Remote | ⏳ 3 Months | 💼 Internship

We’re looking for someone who has strong proficiency in C++, DSA and maths (probability, statistics).


You should be comfortable with:

1. Modern C++ (RAII, memory management, move semantics)

2. Concurrency & low-latency concepts (threads, atomics, cache behavior)

3. OS fundamentals (threads vs processes, virtual memory)

4. Strong Maths (probability, stats)

5. Writing, Reading and explaining real code


What you’ll do:

1. Write deep technical content on C++ and general coding topics.

2. Break down core computer science, HFT-style, low-latency concepts

3. Create articles, code deep dives, and explainers


What you get:

1. Good Pay as per industry standards

2. Exposure to real C++ applied in quant engineering

3. Mentorship from top engineering minds.

4. A strong public technical portfolio

5. Clear signal for Quant Developer / SDE/ Low-latency C++ roles.

Read more
Infosec Ventures
Gurugram
3 - 7 yrs
₹5L - ₹15L / yr
Fullstack Developer
skill iconReact.js
skill iconNextJs (Next.js)
skill iconNodeJS (Node.js)
skill iconPython

Job description

Position Overview


Job Title: Vibe Coder


Company: Infosec Ventures (www.infosecventures.com) — a fast-moving cybersecurity-focused technology and consulting firm that builds secure SaaS products, bespoke web applications and delivers security consulting, assessments and training to enterprise clients.


Join a small, highly collaborative on-site team in Gurugram where you will contribute end-to-end to product development, use cutting-edge frontend and backend technologies, leverage AI-assisted coding tools to accelerate delivery, and influence product strategy in a fast-paced, non-9-to-5, face-to-face working culture.


Key Responsibilities

  • Develop and ship high-quality frontend applications using Vite, ReactJS and NextJS; implement responsive, accessible UIs and reusable design-system components that align with product and UX goals.
  • Design and implement backend services in Node.js and Python (Express/Fastify, Flask/FastAPI), building scalable, maintainable APIs and microservices that integrate with PostgreSQL, MySQL and MongoDB.
  • Use AI-assisted coding tools (e.g., Cursor, Windsurf, Lovable.dev, AntiGravity or similar) to accelerate implementation, generate prototypes, automate repetitive tasks and create tests and documentation while maintaining strong engineering standards.
  • Own features end-to-end: contribute to architecture and technical design, implement code, write tests, deploy to CI/CD pipelines, instrument observability, and iterate based on user feedback and analytics.
  • Collaborate closely with product managers, UX/UI designers, QA and security stakeholders to translate requirements into technical deliverables, estimates and release plans; participate actively in roadmap and prioritization discussions.
  • Participate in rigorous code reviews, promote best practices for testing and security-first development, and help maintain an efficient, reliable development workflow to enable rapid iterations.
  • Write clear prompts for AI tools and produce high-quality documentation, README files and code comments to ensure team knowledge transfer and maintainability.


Required Qualifications

  • 2–4 years of professional software development experience, preferably with product or agency teams delivering web applications.
  • Proven experience building frontend applications with Vite, ReactJS and NextJS, including SSR/ISR patterns, routing, state management and performance optimization.
  • Strong backend development skills in Node.js and Python using modern frameworks (Express, Fastify, Flask, FastAPI) and experience designing RESTful APIs and services.
  • Solid practical experience with relational and document databases (PostgreSQL, MySQL, MongoDB): schema design, queries, indexing and migrations.
  • Demonstrable experience using AI coding assistants to accelerate development, refactor code, generate tests or documentation, and improve code quality.
  • Good software engineering fundamentals: data structures, algorithms, system design basics, debugging and writing maintainable, testable code.
  • Familiarity with Git, code review workflows and CI/CD; GraphQL knowledge is a plus.
  • Strong written and verbal communication skills, clear prompt-writing ability, and ability to explain technical trade-offs to non-technical stakeholders.
  • Ability to work on-site in Gurugram, Haryana, India; flexibility to work beyond strict 9–5 hours when project needs demand.
  • Bachelor’s degree in Computer Science, Engineering or equivalent practical experience.


Preferred Qualifications

  • Experience with TypeScript, GraphQL, WebSockets or other real-time systems.
  • Familiarity with cloud platforms (AWS, GCP or Azure), containerization with Docker and building CI/CD pipelines.
  • Experience with testing frameworks and automated test strategies (Jest, Cypress, PyTest) and a focus on test coverage and reliability.
  • Knowledge of performance optimization, front-end security best practices and accessibility (a11y) standards.
  • Prior exposure to product management or demonstrated product sense: defining MVPs, prioritization and customer-driven iteration.
  • Contributions to open-source projects, personal projects or a portfolio of shipped products demonstrating craftsmanship.
  • Experience in agency-style or client-facing roles where speed, communication and adaptability are essential.


What We Offer

  • Work Location: On-site in Gurugram, Haryana, India with a collaborative, face-to-face team culture.
  • Competitive salary commensurate with experience and performance-based bonuses.
  • Meaningful ownership of product features and direct influence on product direction and roadmap.
  • Opportunities for rapid skill growth through regular code reviews, mentorship, a learning budget and access to modern AI-enabled development tools.
  • Comprehensive health benefits, paid time off and support for work-life balance while acknowledging flexible hours outside a strict 9–5 schedule.
  • Modern development environment with investment in tooling, cloud credits and a stipend for conferences or training.
  • Small, collaborative teams with frequent cross-functional interaction—strong opportunity to grow into senior engineering or product roles.


Read more
Virtana

Krutika Devadiga
Posted by Krutika Devadiga
Pune
5 - 10 yrs
Best in industry
Python
Kubernetes
Docker
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+3 more

Role Overview:

Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.


We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.


Work Location: Pune/ Chennai


Job Type: Hybrid


Role Responsibilities:

  • The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
  • Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
  • Communicate effectively with people having differing levels of technical knowledge.
  • Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
  • Provide customers with complex application support, problem diagnosis and problem resolution

 

Required Qualifications:

  • 4+ years of experience in a web-application-centric, client-server application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
  • Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
  • Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go.
  • Bachelor's (B.E, B.Tech) or Master's (M.E, M.Tech, MCA) degree in Computer Science, Computer Engineering, or equivalent
  • 2 years of development experience in a public cloud environment using Kubernetes (Google Cloud and/or AWS)

 

Desired Qualifications:

  • Prior experience with other virtualization platforms like OpenShift is a plus
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
  • Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus

 

About Virtana:

Virtana delivers the industry's broadest and deepest observability platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.

Read more
Remote only
3 - 8 yrs
₹20L - ₹30L / yr
ETL
Google Cloud Platform (GCP)
Python
Pipeline management
BigQuery

About Us:


CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary:


We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities:


  • ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.
  • Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
  • Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
  • Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 
  • API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
  • Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
  • Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
  • Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
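The ingest/clean/transform stage described above can be sketched in miniature. This is a toy illustration only: the feed, column names, and validation rules are invented, and a production pipeline would run on Cloud Run or Dataflow rather than in-process.

```python
import csv
import io
import json

# Hypothetical raw feed; the columns and values are invented for the demo.
RAW = """\
country,year,population
India,2021, 1 393 409 038
India,2022,1407563842
,2023,n/a
"""

def clean_rows(text):
    """Toy version of the ingest/clean/transform stage: parse CSV, drop rows
    that fail validation, and normalise numeric fields."""
    for row in csv.DictReader(io.StringIO(text)):
        if not row["country"]:
            continue  # entity missing -- cannot resolve or generate an ID
        digits = row["population"].replace(" ", "")
        if not digits.isdigit():
            continue  # fails the numeric check; a real pipeline would log this
        yield {
            "country": row["country"],
            "year": int(row["year"]),
            "population": int(digits),
        }

if __name__ == "__main__":
    print(json.dumps(list(clean_rows(RAW)), indent=2))
```

Here two of the three data rows survive: the blank-country row is rejected at the entity-resolution step and the spaced-out number is normalised before loading.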

Qualifications and Skills:


  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
  • Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
  • Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
  • Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:


  • Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
  • Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
  • Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
  • Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
  • Experience with data validation techniques and tools.
  • Familiarity with CI/CD practices and the ability to work in an Agile framework.
  • Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:


  • Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
  • Familiarity with similar large-scale public dataset integration initiatives.
  • Experience with multilingual data integration.
Read more
Greatify
Ciline Sanjanyaa
Posted by Ciline Sanjanyaa
Chennai
9 - 18 yrs
₹7L - ₹17L / yr
Playwright
Python
Appium
Test management
Automation
+3 more

About the job:

Job Title: QA Lead

Location: Teynampet, Chennai

Job Type: Full-time

Experience Level: 9+ Years

Company: Gigadesk Technologies Pvt. Ltd. [Greatify]

Website: www.greatify.ai


Company Description:

At Greatify.ai, we lead the transformation of educational institutions with state-of-the-art AI-driven solutions. Serving 100+ institutions globally, our mission is to unlock their full potential by enhancing learning experiences and streamlining operations. Join us to empower the future of education with innovative technology.


Job Description:

We are looking for a QA Lead to own and drive the quality strategy across our product suite. This role combines hands-on automation expertise with team leadership, process ownership, and cross-functional collaboration.

As a QA Lead, you will define testing standards, guide the QA team, ensure high test coverage across web and mobile platforms, and act as the quality gatekeeper for all releases.


Key Responsibilities:

● Leadership & Ownership

  • Lead and mentor the QA team, including automation and manual testers
  • Define QA strategy, test plans, and quality benchmarks across products
  • Own release quality and provide go/no-go decisions for deployments
  • Collaborate closely with Engineering, Product, and DevOps teams

● Automation & Testing

  • Oversee and contribute to automation using Playwright (Python) for web applications
  • Guide mobile automation efforts using Appium (iOS & Android)
  • Ensure comprehensive functional, regression, integration, and smoke test coverage
  • Review automation code for scalability, maintainability, and best practices
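A concrete flavour of the API-test coverage mentioned above: regression suites are often built on small assertion helpers so each test stays one line. The endpoint shape and field names below are invented for the demo, not taken from any Greatify product.

```python
# Minimal response-assertion helper of the kind an API regression suite
# builds on; REQUIRED_FIELDS is a hypothetical contract for a user record.

REQUIRED_FIELDS = {"id": int, "email": str, "active": bool}

def check_user_response(status, payload, expected_status=200):
    """Return a list of failure messages; an empty list means the check passed."""
    failures = []
    if status != expected_status:
        failures.append(f"status {status} != {expected_status}")
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in payload:
            failures.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            failures.append(
                f"{field}: expected {ftype.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return failures
```

Collecting all failures instead of raising on the first one gives reviewers the full picture of a broken response in a single CI run.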

● Process & Quality Improvement

  • Establish and improve QA processes, documentation, and reporting
  • Drive shift-left testing and early defect detection
  • Ensure API testing coverage and support performance/load testing initiatives
  • Track QA metrics, defect trends, and release health

● Stakeholder Collaboration

  • Work with Product Managers to understand requirements and define acceptance criteria
  • Communicate quality risks, timelines, and test results clearly to stakeholders
  • Act as the single point of accountability for QA deliverables


Skills & Qualifications:

● Required Skills

  • Strong experience in QA leadership or senior QA roles
  • Proficiency in Python-based test automation
  • Hands-on experience with Playwright for web automation
  • Experience with Appium for mobile automation
  • Strong understanding of REST API testing (Postman / automation scripts)
  • Experience integrating tests into CI/CD pipelines (GitLab CI, Jenkins, etc.)
  • Solid understanding of SDLC, STLC, and Agile methodologies

● Good to Have

  • Exposure to performance/load testing tools (Locust, JMeter, k6)
  • Experience in EdTech or large-scale transactional systems
  • Knowledge of cloud-based environments and release workflows.
Read more
Metron Security Private Limited
Prathamesh Shinde
Posted by Prathamesh Shinde
Pune
3 - 5 yrs
₹4L - ₹10L / yr
Python

Job Description:


We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!


Location - Pune, Baner.

Interview Rounds - In Office


Key Responsibilities:

Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang

Develop and maintain clean and scalable code following best practices

Apply Object-Oriented Programming (OOP) concepts in real-world development

Collaborate with front-end developers, QA, and other team members to deliver high-quality features

Debug, optimize, and improve existing systems and codebase

Participate in code reviews and team discussions

Work in an Agile/Scrum development environment


Required Skills:

Strong experience in Python or Golang (working knowledge of both is a plus)


Good understanding of OOP principles

Familiarity with RESTful APIs and back-end frameworks

Experience with databases (SQL or NoSQL)

Excellent problem-solving and debugging skills

Strong communication and teamwork abilities


Good to Have:

Prior experience in the security industry

Familiarity with cloud platforms like AWS, Azure, or GCP

Knowledge of Docker, Kubernetes, or CI/CD tools

Read more
Bengaluru (Bangalore)
0 - 2 yrs
₹6L - ₹8L / yr
Python
FastAPI
Go Programming (Golang)
PostgreSQL
MySQL
+1 more

About the Role

We are looking for a motivated and enthusiastic Backend Engineer (Fresher) who is eager to learn and grow as a software engineer. In this role, you will work closely with experienced team members to build backend services, understand real-world system design, and strengthen your core engineering skills.


Key Responsibilities

  • Assist in developing backend features and services using Python
  • Write clean, readable, and maintainable code following best practices
  • Apply basic data structures and algorithms to solve programming problems
  • Work with relational databases to store, retrieve, and update data
  • Learn and use caching systems and message queues as part of backend workflows
  • Build and consume REST APIs

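The HTTP request-response flow this role works with can be seen end-to-end using only the standard library. The `/users` resource and its data are invented for the demo; real services would use a framework like FastAPI or Django.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory "users" resource invented for the demo.
USERS = {1: {"id": 1, "name": "Asha"}}

class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Tiny router: GET /users/<id> returns one record as JSON.
        try:
            user_id = int(self.path.rstrip("/").rsplit("/", 1)[-1])
            body = json.dumps(USERS[user_id]).encode()
            status = 200
        except (ValueError, KeyError):
            body = b'{"error": "not found"}'
            status = 404
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

def serve_in_background():
    """Start the server on a free port and return it."""
    server = HTTPServer(("127.0.0.1", 0), UserHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve_in_background()
    url = f"http://127.0.0.1:{srv.server_address[1]}/users/1"
    with urllib.request.urlopen(url) as resp:
        print(resp.status, json.loads(resp.read()))
    srv.shutdown()
```

Every piece of the listed fundamentals appears here: the HTTP verb and path (request), the status code and headers (response), and JSON as the payload format.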

Required Skills & Qualifications

  • Good understanding of Python fundamentals
  • Basic knowledge of data structures and problem solving
  • Familiarity with SQL and relational databases (PostgreSQL/MySQL)
  • Familiarity with Git and version control systems
  • Understanding of basic backend concepts such as HTTP, APIs, and request–response flow
  • Willingness to learn new technologies and adapt quickly


Good to Have

  • Basic exposure to caching concepts or tools like Redis
  • Introductory knowledge of message brokers (Kafka, RabbitMQ, etc.)
  • Awareness of GenAI concepts or tools (LLMs, APIs) is a plus


What we look for

  • Strong learning attitude and curiosity
  • Logical thinking and problem-solving mindset
  • Ability to collaborate effectively in a team environment
  • Interest in building real-world backend systems

Read more
Wokelo AI
Ishvika Dwivedi
Posted by Ishvika Dwivedi
Remote only
0 - 1 yrs
₹7L - ₹10L / yr
Python
Django
SaaS
Natural Language Processing (NLP)
Large Language Models (LLM)
+1 more

About Wokelo:


Wokelo is an LLM agentic platform for investment research and decision making. We automate complex research and analysis tasks traditionally performed by humans. Our platform is leveraged by leading Private Equity firms, Investment Banks, Corporate Strategy teams, Venture Capitalists, and Fortune 500 companies.


With our proprietary agentic technology and state-of-the-art large language models (LLMs), we deliver rich insights and high-fidelity analysis in minutes—transforming how financial decisions are made.


Headquartered in Seattle, we are a global team backed by renowned venture funds and industry leaders. As we rapidly expand across multiple segments, we are looking for passionate individuals to join us on this journey.


Requirements:


  • 0-1 years of experience as a Software Developer.
  • Bachelor’s or Master’s degree in Computer Science or related field.
  • Proficiency in Python with strong experience in Django Rest Framework.
  • Hands-on experience with Django ORM.
  • Ability to learn quickly and adapt to new technologies.
  • Strong problem-solving and analytical skills.
  • Knowledge of NLP, ML models, and related engineering practices (preferred).
  • Familiarity with LLMs, RLHF, transformers, embeddings (a plus).
  • Prior experience in building or scaling a SaaS platform (a plus).
  • Strong attention to detail with experience integrating testing into development workflows.


Key Responsibilities:


  • Develop, test, and maintain scalable backend services and APIs using Python (Django Rest Framework).
  • Work with Django ORM to build efficient database-driven applications.
  • Collaborate with cross-functional teams to design and implement features that enhance the Wokelo platform.
  • Contribute to NLP engineering and ML model development to power GenAI solutions (preferred but not mandatory).
  • Ensure testing and code quality are embedded into the development process.
  • Research and adopt emerging technologies, providing innovative solutions to complex problems.
  • Support the transition of prototypes into production-ready features on our SaaS platform.
  • Perform ad hoc tasks as assigned by your manager.


Why Join Us?


  • Opportunity to work on a first-of-its-kind Generative AI SaaS platform.
  • A steep learning curve in a fast-paced, high-growth startup environment.
  • Exposure to cutting-edge technologies in NLP, ML models, LLM Ops, and DevOps.
  • Collaborative culture with global talent and visionary leadership.
  • Full health coverage, flexible time-off, and remote work culture.


Read more
Bengaluru (Bangalore)
2 - 4 yrs
₹21L - ₹27L / yr
Java
Python

Strong Backend Engineer Profiles

Mandatory (Experience 1) – Must have 2+ years of hands-on Backend Engineering experience building production-grade systems in a B2B SaaS or product environment

Mandatory (Experience 2) – Must have strong backend development experience using at least one backend framework such as FastAPI / Django (Python), Spring (Java), or Express (Node.js)

Mandatory (Experience 3) – Must have a solid understanding of backend fundamentals, including API development, service-oriented architecture, data structures, algorithms, and clean coding practices

Mandatory (Experience 4) – Must have strong experience working with databases (SQL and/or NoSQL), including efficient data modeling and query optimization

Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs and backend services, including integrations with external systems (CRMs, payment gateways, ERPs, data platforms, etc.)

Mandatory (Experience 6) – Must have experience working in cloud-based environments (AWS / GCP / Azure) and be familiar with Git-based collaborative development workflows

Mandatory (Domain) – Experience with financial systems, billing platforms, fintech applications, or SaaS revenue-related workflows is highly preferred (a fintech background is a strong plus)

Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
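The "efficient data modeling and query optimization" requirement above can be sketched with the stdlib `sqlite3` module; the table, columns, and figures are invented for this illustration, and a production system would apply the same idea to its own database.

```python
import sqlite3

# Illustrative schema for a billing-style workload.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        amount_cents INTEGER NOT NULL,
        issued_at TEXT NOT NULL
    );
    -- An index on the filter column turns the per-customer query below
    -- into an index lookup instead of a full table scan.
    CREATE INDEX idx_invoices_customer ON invoices(customer_id);
""")

def customer_total(customer_id):
    """Sum a customer's invoices in cents (0 if they have none)."""
    row = conn.execute(
        "SELECT COALESCE(SUM(amount_cents), 0) FROM invoices WHERE customer_id = ?",
        (customer_id,),
    ).fetchone()
    return row[0]
```

The design choice being illustrated: model money as integer cents to avoid floating-point drift, and index exactly the columns your hot queries filter on.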

Preferred

Preferred (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred

Read more
Poshmark

Reshika Mendiratta
Posted by Reshika Mendiratta
Chennai
4+ yrs
Up to ₹32L / yr (varies)
Test Automation (QA)
Software Testing (QA)
Python
Java
Appium
+5 more

About Poshmark

Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.


About the role

We seek a Senior Software Development Engineer In Test (Sr. SDET) who will collaborate with stakeholders to define comprehensive test strategies and partner with developers, product managers, and QA teams to design, develop, and maintain robust automated testing frameworks and solutions.

You'll greatly impact the quality of Poshmark's growing products and services by creating the next generation of tools and frameworks that make our users and developers more productive. You will influence better software design, promote proper engineering practice, bug prevention strategies, testability, accessibility, privacy, and other advanced quality concepts across products.


Responsibilities

  • Test Harnesses and Infrastructure
  • Design, implement, and maintain test harnesses and scalable testing infrastructure by writing high-quality code.
  • Automation Frameworks
  • Build and extend automation frameworks using tools like Selenium and Appium, writing clean, reusable, and efficient code to enable reliable test execution.
  • Product Quality Monitoring
  • Actively monitor product development and usage at all stages, using custom scripts or tools to identify areas for quality improvement.
  • Collaborative Development
  • Partner with developers to write testable code, identify potential issues early in the development cycle, and integrate testing into CI/CD pipelines.
  • Metrics and Reporting
  • Create and maintain automated solutions for tracking and reporting quality metrics, enabling data-driven decision-making.
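One pattern behind "clean, reusable" automation frameworks like those named above is the Page Object: tests talk to intent-revealing methods, and raw driver calls live in one place. The selectors and the stub driver below are invented for the demo; in practice `driver` would be a real Selenium or Appium session.

```python
# Page Object pattern: locator changes touch a single class, and tests
# stay one line per user action. "driver" is any object exposing
# find(selector) -> element with .type()/.click().

class LoginPage:
    USER_FIELD = "#username"
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find(self.USER_FIELD).type(user)
        self.driver.find(self.PASS_FIELD).type(password)
        self.driver.find(self.SUBMIT_BTN).click()

class StubElement:
    def __init__(self, log, selector):
        self.log, self.selector = log, selector

    def type(self, text):
        self.log.append((self.selector, "type", text))

    def click(self):
        self.log.append((self.selector, "click", None))

class StubDriver:
    """Records every interaction so a unit test can assert on the flow."""
    def __init__(self):
        self.log = []

    def find(self, selector):
        return StubElement(self.log, selector)
```

Because the page object depends only on the `find` interface, the same test logic runs against a stub in unit tests and a real browser in CI.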

6-Month Accomplishments

  • Enhance Automation Framework
  • Collaborate with development teams to design and add new features to the automation framework, ensuring it evolves to meet project needs.
  • Run and Debug Automation Tests
  • Perform regular automation tests across multiple platforms, debug and resolve issues, and ensure smooth execution of regression tests to maintain product stability.
  • Improve Product Quality Metrics
  • Identify and resolve high-priority defects, reducing production issues and improving quality metrics such as defect density and cycle time.

12+ Month Accomplishments

  • Build Testing Infrastructure
  • Set up scalable test harnesses and integrate them with CI/CD pipelines to enable continuous testing and faster feedback loops.
  • Knowledge Sharing and Collaboration
  • Mentor team members in test automation best practices, introduce efficient coding standards, and foster cross-functional collaboration for seamless delivery.
  • Optimize Test Efficiency
  • Lead efforts to improve test execution time by optimizing automation scripts and implementing parallel testing, resulting in faster feedback cycles and more efficient releases.

Qualifications

  • 4 to 7 years of solid programming and scripting experience with languages such as Python and Java.
  • Experienced in developing scalable test automation systems for web, mobile, and APIs using frameworks such as Appium, Selenium, and WebDriver.
  • Strong time management and prioritization abilities, capable of working both independently and in a collaborative team environment.
  • Proficient in writing clean, maintainable, and efficient code, with strong debugging skills.
  • Hands-on experience with tools such as Jira, Confluence, GitHub, Unix commands, and Jenkins.
  • Skilled in all stages of automation, including GUI testing, integration testing, and stress testing.
Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Bengaluru (Bangalore)
2 - 4 yrs
₹19L - ₹27L / yr
Python
Java

Strong Backend Engineer Profiles

Mandatory (Experience 1) – Must have 2+ years of hands-on Backend Engineering experience building production-grade systems in a B2B SaaS or product environment

Mandatory (Experience 2) – Must have strong backend development experience using at least one backend framework such as FastAPI / Django (Python), Spring (Java), or Express (Node.js)

Mandatory (Experience 3) – Must have a solid understanding of backend fundamentals, including API development, service-oriented architecture, data structures, algorithms, and clean coding practices

Mandatory (Experience 4) – Must have strong experience working with databases (SQL and/or NoSQL), including efficient data modeling and query optimization

Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs and backend services, including integrations with external systems (CRMs, payment gateways, ERPs, data platforms, etc.)

Mandatory (Experience 6) – Must have experience working in cloud-based environments (AWS / GCP / Azure) and be familiar with Git-based collaborative development workflows

Mandatory (Domain) – Experience with financial systems, billing platforms, fintech applications, or SaaS revenue-related workflows is highly preferred (a fintech background is a strong plus)

Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D

Read more
CAW.Tech

Ranjana Singh
Posted by Ranjana Singh
Bengaluru (Bangalore)
2 - 3 yrs
Best in industry
Apache Airflow
Azkaban
Amazon Web Services (AWS)
Python
Pipeline management
+7 more

Responsibilities:

  • Design, develop, and maintain efficient and reliable data pipelines.
  • Identify and implement process improvements, automating manual tasks and optimizing data delivery.
  • Build and maintain the infrastructure for data extraction, transformation, and loading (ETL) from diverse sources using SQL and AWS cloud technologies.
  • Develop data tools and solutions to empower our analytics and data science teams, contributing to product innovation.
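Workflow managers like Airflow, Luigi, and Azkaban all reduce to executing a dependency graph of tasks. The stdlib `graphlib` module (Python 3.9+) shows the core scheduling idea; the task names and graph below are invented for illustration.

```python
from graphlib import TopologicalSorter

# Hypothetical ETL task graph: each task maps to the set of tasks it
# depends on, mirroring how a DAG-based workflow tool orders a run.
PIPELINE = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
}

def run_order(graph):
    """Return one valid execution order (dependencies always come first)."""
    return list(TopologicalSorter(graph).static_order())
```

Real schedulers add retries, backfills, and parallel execution of independent branches (here, the two extracts could run concurrently), but the topological ordering is the invariant they all preserve.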


Qualifications:

Must Have:

  • 2-3 years of experience in a Data Engineering role.
  • Familiarity with data pipeline and workflow management tools (e.g., Airflow, Luigi, Azkaban).
  • Experience with AWS cloud services.
  • Working knowledge of object-oriented/functional scripting in Python
  • Experience building and optimizing data pipelines and datasets.
  • Strong analytical skills and experience working with structured and unstructured data.
  • Understanding of data transformation, data structures, dimensional modeling, metadata management, schema evolution, and workload management.
  • A passion for building high-quality, scalable data solutions.


Good to have:

  • Experience with stream-processing systems (e.g., Spark Streaming, Flink).
  • Working knowledge of message queuing, stream processing, and scalable data stores.
  • Proficiency in SQL and experience with NoSQL databases like Elasticsearch and Cassandra/MongoDB.


  • Experience with big data tools such as HDFS/S3, Spark/Flink, Hive, HBase, Kafka/Kinesis.

Read more
Oceano Apex
Neeraj Dutt
Posted by Neeraj Dutt
Remote only
5 - 7 yrs
₹17L - ₹22L / yr
Python
Work in process

*Job description:*


*Company:* Innovative Fintech Start-up


*Location:* On-site in Gurgaon, India


*Job Type:* Full-Time


*Pay:* ₹100,000.00 - ₹150,000.00 per month


*Experience Level:* Senior (7+ years required)


*About Us*


We are a dynamic Fintech company revolutionizing the financial services landscape through cutting-edge technology. We're building innovative solutions to empower users in trading, market analysis, and financial compliance. As we expand, we're seeking a visionary Senior Developer to pioneer and lead our brand-new tech team from the ground up. This is an exciting opportunity to shape the future of our technology stack and drive mission-critical initiatives in a fast-paced environment.


*Role Overview*


As the Senior Developer and founding Tech Team Lead, you will architect, develop, and scale our core systems while assembling and mentoring a high-performing team. You'll work on generative AI-driven applications, integrate with financial APIs, and ensure robust, secure platforms for trading and market data. This role demands hands-on coding expertise combined with strategic leadership to deliver under tight deadlines and high-stakes conditions.


*Key Responsibilities*


Design, develop, and deploy scalable backend systems using Python as the primary language.


Lead the creation of a new tech team: recruit, mentor, and guide junior developers to foster a collaborative, innovative culture.


Integrate generative AI technologies (e.g., Claude from Anthropic, OpenAI models) to enhance features like intelligent coding assistants, predictive analytics, and automated workflows.


Solve complex problems in real-time, optimizing for performance in mission-critical financial systems.


Collaborate with cross-functional teams to align tech strategies with business goals, including relocation planning to Dubai.


Ensure code quality, security, and compliance in all developments.


Thrive in a high-pressure environment, managing priorities independently while driving projects to completion.


*Required Qualifications:*


7+ years of software development experience; 5+ years in Python.


Proven hands-on experience with OpenAI and Anthropic (Claude) APIs in production systems.


Strong problem-solving skills and ability to operate independently in ambiguous situations.


Experience leading projects, mentoring developers, or building teams.


Bachelor’s/Master’s degree in Computer Science, Engineering, or equivalent experience.


Experience with financial markets, trading systems, or market data platforms.


Familiarity with MetaTrader integrations.


Cloud experience, especially Google Cloud Platform (GCP).


Knowledge of fintech compliance and trade reporting standards.


*What We Offer:*


Competitive salary and benefits package.


Opportunity to build and lead a team in a high-growth Fintech space.


A collaborative, innovative work culture with room for professional growth.


*Job Types:* Full-time, Permanent


*Work Location:* In person

Palcode.ai

at Palcode.ai

Team Palcode
Posted by Team Palcode
Remote only
1 - 2 yrs
₹4L - ₹5L / yr
Python
FastAPI
Flask

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work, cutting pre-construction workflows from weeks down to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.



Why Palcode.ai


Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data

High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday

Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions

Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment

Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions

Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software


Your Role:

  • Design and build our core AI services and APIs using Python
  • Create reliable, scalable backend systems that handle complex data
  • Help set up cloud infrastructure and deployment pipelines
  • Collaborate with our AI team to integrate machine learning models
  • Write clean, tested, production-ready code
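To make the "core AI services and APIs" bullet concrete, here is a minimal JSON-over-HTTP endpoint using only the standard library; FastAPI, which the stack lists, replaces this boilerplate with typed, declarative route handlers, and the `/health` route is just an illustrative placeholder:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A single read-only route; a real service would dispatch many.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, format, *args):
        pass  # keep the example quiet

server = HTTPServer(("127.0.0.1", 0), ApiHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())

server.shutdown()
print(payload)
```

The same request/response contract carries over directly to a FastAPI handler that returns `{"status": "ok"}`.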


You'll fit right in if:

  • You have 1 year of hands-on Python development experience
  • You're comfortable with full-stack development and cloud services
  • You write clean, maintainable code and follow good engineering practices
  • You're curious about AI/ML and eager to learn new technologies
  • You enjoy fast-paced startup environments and take ownership of your work


How we will set you up for success

  • You will work closely with the Founding team to understand what we are building.
  • You will be given comprehensive training on the tech stack, with the option of virtual training as well.
  • You will have a monthly one-on-one with the founders to discuss feedback.
  • A unique opportunity to learn from the best – we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, with access to rich talent to discuss and brainstorm ideas.
  • You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.


Location: Bangalore, Remote


Compensation: Competitive salary + Meaningful equity


If you get excited about solving hard problems that have real-world impact, we should talk.


All the best!!

Softcolon Technologies
Krina Patel
Posted by Krina Patel
Nikol, Ahmedabad
0 - 1 yrs
₹0 - ₹10000 / mo
Generative AI
FastAPI
Django
Python
Artificial Intelligence (AI)

Intern (GenAI - Python) - 3 Months Unpaid Internship

Job Title: GenAI Intern (Python) - 3 Months Internship (Unpaid)

Location: Ahmedabad (On-Site)

Duration: 3 Months

Stipend: Unpaid Internship

Company: Softcolon Technologies

About the Internship:

Softcolon Technologies is seeking a dedicated GenAI Intern who is eager to delve into real-world AI applications. This internship provides hands-on experience in Generative AI development, focusing on RAG systems and AI Agents. It is an ideal opportunity for individuals looking to enhance their skills in Python-based AI development through practical project involvement.

Eligibility:

  • Freshers or currently pursuing BE (IT/CE) or related field
  • Strong interest in Generative AI and real-world AI product development

Required Skills (Must Have):

  • Basic knowledge of Python
  • Basic understanding of Python frameworks such as FastAPI and Django
  • Familiarity with APIs and JSON
  • Submission of resume, GitHub Profile/Project Portfolio, and any AI/Python project links

What You Will Learn (Internship Goals):

You will gain hands-on experience in:

  • Fundamentals of Generative AI (GenAI)
  • Building RAG (Retrieval-Augmented Generation) applications
  • Working with Vector Databases and embeddings
  • Creating AI Agents using Python
  • Integrating LLMs such as OpenAI (GPT models), Claude, Gemini
  • Prompt Engineering + AI workflow automation
  • Building production-ready APIs using FastAPI
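The RAG flow named above (embedding, search, response) can be sketched end to end without any model: a bag-of-words counter stands in for a learned embedding, a plain list stands in for the vector database, and the response step is stubbed where an OpenAI/Claude/Gemini call would go. The documents are made up:

```python
import math
from collections import Counter

# Toy document store; in production these chunks live in a vector database.
docs = [
    "Invoices are payable within 30 days of receipt.",
    "Refunds are processed to the original payment method.",
    "Support is available on weekdays from 9am to 6pm.",
]

def embed(text):
    """Toy embedding: a bag-of-words Counter (a real system calls a model)."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return Counter(cleaned.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

index = [(doc, embed(doc)) for doc in docs]

def retrieve(query, k=1):
    """Search step: rank stored chunks by similarity to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

context = retrieve("When are invoices payable?")
# Response step (stubbed): prompt an LLM with f"Answer using: {context}".
print(context[0])
```

Swapping the toy `embed` for a real embedding model and the list for a vector database gives the production shape of the same pipeline.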

Responsibilities:

  • Assist in developing GenAI-based applications using Python
  • Support RAG pipeline implementation (embedding + search + response)
  • Work on API integrations with OpenAI/Claude/Gemini
  • Assist in building backend services using FastAPI
  • Maintain project documentation and GitHub updates
  • Collaborate with team members for tasks and daily progress updates

Selection Process:

  • Resume + GitHub portfolio screening
  • Short technical discussion (Python + basics of APIs)
  • Final selection by the team

Why Join Us?

  • Practical experience in GenAI through real projects
  • Mentorship from experienced developers
  • Opportunity to work on portfolio-level projects
  • Certificate + recommendation (based on performance)
  • Potential for a paid role post-internship (based on performance)

How to Apply:

Share your resume and GitHub portfolio link via:



Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Bengaluru (Bangalore)
6 - 10 yrs
₹60L - ₹70L / yr
Node.js
Python

Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.

Candidate must be from a product-based organization with a startup mindset.

Must be strong in one core backend language: Node.js, Go, Java, or Python.

Deep understanding of distributed systems, caching, high availability, and microservices architecture.

Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.

Strong command over system design, data structures, performance tuning, and scalable architecture.

Ability to partner with Product, Data, and Infrastructure teams and lead end-to-end backend roadmap execution.

Talent Pro
Bengaluru (Bangalore)
10 - 14 yrs
₹70L - ₹100L / yr
Terraform
Python

10–14 years of experience in software engineering, with strong emphasis on backend and data architecture for large-scale systems.


Proven experience designing and deploying distributed, event-driven systems and streaming data pipelines.


Expert proficiency in Go/Python, including experience with microservices, APIs, and concurrency models.


Deep understanding of data flows across multi-sensor and multi-modal sources, including ingestion, transformation, and synchronization.


Experience building real-time APIs for interactive web applications and data-heavy workflows.


Familiarity with frontend ecosystems (React, TypeScript) and rendering frameworks leveraging WebGL/WebGPU.


Hands-on experience with CI/CD, Kubernetes, Docker, and Infrastructure as Code (Terraform, Helm).

CAW.Tech

at CAW.Tech

Ranjana Singh
Posted by Ranjana Singh
Hyderabad
5 - 8 yrs
Best in industry
Python
Django
PostgreSQL
MySQL
FastAPI

We are looking for a Staff Engineer - Python to join one of our engineering teams at our office in Hyderabad.


What would you do?

  • Own end-to-end delivery of backend projects from requirements and LLDs to production.
  • Lead technical design and execution, ensuring scalability, reliability, and code quality.
  • Build and integrate chatbot and AI-driven workflows with third-party systems.
  • Diagnose and resolve complex performance and production issues.
  • Drive testing, documentation, and engineering best practices.
  • Mentor engineers and act as the primary technical point of contact for the project/client.


Who Should Apply?

  • 5+ years of hands-on experience building backend systems in Python.
  • Proficiency in building web-based applications using Django or similar frameworks.
  • In-depth knowledge of the Python stack and API-first system design.
  • Experience working with SQL and NoSQL databases including PostgreSQL/MySQL, MongoDB, ElasticSearch, or key-value stores.
  • Strong experience owning design, delivery, and technical decision-making.
  • Proven ability to lead and mentor engineers through reviews and execution.
  • Clear communicator with a high-ownership, delivery-focused mindset.


Nice to Have

  • Experience contributing to system-level design discussions.
  • Prior exposure to AI/LLM-based systems or conversational platforms.
  • Experience working directly with clients or external stakeholders.
  • Background in fast-paced product or service environments.
Read more
PGAGI
Bengaluru (Bangalore)
1 - 2 yrs
₹8L - ₹9L / yr
Go (Golang)
Python
Flask
Payment gateways
REST API

Job Title: Backend Engineer – Python (AI Backend)

Location: Bangalore, India

Experience: 1–2 Years

Job Description

We are looking for a Backend Engineer with strong Python skills and hands-on exposure to AI-based applications. The candidate will be responsible for developing scalable backend services and supporting AI-powered systems such as LLM integrations, AI agents, and RAG pipelines.

Key Responsibilities

  • Develop and maintain backend services using Python (FastAPI preferred)
  • Build and manage RESTful APIs for frontend and AI integrations
  • Support development of AI-driven features (LLMs, RAG systems, AI agents)
  • Design and maintain both monolithic and microservices architectures
  • Optimize database performance and backend scalability
  • Work with DevOps for Docker-based deployments

Required Skills

  • Strong experience in Python backend development
  • Hands-on experience with FastAPI / Django / Flask
  • Knowledge of REST APIs and microservices
  • Experience with AI applications (LLM usage, prompt engineering basics)
  • Database knowledge: MongoDB, PostgreSQL or MySQL
  • Experience with Docker and basic cloud platforms (AWS/GCP/Azure)
  • Hands-on experience with Redis for caching and in-memory storage
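The Redis requirement above usually means the cache-aside pattern: check the cache, fall through to the source on a miss, then store the result with a TTL. A sketch with an in-process dict standing in for Redis (with redis-py the equivalent calls are `r.get(key)` and `r.setex(key, ttl, value)`); the user-fetch function is a hypothetical stand-in:

```python
import time

cache = {}      # stands in for Redis: key -> (expires_at, value)
db_hits = 0     # counts how often we fall through to the "database"

def fetch_user_from_db(user_id):
    """Hypothetical stand-in for a slow database query."""
    global db_hits
    db_hits += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl=60):
    """Cache-aside read: try the cache, fall back to the DB, then populate."""
    key = f"user:{user_id}"
    entry = cache.get(key)                        # like r.get(key)
    if entry is not None and entry[0] > time.monotonic():
        return entry[1]                           # cache hit
    value = fetch_user_from_db(user_id)           # cache miss
    cache[key] = (time.monotonic() + ttl, value)  # like r.setex(key, ttl, value)
    return value

first = get_user(42)
second = get_user(42)   # served from the cache; the DB is hit only once
print(db_hits)
```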

Good to Have

  • Experience integrating payment gateways (Razorpay, Stripe, PayU, etc.)
  • Exposure to event-driven architectures using RabbitMQ, Kafka, or Redis Streams
  • Kubernetes
  • Understanding of model fine-tuning concepts


PGAGI
Bengaluru (Bangalore)
1 - 2 yrs
₹8L - ₹9L / yr
Go (Golang)
Python
Django
Flask
Payment gateways

Location: Bangalore, India

Experience: 1–2 Years

Job Description

We are looking for a Backend Engineer with strong Python skills and hands-on exposure to AI-based applications. The candidate will be responsible for developing scalable backend services and supporting AI-powered systems such as LLM integrations, AI agents, and RAG pipelines.

Key Responsibilities

  • Develop and maintain backend services using Python (FastAPI preferred)
  • Build and manage RESTful APIs for frontend and AI integrations
  • Support development of AI-driven features (LLMs, RAG systems, AI agents)
  • Design and maintain both monolithic and microservices architectures
  • Optimize database performance and backend scalability
  • Work with DevOps for Docker-based deployments

Required Skills

  • Strong experience in Python backend development
  • Hands-on experience with FastAPI / Django / Flask
  • Knowledge of REST APIs and microservices
  • Experience with AI applications (LLM usage, prompt engineering basics)
  • Database knowledge: MongoDB, PostgreSQL or MySQL
  • Experience with Docker and basic cloud platforms (AWS/GCP/Azure)
  • Hands-on experience with Redis for caching and in-memory storage

Good to Have

  • Experience integrating payment gateways (Razorpay, Stripe, PayU, etc.)
  • Exposure to event-driven architectures using RabbitMQ, Kafka, or Redis Streams
  • Kubernetes
  • Understanding of model fine-tuning concepts


suntekai
Kushi A
Posted by Kushi A
Remote only
0 - 1 yrs
₹10000 - ₹12000 / mo
Python
PostgreSQL
Data Visualization
Business Intelligence (BI)
SQL

Job Description: Data Analyst


About the Role

We are seeking a highly skilled Data Analyst with strong expertise in SQL/PostgreSQL, Python (Pandas), Data Visualization, and Business Intelligence tools to join our team. The candidate will be responsible for analyzing large-scale datasets, identifying trends, generating actionable insights, and supporting business decisions across marketing, sales, operations, and customer experience.

Key Responsibilities

Data Extraction & Management

  • Write complex SQL queries in PostgreSQL to extract, clean, and transform large datasets.
  • Ensure accuracy, reliability, and consistency of data across different platforms.

Data Analysis & Insights

  • Conduct deep-dive analyses to understand customer behavior, funnel drop-offs, product performance, campaign effectiveness, and sales trends.
  • Perform cohort, LTV (lifetime value), retention, and churn analysis to identify opportunities for growth.
  • Provide recommendations to improve conversion rates, average order value (AOV), and repeat purchase rates.

Business Intelligence & Visualization

  • Build and maintain interactive dashboards and reports using BI tools (e.g., Power BI, Metabase, or Looker).
  • Create visualizations that simplify complex datasets for stakeholders and management.

Python (Pandas)

  • Use Python (Pandas, NumPy) for advanced analytics.

Collaboration & Stakeholder Management

  • Work closely with product, operations, and leadership teams to provide insights that drive decision-making.
  • Communicate findings in a clear, concise, and actionable manner to both technical and non-technical stakeholders.

Required Skills

SQL/PostgreSQL

  • Complex joins, window functions, CTEs, aggregations, query optimization.

Python (Pandas & Analytics)

  • Data wrangling, cleaning, transformations, exploratory data analysis (EDA).
  • Libraries: Pandas, NumPy, Matplotlib, Seaborn.

Data Visualization & BI Tools

  • Expertise in creating dashboards and reports using Metabase or Looker.
  • Ability to translate raw data into meaningful visual insights.

Business Intelligence

  • Strong analytical reasoning to connect data insights with e-commerce KPIs.
  • Experience in funnel analysis, customer journey mapping, and retention analysis.

Analytics & E-commerce Knowledge

  • Understanding of metrics like CAC, ROAS, LTV, churn, and contribution margin.

General Skills

  • Strong communication and presentation skills.
  • Ability to work cross-functionally in fast-paced environments.
  • Problem-solving mindset with attention to detail.
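The cohort and retention analysis listed above reduces to grouping customers by their first-purchase month and counting who comes back; a minimal Pandas sketch over made-up order data:

```python
import pandas as pd

# Made-up order log: one row per order.
orders = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "c", "c", "c"],
    "month":    ["2024-01", "2024-02", "2024-01", "2024-03",
                 "2024-02", "2024-03", "2024-04"],
})

# Cohort = month of each customer's first order.
orders["cohort"] = orders.groupby("customer")["month"].transform("min")

# Retention grid: distinct active customers per cohort per month.
retention = (
    orders.groupby(["cohort", "month"])["customer"]
          .nunique()
          .unstack(fill_value=0)
)
print(retention)
```

Row `2024-01` reads: two customers first bought in January, one of them came back in February and one in March.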



Education: Bachelor’s degree in Data Science, Computer Science, or a related data-processing field.




AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
7 - 12 yrs
₹40L - ₹80L / yr
Machine Learning (ML)
Apache Spark
Apache Airflow
Python
Amazon Web Services (AWS)

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience with AWS cloud, including in recent roles
  • Company background: product companies preferred; exceptions for service-company candidates with strong MLOps and AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV attachment is mandatory
  • Please provide the CTC breakup (fixed + variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
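One concrete form the drift-monitoring responsibility above takes is the Population Stability Index (PSI): bin the reference feature distribution, compare live traffic against it, and alert past a threshold (values under 0.1 are commonly read as stable, above 0.2 as drifted). A self-contained sketch with synthetic data:

```python
import math
import random

def psi(reference, live, bins=10):
    """Population Stability Index between a reference sample and live traffic."""
    lo, hi = min(reference), max(reference)

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Map x to a bin over the reference range, clamping out-of-range values.
            i = int((x - lo) / (hi - lo) * bins)
            counts[min(max(i, 0), bins - 1)] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    ref, cur = fractions(reference), fractions(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(5000)]
same = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [random.gauss(1.0, 1.0) for _ in range(5000)]  # mean drifted by 1 sigma

print(round(psi(reference, same), 3))     # small: no alert
print(round(psi(reference, shifted), 3))  # large: fire a drift alert
```

In the platform described above, this statistic would be computed per feature on a schedule and exported to CloudWatch or Grafana for alerting.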

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
    • Compute/orchestration: EKS, ECS, EC2, Lambda
    • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
    • Workflow: MWAA/Airflow, Step Functions
    • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.


ProductNova
Vidhya Vijay
Posted by Vidhya Vijay
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹14L / yr
React.js
Microservices
C#
Python
MS SQL Server

Job Title: Senior Full Stack Developer

Experience: 5 to 7 years (minimum 5 years of full-stack development experience mandatory)

Location: Bangalore (Onsite)

 

About ProductNova:

ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.

Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.

 

What We Do

Product Development

We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.

Our end-to-end product development approach covers the full lifecycle:

● Product discovery and problem definition

● User research and product strategy

● Experience design and rapid prototyping

● AI-enabled engineering, testing, and platform architecture

● Product launch, adoption, and continuous improvement

From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.

 

Growth & Scale

For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.

For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.

Role Overview

We are looking for a Senior Full Stack Developer with strong expertise in frontend development using React JS, backend microservices architecture in C#/Python, and hands-on experience with AI-enabled development tools. The ideal candidate should be comfortable working in an onsite environment and collaborating closely with cross-functional teams to deliver scalable, high-quality applications.

 

Key Responsibilities:

• Develop and maintain responsive, high-performance frontend applications using React JS

• Design, develop, and maintain microservices-based backend systems using C#, Python

• Experienced in building Data Layer and Databases using MS SQL, Cosmos DB, PostgreSQL

• Leverage AI-assisted development tools (Cursor / GitHub Copilot) to improve coding efficiency and quality

• Collaborate with product managers, designers, and backend teams to deliver end-to-end solutions

• Write clean, reusable, and well-documented code following best practices

• Participate in code reviews, debugging, and performance optimization

• Ensure application security, scalability, and reliability

Mandatory Technical Skills:

• Strong hands-on experience in React JS (Frontend Coding) – 3+ yrs

• Solid experience in Microservices Architecture C#, Python – 3+ yrs

• Experience building Data Layer and Databases using MS SQL – 2+ yrs

• Practical exposure to AI-enabled development using Cursor or GitHub Copilot – 1yr

• Good understanding of REST APIs and system integration

• Experience with version control systems (Git) and Azure DevOps (ADO)

Good to Have:

• Experience with cloud platforms (Azure)

• Knowledge of containerization tools like Docker and Kubernetes

• Exposure to CI/CD pipelines

• Understanding of Agile/Scrum methodologies

Why Join ProductNova

● Work on real-world, high-impact products used at scale

● Collaborate with experienced product, engineering, and AI leaders

● Solve complex problems with ownership and autonomy

● Build AI-first systems, not experimental prototypes

● Grow rapidly in a culture that values clarity, execution, and learning

If you are passionate about building meaningful products, solving hard problems, and shaping the future of AI-driven software, ProductNova offers the environment and challenges to grow your career.

ProductNova
Vidhya Vijay
Posted by Vidhya Vijay
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹18L / yr
Large Language Models (LLM) tuning
Prompt engineering
Chatbot
Artificial Intelligence (AI)
Python

ROLE: AI/ML Senior Developer

Exp: 5 to 8 Years

Location: Bangalore (Onsite)

About ProductNova

ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.

Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.

Product Development

We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.

Our end-to-end product development approach covers the full lifecycle:

1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement

From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.

Growth & Scale

For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.

For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.

Role Overview:

We are seeking an experienced AI / ML Senior Developer with strong hands-on expertise in large language models (LLMs) and AI-driven application development. The ideal candidate will have practical experience working with GPT and Anthropic models, building and training B2B products powered by AI, and leveraging AI-assisted development tools to deliver scalable and intelligent solutions.

 

Key Responsibilities:

1. Model Analysis & Optimization

Analyze, customize, and optimize GPT and Anthropic-based models to ensure reliability, scalability, and performance for real-world business use cases.

2. AI Product Design & Development

Design and build AI-powered products, including model training, fine-tuning, evaluation, and performance optimization across development lifecycles.

3. Prompt Engineering & Response Quality

Develop and refine prompt engineering strategies to improve model accuracy, consistency, relevance, and contextual understanding.

4. AI Service Integration

Build, integrate, and deploy AI services into applications using modern development practices, APIs, and scalable architectures.

5. AI-Assisted Development Productivity

Leverage AI-enabled coding tools such as Cursor and GitHub Copilot to accelerate development, improve code quality, and enhance efficiency.

6. Cross-Functional Collaboration

Work closely with product, business, and engineering teams to translate business requirements into effective AI-driven solutions.

7. Model Monitoring & Continuous Improvement

Monitor model performance, analyze outputs, and iteratively improve accuracy, safety, and overall system effectiveness.

 

Qualifications:

1. Hands-on experience analyzing, developing, fine-tuning, and optimizing GPT and Anthropic-based large language models.

2. Strong expertise in prompt design, experimentation, and optimization to enhance response accuracy and reliability.

3. Proven experience building, training, and deploying chatbots or conversational AI systems.

4. Practical experience using AI-assisted coding tools such as Cursor or GitHub Copilot in production environments.

5. Solid programming experience in Python, with strong problem-solving and development fundamentals.

6. Experience working with embeddings, similarity search, and vector databases for retrieval-augmented generation (RAG).

7. Knowledge of MLOps practices, including model deployment, versioning, monitoring, and lifecycle management.

8. Experience with cloud environments such as Azure, AWS for deploying and managing AI solutions.

9. Experience with APIs, microservices architecture, and system integration for scalable AI applications.
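The embeddings and similarity-search requirement above boils down to top-k cosine retrieval over a matrix of stored vectors; a vector database runs the same query against an approximate index instead of a dense matrix. A NumPy sketch with made-up 4-dimensional vectors:

```python
import numpy as np

# Toy index: each row is the embedding of one stored chunk
# (a real system gets these from an embedding model, in hundreds of dims).
index = np.array([
    [0.9, 0.1, 0.0, 0.0],   # chunk 0
    [0.0, 1.0, 0.1, 0.0],   # chunk 1
    [0.1, 0.0, 0.9, 0.3],   # chunk 2
])

def top_k(query, index, k=2):
    """Return indices of the k stored vectors most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q                     # cosine similarity per row
    return np.argsort(scores)[::-1][:k]

query = np.array([0.8, 0.2, 0.0, 0.0])   # closest to chunk 0
hits = top_k(query, index)
print(hits)
```

In a RAG system, the rows would be chunk embeddings and the returned indices would select the context passed to the LLM.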

 

Why Join Us

• Build cutting-edge AI-powered B2B SaaS products

• Own architecture and technology decisions end-to-end

• Work with highly skilled ML and Full Stack teams

• Be part of a fast-growing, innovation-driven product organization

 

If you are a results-driven AI/ML Senior Developer with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
ProductNova
Vidhya Vijay
Posted by Vidhya Vijay
Bengaluru (Bangalore)
10 - 12 yrs
₹28L - ₹32L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
skill iconPython
skill iconNodeJS (Node.js)
skill icon.NET
+9 more

ROLE - TECH LEAD/ARCHITECT with AI Expertise

 

Experience: 10–15 Years

Location: Bangalore (Onsite)

Company Type: Product-based | AI B2B SaaS

 

About ProductNova

ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.

 

Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.

 

Product Development

We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation. Our end-to-end product development approach covers the full lifecycle:

 

1. Product discovery and problem definition

2. User research and product strategy

3. Experience design and rapid prototyping

4. AI-enabled engineering, testing, and platform architecture

5. Product launch, adoption, and continuous improvement

 

From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.

 

Growth & Scale

For early-stage companies and startups, we act as product partners: shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.

For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.

 

 

 

Role Overview

We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.

 

Key Responsibilities

• Lead the end-to-end architecture and development of AI-driven B2B SaaS products

• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems

• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure

• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms

• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices

• Ensure data privacy, security, compliance, and governance (SOC 2, GDPR, ISO, etc.) across the product

• Take ownership of application security, access controls, and compliance requirements

• Actively contribute hands-on through coding, code reviews, complex feature development, and architectural POCs

• Mentor and guide engineering teams, setting best practices for coding, testing, and system design

• Work closely with Product Management and Leadership to translate business requirements into technical solutions
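
As a minimal sketch of the tenant-isolation concern behind multi-tenant SaaS platforms: every data access must be scoped to the calling tenant so one customer can never read another's rows. The example below shows the idea with an in-memory list; all names are hypothetical, and a production system would enforce this at the database layer (e.g., row-level security) rather than in application code alone.

```python
# In-memory stand-in for a shared multi-tenant table.
RECORDS = [
    {"tenant_id": "acme",   "id": 1, "note": "a"},
    {"tenant_id": "acme",   "id": 2, "note": "b"},
    {"tenant_id": "globex", "id": 3, "note": "c"},
]

def list_records(tenant_id):
    """Return only rows belonging to the calling tenant.

    In a real service, tenant_id would come from the authenticated
    request context, never from user-supplied input.
    """
    return [r for r in RECORDS if r["tenant_id"] == tenant_id]

print([r["id"] for r in list_records("acme")])   # acme never sees globex rows
```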

 

Qualifications:

• 10–15 years of overall experience in software engineering and product development

• Strong experience building B2B SaaS products at scale

• Proven expertise in system architecture, design patterns, and distributed systems

• Hands-on experience with cloud platforms (Azure, AWS/GCP)

• Solid background in backend technologies (Python, .NET, Node.js, or Java) and modern frontend frameworks (e.g., React)

• Experience working with AI/ML teams to deploy and tune ML models in production environments

• Strong understanding of data security, privacy, and compliance frameworks

• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures

• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code

• Excellent communication and leadership skills with the ability to work cross-functionally

• Experience in AI-first or data-intensive SaaS platforms

• Exposure to MLOps frameworks and model lifecycle management

• Experience with multi-tenant SaaS security models

• Prior experience in product-based companies or startups

 

Why Join Us

• Build cutting-edge AI-powered B2B SaaS products

• Own architecture and technology decisions end-to-end

• Work with highly skilled ML and Full Stack teams

• Be part of a fast-growing, innovation-driven product organization

 

If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.

 

Read more
Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹5L / yr
DevOps
Windows Azure
CI/CD
MySQL
Python
+12 more

JOB DETAILS:

* Job Title: DevOps Engineer (Azure)

* Industry: Technology

* Salary: Best in Industry

* Experience: 2-5 years

* Location: Bengaluru, Koramangala

Review Criteria

  • Strong Azure DevOps Engineer profiles.
  • Must have at least 2 years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
  • Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
  • Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.
  • Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.
  • Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis.

 

Preferred

  • Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
  • Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
  • Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline

 

Role & Responsibilities

  • Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
  • Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
  • Implement Git branching strategies and automate release workflows.
  • Develop scripts using Bash, Python, or PowerShell for DevOps automation.
  • Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
  • Collaborate with dev and QA teams in an Agile/Scrum environment.
  • Maintain documentation, runbooks, and participate in root cause analysis.
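
To make the scripting and incident-handling bullets concrete, deployment and health-check scripts of this kind commonly wrap flaky operations (a service warming up, a transient network error) in a retry-with-exponential-backoff helper. The Python sketch below is illustrative only; every name in it is hypothetical, and the `sleep` function is injectable so the logic can be tested without real waiting.

```python
import time

def retry(fn, attempts=4, base_delay=0.5, backoff=2.0, sleep=time.sleep):
    """Call fn(), retrying on any exception with exponential backoff.

    Raises the last exception if all attempts fail.
    """
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            sleep(delay)          # wait before the next attempt
            delay *= backoff      # 0.5s, 1s, 2s, ...

# Simulate a health check that fails twice before the service is up.
calls = {"n": 0}
def health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

status = retry(health_check, attempts=5, sleep=lambda _: None)
print(status, "after", calls["n"], "attempts")
```

The same pattern applies whether the script is polling a deployment slot, a database, or a monitoring endpoint before marking a pipeline stage green.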

 

Ideal Candidate

  • 2–5 years of experience as an Azure DevOps Engineer.
  • Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
  • Experience with Microsoft Azure (OCI/AWS exposure is a plus).
  • Working knowledge of SQL Server, PostgreSQL, or Oracle.
  • Good scripting, troubleshooting, and communication skills.
  • Bonus: Docker, Kubernetes, Terraform, Ansible experience.
  • Comfortable with WFO (Koramangala, Bangalore).


Read more