50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

Pune, Chennai
3 - 6 yrs
₹6L - ₹12L / yr
Python
Data engineering
FastAPI
RESTful APIs
ORM
+3 more

Experience: 6+ Years

Location: Chennai & Pune

Work Model: Hybrid

Notice Period: Immediate Joiners Preferred


Role Overview

We are looking for a highly skilled Senior Python Engineer to design, develop, and scale robust backend systems and data-driven applications. The ideal candidate should have strong problem-solving skills, experience with modern Python frameworks, and exposure to cloud and emerging technologies like Generative AI and LLMs.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Python
  • Build and optimize RESTful APIs using Flask or FastAPI
  • Work on data manipulation, processing, and transformation using Python libraries
  • Collaborate with cross-functional teams to define and deliver high-quality solutions
  • Develop efficient, reusable, and reliable code with strong attention to performance
  • Implement containerization and orchestration using Docker and Kubernetes
  • Ensure application security, data protection, and compliance best practices
  • Manage code versioning using Git and follow CI/CD best practices
  • Contribute to cloud-based deployments and infrastructure
  • Explore and implement solutions using Generative AI and Large Language Models (LLMs)



Required Skills & Qualifications

  • 6+ years of hands-on experience in Python development
  • Strong understanding of Python fundamentals and problem-solving skills
  • Experience with data manipulation libraries (e.g., Pandas, NumPy)
  • Expertise in building REST APIs using Flask or FastAPI
  • Solid understanding of ORM frameworks (e.g., SQLAlchemy, Django ORM)
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Hands-on experience with Docker and Kubernetes
  • Strong knowledge of application security principles
  • Proficiency in Git and version control practices
  • Exposure to Generative AI concepts and working with LLMs
Service Co

Agency job
via Vikash Technologies by Rishika Teja
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 8 yrs
₹20L - ₹30L / yr
Python
Java
Scala
ETL
Google Cloud Platform (GCP)
+3 more

Hiring for Data Engineer


Exp : 3 - 8 yrs

Edu : Any Graduates

Work Location : Noida WFO


Skills :


3+ years of experience building data platforms, data engineering solutions, and data architecture.


Strong programming skills in Python, Java, or Scala.

Experience with cloud platforms (GCP preferred) and big data technologies (Hadoop, BigQuery, etc.).


Proven experience in designing and building data migration platforms, including planning, execution, and validation of data migrations.


Proficiency in SQL and experience with data modeling, ETL processes, and data warehousing solutions.


Knowledge of popular data migration tools, ETL technologies, and frameworks (Airflow, Apache Beam, KNIME, etc.).
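To illustrate the kind of transform-and-validate step such ETL pipelines contain, here is a small pandas sketch; the column names and validation rule are invented for the example, not taken from this role.

```python
import pandas as pd

def transform_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean a raw orders extract: drop rows failing validation,
    normalise types, and derive a total column."""
    df = raw.copy()
    # Validation: quantity and unit price must both be positive.
    df = df[(df["quantity"] > 0) & (df["unit_price"] > 0)]
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["total"] = df["quantity"] * df["unit_price"]
    return df.reset_index(drop=True)

raw = pd.DataFrame({
    "order_date": ["2024-01-05", "2024-01-06", "2024-01-07"],
    "quantity": [2, -1, 3],
    "unit_price": [10.0, 5.0, 4.0],
})
clean = transform_orders(raw)  # the row with quantity -1 is rejected
```

In an orchestrated pipeline (Airflow, Apache Beam), a step like this would run as one task, with the rejected rows routed to a quarantine table for review.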




Remote only
3 - 15 yrs
₹8L - ₹12L / yr
FastAPI
Python
RESTful APIs
SQL
NoSQL Databases
+5 more

Summary:

We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.

Job Type:

Full-Time Contractor (12 months)

Location:

Remote

Experience:

3+ years in backend development

Key Responsibilities:

  • Design, develop, and maintain robust backend services using Python and FastAPI.
  • Implement and manage Prisma ORM for database operations.
  • Build scalable APIs and integrate with SQL databases and third-party services.
  • Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
  • Collaborate with front-end developers and other team members to deliver high-quality web applications.
  • Ensure application performance, security, and reliability.
  • Participate in code reviews, testing, and deployment processes.

Required Skills:

  • Expertise in Python backend development with strong experience in FastAPI.
  • Solid understanding of RESTful API design and implementation.
  • Proficiency in SQL databases and ORM tools (preferably Prisma)
  • Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
  • Familiarity with CI/CD pipelines and containerization (Docker).
  • Knowledge of cloud architecture best practices.

Added Advantage:

  • Front-end development knowledge (React, Angular, or similar frameworks).
  • Exposure to AWS/GCP cloud platforms.
  • Experience with NoSQL databases.

Eligibility:

  • Minimum 3 years of professional experience in backend development.
  • Available for full-time engagement.
  • Please refrain from applying if you are currently engaged in other projects; we require dedicated availability.


Vivanet

Posted by Ashish Uikey
Remote only
8 - 12 yrs
Best in industry
RBAC
Microsoft Windows Azure
Integration
CI/CD
GitHub
+5 more

Job Title: Snowflake Platform Administrator

Duration: 6-12 months contract (could be extended upon performance)

Mode: Remote


About the Role

We are looking for a Snowflake Administrator to join our Snowflake Center of Excellence (COE) to manage, secure, and optimize the enterprise Snowflake data platform. The role will focus on platform administration, security governance, and automation while enabling data engineering, analytics, and business teams to effectively leverage Snowflake capabilities.


Key Responsibilities

• Administer and maintain the Snowflake platform, including warehouses, databases, schemas, users, roles, and resource monitors.

• Implement and manage Snowflake security and access governance, including RBAC, network policies, and network rules.

• Manage identity and access integration with Azure Active Directory (Azure AD), including role mapping with Azure AD groups.

• Monitor platform performance, usage, and cost to ensure efficient and reliable operations.

• Manage key Snowflake capabilities including data sharing (consumer and provider), cloning, data recovery, integrations (storage/API/notification), and performance optimization.

• Develop automation scripts using SQL and Python for administrative and operational tasks.

• Create and maintain CI/CD workflows using GitHub Actions for Snowflake deployments.

• Collaborate with data engineers, analysts, and architects to ensure secure and scalable data platform usage.

• Stay up to date with Snowflake product releases, new features, and platform best practices, and proactively evaluate their applicability to the organization.

• Contribute to standards, best practices, and governance frameworks within the Snowflake COE.

General Business

• Explore opportunities to leverage AI to improve platform automation and productivity.
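The SQL-and-Python automation responsibility above can be sketched as a small generator of RBAC grant statements. The role, database, and schema names are hypothetical, and in practice the generated statements would be executed through the Snowflake Python connector rather than just returned.

```python
def rbac_grants(role: str, database: str, schema: str) -> list[str]:
    """Generate typical read-only grant statements for a functional role.
    A real admin script would also handle FUTURE grants and execute
    these via snowflake-connector-python against the account."""
    fq = f"{database}.{schema}"
    return [
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {fq} TO ROLE {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {fq} TO ROLE {role};",
    ]

# Hypothetical names for illustration only.
statements = rbac_grants("ANALYST_RO", "SALES_DB", "PUBLIC")
```

Generating grants from code like this keeps RBAC changes reviewable in Git and deployable through the GitHub Actions workflows the role describes.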


Required Experience & Skills:

• 5-8 years of relevant experience in Snowflake administration and platform management.

• Solid understanding of Snowflake architecture, security, features, and performance optimization.

• Experience implementing RBAC, network policies, and network rules in Snowflake.

• Experience integrating Snowflake with Azure AD for role and access management via AD groups.

• Proficiency in SQL and Python scripting.

• Experience with GitHub and GitHub Actions workflow creation.

• Strong analytical and problem-solving skills.

• Functional domain: FMCH (Fast Moving Consumer Health).


Preferred Additional Skills:

• AI enthusiasm and automation expertise

• Understanding of modern data architectures, including data lakes and real-time processing

• Familiarity with BI tools such as Power BI, Tableau, and Looker


Education & Languages

• Bachelor’s degree in Computer Science, Information Technology, or a similar quantitative field of study.

• Fluent in English.

• Able to function effectively in teams with varied cultural backgrounds and areas of expertise.

NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹30L / yr
Databricks
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Supervised learning

What You’ll Be Doing:

  • Design and develop advanced AI/ML models to solve complex business problems
  • Work closely with cross-functional teams including data engineers and domain experts
  • Perform exploratory data analysis, data cleaning, and model development
  • Translate business challenges into data-driven solutions and actionable insights
  • Drive innovation in advanced analytics and AI/ML capabilities
  • Communicate model insights effectively to both technical and non-technical stakeholders

What We’re Looking For:

  • 5+ years of experience in AI/ML model development
  • Strong foundation in mathematics, probability, and statistics
  • Proficiency in Python and exposure to Azure Machine Learning / Databricks
  • Experience with supervised & unsupervised learning techniques
  • Domain exposure to Energy / Oil & Gas value chain (preferred)
  • Strong problem-solving, stakeholder management, and communication skills
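As a flavour of the supervised-learning work described above, here is a minimal scikit-learn sketch on synthetic data; the dataset is generated for illustration and stands in for a real business problem.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train a baseline classifier and measure held-out accuracy.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

In practice the same workflow runs on Azure Machine Learning or Databricks, with the baseline model serving as the benchmark that more complex models must beat.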


Gurugram, Vadodara
5 - 10 yrs
₹8L - ₹14L / yr
NodeJS (Node.js)
Python
React.js
NextJs (Next.js)
RESTful APIs
+14 more

Job Title : Full Stack Engineer (Node.js, React) – Crypto Trading Platform / Web3

Experience : 5+ Years

Location : Gurugram & Vadodara


Role Overview :

We are looking for a Full Stack Engineer (Node.js, React) – Crypto Trading Platform / Web3 with strong backend and frontend capabilities, along with hands-on experience or understanding of blockchain and cryptocurrency systems.

The ideal candidate should have experience working on high-performance applications, preferably in a crypto exchange or fintech environment.


Key Responsibilities :

  • Develop and maintain scalable backend services and APIs.
  • Build and optimize frontend interfaces for trading and user flows.
  • Work on real-time systems such as order books, pricing, and trade execution.
  • Integrate with blockchain networks, wallets, and third-party APIs.
  • Ensure security, performance, and reliability of the platform.
  • Collaborate with product, design, and DevOps teams for feature delivery.
  • Participate in system design, code reviews, and architecture discussions.
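The real-time order-book responsibility above can be sketched with a tiny in-memory matching structure. The fields and rules here are simplified assumptions for illustration, not the platform's actual matching engine.

```python
import heapq

class OrderBook:
    """Minimal price-level order book: best bid is the highest buy
    price, best ask the lowest sell price. Heaps keep best-price
    retrieval O(log n) as orders arrive."""

    def __init__(self):
        self._bids = []  # max-heap via negated price
        self._asks = []  # min-heap

    def add(self, side: str, price: float, qty: int):
        if side == "buy":
            heapq.heappush(self._bids, (-price, qty))
        else:
            heapq.heappush(self._asks, (price, qty))

    def best_bid(self):
        return -self._bids[0][0] if self._bids else None

    def best_ask(self):
        return self._asks[0][0] if self._asks else None

    def spread(self):
        if self._bids and self._asks:
            return self.best_ask() - self.best_bid()
        return None

book = OrderBook()
book.add("buy", 101.0, 5)
book.add("buy", 100.5, 3)
book.add("sell", 101.5, 2)
```

A production exchange would extend this with order matching, cancellation, and WebSocket fan-out of book updates to clients.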


Required Skills & Qualifications :

  • 5+ years of experience in full stack development.

Strong experience in :

  • Backend : Node.js / Python
  • Frontend : React.js / Next.js
  • Experience with REST APIs and microservices architecture.
  • Good understanding of databases (MongoDB / SQL).
  • Experience with Docker and cloud platforms (AWS preferred).
  • Strong understanding of system design and scalability.


Good to Have (Highly Preferred) :

  • Experience working with a crypto exchange or trading platform.
  • Understanding of blockchain fundamentals (Ethereum, Bitcoin, etc.).
  • Experience with wallet integrations, on-chain transactions.
  • Familiarity with WebSockets / real-time data systems.
  • Knowledge of security best practices in fintech/crypto.


Why Join Us ?

  • Work on a high-impact, real-world crypto exchange product.
  • Opportunity to build from early-stage to scale.
  • Fast-paced and ownership-driven environment.
Techjays

Posted by Sri Krishna Thangamani
Coimbatore
3 - 5 yrs
₹7L - ₹11L / yr
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
Vector database
Design patterns
Python
+9 more

Expected Date of Joining: Immediate / 30 days


What makes Techjays an inspiring place to work

At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.

We operate as part of the 1% of companies globally that can truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.

Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready. Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.

We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications. You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.

Years of Experience: 3 - 5 years

Location: Coimbatore

Key Skills:

  • Backend Development (Familiar): Python, Django/Flask, RESTful APIs, WebSockets
  • Cloud Technologies (Familiar): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins, GitLab CI, or GitHub Actions
  • Databases (Familiar): PostgreSQL, MySQL, MongoDB
  • AI/ML (Familiar): Basic understanding of machine learning concepts; assist in building and integrating agentic AI workflows; familiar with RAG and vector databases (Pinecone, ChromaDB, or others)
  • Tools: Git, Docker, Linux

Roles and Responsibilities:

  • Develop and maintain backend services using Python and Django/Flask under guidance.
  • Assist in building scalable and secure APIs and backend systems for AI-driven applications.
  • Write clean, efficient, and maintainable code following best practices.
  • Collaborate with cross-functional teams including frontend developers, data scientists, and product teams.
  • Participate in code reviews, debugging, and performance optimization.
  • Support integration of AI/ML components such as LLMs and RAG pipelines.
  • Continuously learn and improve technical skills in backend and AI technologies.

What We’re Looking for Beyond Skills:

  • Builder mindset: you think in systems, not just tickets
  • Ownership: you take features from idea to production
  • Structured thinking in ambiguous environments
  • Clear communication and collaborative approach
  • Ability to work in a fast-paced, evolving startup environment

What We Offer:

  • Competitive compensation
  • Paid holidays & flexible time off
  • Medical insurance (Self & Family up to ₹4 Lakhs per person)
  • Opportunity to work on production-grade AI systems
  • Exposure to global clients and high-impact projects
  • A culture that values clarity, integrity, and continuous growth

If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.
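To illustrate the retrieval step of the RAG pipelines mentioned above, here is a toy sketch that uses bag-of-words overlap in place of a real embedding model and vector database (production systems would use Pinecone, ChromaDB, or similar); the documents and query are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "refund policy for cancelled orders",
    "how to reset your account password",
    "shipping times for international orders",
]
top = retrieve("I forgot my password", docs)
```

In a real pipeline, the retrieved passages are then stuffed into the LLM prompt so the model's answer stays grounded in the corpus.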

Techlyticaly
Delhi
0 - 1 yrs
₹40000 - ₹40000 / mo
Data Structures
Object Oriented Programming (OOPs)
Algorithms
Python
C
+2 more

As a Software Developer Intern at Techlyticaly, you'll be responsible for solving problems and flexing your tech muscles to build amazing stuff. You'll work under the guidance of mentors and be responsible for developing high-quality, maintainable code modules that are extensible and meet the technical guidelines provided.


Responsibilities:

We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:

  • Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
  • Quickly ramp up on at least one product or technology of strategic importance to the organization, and become a true tech ninja.
  • Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
  • Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
  • Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
  • Consistently apply team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.


Qualifications:

We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:

  • You have, or are pursuing, a final-year Bachelor's degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
  • You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
  • You're familiar with object-oriented programming languages such as Java, Python, or C++, but you're also open to learning new languages and technologies that might not be as mainstream.
  • You have experience with testing, code, and design reviews.
  • You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
  • You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.


Duration:

  • This Internship Adventure’s duration is 6 months.
  • Based on your performance during the internship, there's a possibility of converting the role into a permanent position. After all, we all want to grow in life. 

This is an entry-level position, you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!

We are located in Delhi. This post may require relocation.

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
5 - 10 yrs
₹21L - ₹30L / yr
Python
Machine Learning (ML)
Generative AI (GenAI)
SQL
Deep Learning
+11 more

JOB DETAILS:

- Job Title: Lead I - Data Science - Python, Machine Learning, Spark 

- Industry: Global Digital Transformation Solutions Provider

- Experience: 5-10 years

- Job Location: Pune

- CTC Range: Best in Industry

 

JD for Data Scientist

Hands-on experience with data analysis tools:

Proficient in using tools such as Python and R for data manipulation, querying, and analysis.

Skilled in utilizing libraries like Pandas, NumPy, and Scikit-Learn to perform in-depth data analysis and modeling.

 

Skilled in machine learning and predictive analytics:

Expertise in building, training, and deploying machine learning models using frameworks such as TensorFlow and PyTorch.

Capable of performing tasks like regression, classification, clustering, and recommendation, leading to data-driven predictions and insights.

 

Expertise in big data technologies:

Proficient in handling large datasets using big data tools such as Spark.

Skilled in employing distributed computing and parallel processing techniques to ensure efficient data processing, storage, and analysis, enabling enterprise-level solutions and informed decision-making.

 

Skills: Python, SQL, Machine Learning, and Deep Learning, with mandatory expertise in Generative AI.

 

Must-Haves

5–9 years of relevant experience in Python, SQL, Machine Learning, and Deep Learning, with mandatory expertise in Generative AI

 


Notice Period: Immediate joiners only

Location: Pune

Bell Techlogix
Pemmraju VenkatVandita
Posted by Pemmraju VenkatVandita
Hyderabad
5 - 10 yrs
₹15L - ₹20L / yr
Generative AI
Microsoft Windows Azure
Python
SQL
Windows Azure
+1 more

The AI Data Engineer will be responsible for designing, building, and operating scalable data pipelines and curated data assets that power machine learning, generative AI, and intelligent automation solutions in an SLA-driven managed services environment. This role focuses on data ingestion, transformation, governance, and operational reliability across cloud and hybrid environments, enabling use cases such as knowledge retrieval (RAG), conversational AI, predictive analytics, and AI-assisted service management. The ideal candidate combines strong data engineering fundamentals with an understanding of AI workload requirements, including quality, lineage, privacy, and performance.

 

Key Responsibilities 

• Design, build, and operate production-grade data pipelines that support AI/ML and generative AI workloads in managed services environments

• Develop curated, analytics-ready datasets and data products to enable model training, grounding, feature generation, and AI search/retrieval

• Implement data ingestion patterns for structured and unstructured sources (APIs, databases, files, event streams, documents)

• Build and maintain transformation workflows with strong testing and validation

• Enable Retrieval-Augmented Generation (RAG) by preparing document corpora, chunking strategies, metadata enrichment, and vector indexing patterns

• Integrate data pipelines with application services

• Support ITSM and enterprise workflow data needs, including ServiceNow data integration, CMDB/incident data quality improvements, and automation enablement

• Implement observability for data pipelines (monitoring, alerting, SLAs/SLOs) and perform root cause analysis for pipeline failures or data quality incidents

• Apply data governance and security best practices

• Collaborate with ML Engineers, DevOps/SRE, and solution architects to operationalize end-to-end AI solutions

• Contribute to reusable patterns, templates, and standards within the Bell Techlogix AI Center of Excellence
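The chunking-strategy responsibility above can be sketched as a simple fixed-size chunker with overlap; the sizes are illustrative, and production corpora usually chunk on semantic boundaries (sentences, headings) instead of raw character counts.

```python
def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows for vector indexing.
    Overlap preserves context that would otherwise be lost when a
    sentence straddles a chunk boundary."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "abcdefghij" * 10  # 100-character stand-in document
chunks = chunk_text(doc, size=40, overlap=10)
```

Each chunk would then be embedded and written to the vector index; the overlap means the tail of one chunk reappears at the head of the next.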

 

Required Qualifications 

• Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience

• 5+ years of experience in data engineering, analytics engineering, or platform data operations

• Strong proficiency in SQL and Python; experience with data modeling and dimensional concepts

• Hands-on experience with Azure data services (e.g., Data Factory, Synapse, Databricks, Storage, Key Vault) or equivalent cloud tooling

• Experience building reliable pipelines with scheduling, dependency management, and automated testing/validation

• Experience supporting production data platforms with incident management, troubleshooting, and root cause analysis

• Understanding of data security, privacy, and governance principles in enterprise environments

 

Preferred Qualifications 

• Experience enabling AI/ML workloads: feature engineering, training data preparation, and integration with Azure Machine Learning

• Experience with unstructured data processing for generative AI

• Familiarity with vector databases or vector search and RAG patterns

• Experience with event streaming and messaging

• Familiarity with the ServiceNow data model and integration patterns (Table API, export, CMDB/ITSM reporting)

• Relevant certifications (Microsoft Azure Data Engineer, Azure AI Engineer, Databricks)

Simbian AI
Akanksha Sharan
Posted by Akanksha Sharan
Remote only
12 - 16 yrs
₹50L - ₹70L / yr
Python
Go Programming (Golang)
Java
Cyber Security

About Us

Simbian® is building an Agentic AI platform for cybersecurity. Founded by repeat successful security founders, we have gathered an excellent cohort of employees, partners, and customers. Our mission is to solve security using AI, and our core values are excellence, replication, and intellectual honesty.

Our promise is to make Simbian the best workplace of your career and we believe a small group of thoughtful passionate people can make all the positive difference in the world. To fuel our fast growth, we are seeking an exceptional candidate who shares our core values of excellence (being the world's best at our craft), replication (share your best ideas with others), and intellectual honesty (tell the truth even if it's bitter).

Our AI Agents automate security operations and provide our customers 10x leverage. Our customers include some of the world's largest companies.

Our initial use cases include:

• SOC alert triage and investigation

• Prioritization and classification of vulnerabilities

• AI-based threat hunting



As an Engineering Manager, you will lead a pod of highly skilled engineers responsible for building critical components of Simbian’s platform—from scalable backend services and data pipelines to integrations with security tools and novel AI-driven investigation engines. You’ll be responsible for driving execution, mentoring engineers, and shaping technical direction while working closely with product, AI/ML, and security teams.

This role is ideal for a hands-on leader who thrives in startup environments, is comfortable balancing execution with strategy, and can guide engineers to build reliable, secure, and scalable systems.



Responsibilities

• Lead and mentor a pod of backend, frontend, or platform engineers (depending on pod assignment: e.g., Integrations, Investigation Infra, Threat Hunting, etc.).

• Drive delivery of product and platform features aligned to quarterly OKRs

• Establish engineering best practices for code quality, observability, security, and reliability

• Collaborate with product managers and security SMEs to define technical scope, execution plans, and delivery timelines.

• Provide technical guidance in architecture decisions across areas such as:

  1. Scalable microservices
  2. Security product integrations (EDR, SIEM, CNAPP, etc.)
  3. Data pipelines (historical + real-time event ingestion)
  4. AI/ML systems for reasoning and automation

• Recruit, develop, and retain top engineering talent.

• Ensure pods maintain a high bar for innovation, execution, and collaboration.


Requirements

• 12+ years of professional software engineering experience in the security domain, with at least 3+ years leading or managing engineering teams.

• Strong background in building scalable backend systems (Python, Go, or Java preferred).

• Experience with cloud-native architectures (Kubernetes, Postgres, vector databases, OpenSearch, etc.).

• Familiarity with data pipelines (ETL/ELT, orchestration frameworks like Dagster/Airflow, streaming systems).

• Exposure to security products and data (SIEM, EDR, CNAPP, vulnerability management) is a strong plus.

• Track record of leading pods/teams to deliver complex technical projects with measurable outcomes.

• Strong communication skills, with the ability to work cross-functionally with product, AI/ML, and security teams.

• Startup mindset: bias for execution, ability to operate with ambiguity, and eagerness to wear multiple hats.


Nice to Have

• Experience with AI/ML pipelines, LLM integration, or security-focused AI applications.

• Knowledge of SOC processes, MITRE ATT&CK, or incident response workflows.

• Contributions to open-source projects in data, security, or AI.

• Previous experience scaling teams at an early-stage startup.


Benefits

• Competitive salary commensurate with experience

• Generous early-stage equity with significant upside potential

• Annual performance bonuses tied to company and individual goals

Budget: under ₹90L annually

Quantiphi

Posted by Nikita Sinha
Mumbai, Bengaluru (Bangalore)
5 - 10 yrs
Up to ₹45L / yr (varies)
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)
+1 more

This role is responsible for architecting and implementing the Agentic capabilities of the PHI ecosystem. The engineer will lead the development of multi-agent systems, enabling seamless interoperability between AI agents, internal tools, and external services.

The position requires a strong focus on AI safety, secure agent orchestration, and tool-connected AI systems capable of executing complex workflows within the health insurance domain.


1. Agent Orchestration

  • Build and manage autonomous AI agents using Agent Development Kit (ADK) and Vertex AI Agent Engine.
  • Design and implement multi-agent workflows capable of handling complex tasks.

2. Interoperability

  • Implement the Model Context Protocol (MCP) to enable connectivity between:
  • AI agents
  • Internal PHI tools
  • External services and APIs.

3. Multimodal Development

  • Build real-time, bidirectional audio applications using the Gemini Live API.
  • Integrate image generation models and support multimodal AI capabilities.

4. Safety Engineering

  • Implement AI safety layers to protect sensitive healthcare data.
  • Use Model Armor and Cloud DLP API to:
  • Sanitize prompts
  • Prevent exposure of PII/PHI data
  • Enforce secure AI interactions.

5. Agent-to-Agent (A2A) Communication

  • Configure remote agent connectivity using the A2A SDK.
  • Enable cross-agent collaboration and workflow orchestration.

Must-Have Skills

  • Advanced proficiency with Agent Development Kit (ADK).
  • Strong experience with Vertex AI Agent Engine.
  • Hands-on experience with Model Context Protocol (MCP).
  • Experience implementing Agent-to-Agent (A2A) workflows using the A2A SDK.
  • Expertise in Google Gen AI SDK for Python.
  • Experience building multimodal AI applications.
  • Proven experience implementing AI safety layers, including:
  • Model Armor
  • Cloud DLP API

Good-to-Have Skills (Foundation)

Data & Analytics

  • BigQuery optimization techniques, including:
  • Partitioning
  • Clustering
  • Denormalization for performance and cost optimization.

Streaming & Real-Time Pipelines

  • Experience building real-time data pipelines using:
  • Google Pub/Sub
  • BigQuery streaming pipelines
Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai
3 - 5 yrs
Up to ₹33L / yr (varies)
Agentic AI
Python
RESTful APIs
Google Vertex AI
Gemini (Google AI)
+1 more

We are seeking a Senior Machine Learning Engineer to support the development and deployment of advanced AI capabilities within the PHI ecosystem.

This role focuses on the execution of Generative AI tasks, including model integration and agent deployment. The candidate will be responsible for building RAG-based workflows and ensuring AI interactions remain grounded and accurate using Google Cloud AI tools.


Key Responsibilities

1. GenAI Integration

  • Develop and maintain integrations with Gemini 1.5 Pro and Flash models
  • Use the Google Gen AI SDK for Python to build and manage model integrations

2. Agent Deployment

  • Assist in deploying AI agents to Vertex AI Agent Engine
  • Work with the Agent Development Kit (ADK) for agent lifecycle management

3. RAG & Embeddings

  • Generate and manage text and multimodal embeddings
  • Support semantic search and Retrieval-Augmented Generation (RAG) pipelines

4. Testing & Quality

  • Run evaluation scripts to verify model output quality
  • Ensure models follow grounding and response accuracy guidelines

Must-Have Skills

  • Strong Python programming
  • Experience working with REST APIs
  • Hands-on experience with Vertex AI Studio
  • Experience working with Gemini APIs
  • Understanding of Agentic AI concepts
  • Familiarity with ADK CLI
  • Experience or understanding of RAG architecture
  • Knowledge of embedding generation

Good-to-Have Skills (Foundation):

BigQuery

  • Basic SQL knowledge
  • Experience with data loading
  • Ability to debug and troubleshoot queries

Data Streaming

  • Familiarity with Google Pub/Sub
  • Understanding of synthetic data generation

Visualization

  • Basic reporting and dashboards using Looker Studio
Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 10 yrs
Upto ₹40L / yr (Varies)
Python
RESTful APIs
Microservices

As a Backend Engineer, you will be a core member of the Platform Implementation Team, responsible for building the robust, scalable, and secure backend infrastructure for a multi-cloud enterprise Data & AI platform.


You will design and develop high-performance microservices, RESTful APIs, and event-driven architectures that serve as the backbone for enterprise-wide applications.

Working closely with Platform Engineers, Data Modelers, and UI teams, you will ensure seamless data flow between core business systems (CRM, ERP) and the platform, enabling the rollout of critical business services across multiple global Local Business Units (LBUs).



Backend Development

  • Design and develop scalable backend services and microservices
  • Build and maintain RESTful APIs for enterprise applications
  • Define and maintain API contracts using OpenAPI/Swagger
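Defining the contract first lets backend and UI teams code against the same schema. A minimal sketch of what such an OpenAPI contract might contain, using a hypothetical `/customers` endpoint (expressed here as a Python dict purely for illustration; in practice it would live in an `openapi.yaml` served alongside the API):

```python
# Hypothetical OpenAPI 3 contract for a single endpoint.
openapi_spec = {
    "openapi": "3.0.3",
    "info": {"title": "Platform API", "version": "1.0.0"},
    "paths": {
        "/customers": {
            "post": {
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "required": ["name", "email"],
                                "properties": {
                                    "name": {"type": "string"},
                                    "email": {"type": "string"},
                                },
                            }
                        }
                    }
                },
                "responses": {"201": {"description": "Created"}},
            }
        }
    },
}

def validate_body(spec, path, method, body):
    # Return the required fields the request body is missing,
    # according to the contract.
    schema = spec["paths"][path][method]["requestBody"]["content"][
        "application/json"]["schema"]
    return [f for f in schema.get("required", []) if f not in body]

missing = validate_body(openapi_spec, "/customers", "post", {"name": "Acme"})
```

Tooling such as Swagger UI and code generators consumes the same document, which is what makes the contract the single source of truth.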

Platform & System Integration

  • Enable seamless integration between enterprise systems (CRM, ERP) and the platform
  • Support data flow across multiple global business units

Event-Driven Architecture

  • Implement asynchronous processing and event-driven systems
  • Work with message brokers and streaming platforms

Cross-Functional Collaboration

  • Collaborate with platform engineers, data modelers, and frontend teams
  • Contribute to architecture discussions and backend design decisions

Must-Have Skills

Experience

  • 5–7 years of hands-on experience in backend software engineering
  • Experience building enterprise-grade backend systems

Core Programming

Strong proficiency in at least one backend language:

  • Python
  • Node.js
  • Java

Strong understanding of:

  • Object-oriented programming (OOP)
  • Functional programming principles

API & Microservices

  • Extensive experience building RESTful APIs
  • Experience designing microservices architectures
  • Ability to define API contracts using OpenAPI / Swagger

Cloud Infrastructure

Hands-on experience with cloud platforms:

  • Google Cloud Platform (GCP)
  • Microsoft Azure

Examples of services:

  • Cloud Functions
  • Cloud Run
  • Azure App Services

Database Management

Experience with both Relational and NoSQL databases

Relational:

  • PostgreSQL
  • Cloud SQL

NoSQL:

  • Schema design
  • Complex querying
  • Performance optimization

Event-Driven Architecture

Experience with asynchronous processing and message brokers:

  • GCP Pub/Sub
  • Apache Kafka
  • RabbitMQ

Security & Authentication

Strong understanding of:

  • OAuth 2.0
  • JWT authentication
  • Role-Based Access Control (RBAC)
  • Data encryption
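As one concrete example of these concepts, an HS256-signed JWT can be produced and verified with the standard library alone. This is a sketch for illustration only; production code should use a vetted library such as PyJWT:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # Base64url without padding, as JWTs require.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload, secret):
    # HS256 JWT: base64url(header) . base64url(payload) . base64url(signature)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token, secret):
    # Recompute the signature and compare in constant time; return the
    # claims on success, None on tampering or a wrong key.
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

token = sign_jwt({"sub": "user-42", "role": "admin"}, b"demo-secret")
claims = verify_jwt(token, b"demo-secret")
```

The verified `role` claim is then what an RBAC layer checks before authorizing a request.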

Software Engineering Best Practices

  • Writing clean, maintainable code
  • Version control using Git
  • Writing unit and integration tests
  • Familiarity with CI/CD pipelines
  • Containerization using Docker

Good-to-Have Skills

AI & LLM Integration

  • Experience integrating Generative AI models
  • Exposure to:
  • OpenAI
  • Vertex AI
  • LLM gateways
  • Retrieval-Augmented Generation (RAG)

Frontend Exposure

Basic familiarity with frontend frameworks such as:

  • React
  • Next.js
  • Angular

Understanding how backend APIs integrate with UI applications

Advanced Data Stores

Experience with:

  • Vector databases (Pinecone, Milvus)
  • Knowledge graphs

Domain Knowledge

  • Experience in Life Insurance or BFSI sector
  • Understanding of enterprise data governance and compliance standards
Vivanet

Ashish Uikey
Posted by Ashish Uikey
Mumbai
8 - 15 yrs
₹8L - ₹23L / yr
Data engineering
Microsoft Business Intelligence (MSBI)
SQL server
Microsoft SQL Server
MS SQLServer
+16 more

Project context


Summary

A new GIT platform will be created within our existing captive unit in India (CASPL) to operate selected activities of the Digital Centre of Excellence (DEC).


The Digital Centre of Excellence (DEC) manages a transverse portfolio of digital products and delivers IT services through its centres of excellence.

Activities encompass the development of reusable components (building blocks) and the development and maintenance of business solutions that draw on multiple areas of expertise.


The role requires professionals with strong technical competencies and notable experience running critical services in an investment-banking context.

Working in DEC requires the ability to collaborate extensively across geographies with other IT professionals and non-IT functions, as well as strong motivation to support the bank's digital transformation.


Key Responsibilities

•Design & Develop Microsoft BI solutions to answer business needs

•Support existing BI applications through evolutions and fixes, prioritized by urgency

•Visualize, interpret, and report data findings, including creating dynamic data reports

•Deliver code according to the specifications; follow code standards, versioning and branching

•Maintain a high standard of delivery quality

•Interact with business analysts in order to provide business focused solutions

•Interact with tech-leads, architects to ensure good design and code quality

•Work closely with the team, internal stakeholders and other cross functional teams

•Ensure documentation is up to date (technical design, deployment guide, release notes, etc.)

•Follow user acceptance testing and coordinate prioritization with the project manager

•Diagnose & resolve application/configuration/code level technical issues

•Create / amend necessary CI/CD pipelines using Azure DevOps

•Innovate and look for new solutions/components

•Build POCs to demonstrate new usage and present them to senior audiences

•Perform regular peer-reviews, maintain low technical debt, assess & upskill junior staff

•Automate recurrent processes wherever possible


Communication Key Internal Contacts

•GIT CASPL DEC – Operation manager

•GIT CASPL DEC – Squad leader

•GIT ISAP DEC – Product manager


Legal and Regulatory Responsibilities

•Comply with all applicable legal, regulatory and internal Compliance requirements, including, but not limited to, the local Compliance manual and the Financial Crime Policy. Complete any mandatory training in line with legal, regulatory and internal Compliance requirements.

•Maintain appropriate knowledge to ensure to be fully qualified to undertake the role. Complete all mandatory training as required to attain and maintain competence.

•Refrain from taking any steps which could lead to the removal of certification of fitness and properness to perform the role.

•Undertake all necessary steps to satisfy the annual certification process.

•Comply with all applicable conduct rules as prescribed by the relevant regulator.


ROLE REQUIREMENTS:

•Minimum 8+ years of experience with Microsoft BI stack components

•Experience and knowledge of SDLC or Agile development framework and methodologies

•Strong experience on data warehousing design, modelling and historization

•Experience with CI/CD practices using AzureDevOps


Degree holder in Computer Science or a related discipline, or equivalent relevant experience with MSBI technology. A total of 8+ years in the IT industry, with a minimum of 6 years of related working experience in a financial/banking environment.


Mandatory skills

•Experience with SQL Server (2017 / 2022)

•Experience in SQL query optimization; complex data retrievals and manipulations.

•Design and development of SSRS, PowerBI, ETL (SSIS)

•MDX/DAX, SSAS Tabular modelling

•C# experience for SSIS


Nice to have (Optional)

•Exposure to data processing pipelines using Python, PySpark, Airflow, etc.

•Cloud development using Kubernetes and microservices

•MPP databases such as Vertica

•Scripting using PowerShell

•Data visualization tools such as Cognos, Tableau, or Qlik


Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
Python
Generative AI
Microservices
RESTful APIs

We are seeking a highly skilled Senior Backend Developer with deep expertise in Python and FastAPI to join our team. This role focuses on building high-performance, scalable backend services capable of handling high request volumes while integrating advanced LLM technologies.


The ideal candidate will design robust distributed systems, implement efficient data storage solutions, and ensure enterprise-grade security within an Azure-based infrastructure. This is a great opportunity to work on AI/ML integrations and mission-critical applications requiring high performance and reliability.


Key Responsibilities:


Backend Development

  • Design and maintain high-performance backend services using Python and FastAPI
  • Implement advanced FastAPI features such as dependency injection, middleware, and async programming
  • Write comprehensive unit tests using pytest
  • Design and maintain Pydantic schemas

High-Concurrency Systems

  • Implement asynchronous code for high-volume request processing
  • Apply concurrency patterns and atomic operations to ensure efficient system performance

Data & Storage

  • Optimize MongoDB operations
  • Implement Redis caching strategies (TTL, performance tuning, caching patterns)
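The TTL-based caching pattern referenced above can be sketched in-process. Redis provides this natively via `SETEX`/`EXPIRE`; the class below only illustrates the expiry semantics, and the short TTL is for demonstration:

```python
import time

class TTLCache:
    """In-process cache with per-entry expiry, mimicking Redis TTL semantics."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("session:42", {"user": "a"})
hit = cache.get("session:42")      # within TTL -> cached value
time.sleep(0.06)
miss = cache.get("session:42")     # past TTL -> evicted, forces a reload
```

A cache-aside pattern wraps `get` with a fallback to the database on a miss, followed by a fresh `set`.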

Distributed Systems

  • Implement rate limiting, retry logic, failover mechanisms, and region routing
  • Build microservices and event-driven architectures
  • Work with EventHub, Blob Storage, and Databricks
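A minimal sketch of the retry-with-backoff piece of such resilience logic (delays and attempt counts here are illustrative; real systems would add jitter, retry budgets, and narrower exception handling):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    # Retry a flaky call with exponential backoff; re-raise only after
    # the final attempt fails.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # Stand-in for a downstream call that fails transiently twice.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)
```

Pairing this with idempotent downstream operations is what makes retries safe rather than duplicating work.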

AI/ML Integration

  • Integrate OpenAI API, Gemini API, and Claude API
  • Manage LLM integrations using LiteLLM
  • Optimize AI service usage within the Azure ecosystem

Security

  • Implement JWT authentication
  • Manage API keys and encryption protocols
  • Implement PII masking and data security mechanisms
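PII masking for well-formed fields can start as pattern substitution; a toy sketch (these regexes are illustrative only, and real detection typically combines patterns with NER models and field-level classification):

```python
import re

# Illustrative patterns: a simple email shape and a bare 10-digit number.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{10}\b")

def mask_pii(text):
    # Replace matches with placeholder tokens before logging or sending
    # text to an external service.
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

masked = mask_pii("Contact jane.doe@example.com or 9876543210 for details.")
```

Masking at the boundary (before logs and LLM calls) keeps raw PII out of systems that do not need it.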

Collaboration

  • Work with cross-functional teams on architecture and system design
  • Contribute to engineering best practices and technical improvements
  • Mentor junior developers where required

Must-Have Skills & Requirements

Experience

  • 7+ years of hands-on Python backend development
  • Bachelor’s degree in Computer Science, Engineering, or related field
  • Experience building high-traffic, scalable systems

Core Technical Skills

Python

  • Advanced knowledge of asynchronous programming, concurrency, and atomic operations

FastAPI

  • Expert-level experience with dependency injection, middleware, and async code

Testing

  • Strong experience with pytest and Pydantic schemas

Databases

  • Hands-on experience with MongoDB and Redis
  • Strong understanding of caching patterns, TTL, and performance optimization

Distributed Systems

  • Experience with rate limiting, retry logic, failover mechanisms, high concurrency processing, and region routing
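One common rate-limiting approach is the token bucket; a minimal in-process sketch (in a distributed deployment the bucket state would usually live in a shared store such as Redis, keyed per tenant or API key):

```python
import time

class TokenBucket:
    # Token bucket: up to `capacity` requests burst, refilled at
    # `refill_rate` tokens per second.
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_rate=0.0)  # no refill, to show rejection
decisions = [bucket.allow() for _ in range(3)]
```

With a nonzero `refill_rate`, rejected callers can be told when to retry via a `Retry-After` header.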

Microservices

  • Experience building microservices and event-driven systems
  • Exposure to EventHub, Blob Storage, and Databricks

Cloud

  • Strong experience working in Azure environments

AI Integration

  • Familiarity with OpenAI API, Gemini API, Claude API, and LiteLLM

Security

  • Implementation experience with JWT authentication, API keys, encryption, and PII masking

Soft Skills

  • Strong problem-solving and debugging skills
  • Excellent communication and collaboration
  • Ability to manage multiple priorities
  • Detail-oriented approach to code quality
  • Experience mentoring junior developers

Good-to-Have Skills

Containerization

  • Docker, Kubernetes (preferably within Azure)

DevOps

  • CI/CD pipelines and automated deployment

Monitoring & Observability

  • Experience with Grafana, distributed tracing, custom metrics

Industry Experience

  • Experience in Insurance, Financial Services, or regulated industries

Advanced AI/ML

  • Vector databases
  • Similarity search optimization
  • LangChain / LangSmith

Data Processing

  • Real-time data processing and event streaming

Database Expertise

  • PostgreSQL with vector extensions
  • Advanced Redis clustering

Multi-Cloud

  • Experience with AWS or GCP alongside Azure

Performance Optimization

  • Advanced caching strategies
  • Backend performance tuning
VegaStack
Careers VegaStack
Posted by Careers VegaStack
Bengaluru (Bangalore)
0 - 0 yrs
₹10000 - ₹15000 / mo
Next.js
Python
Django
Tailwind CSS
TypeScript
+4 more

Who We Are

We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that entrust us with their most critical infrastructure and operations. We're bootstrapped, profitable, and scaling rapidly by consistently solving real, impactful problems.

What We Value

  • Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
  • High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.

Who we seek

We are looking for a Fullstack Developer Intern to join our Engineering team. You’ll build and improve internal products in a hands-on internship focused on learning by shipping. Your ultimate goal will be to build highly responsive, innovative AI-based software solutions that meet our business needs.

We're looking for individuals who genuinely care, ship fast, and are driven to make a significant impact.

🌏 Job Location: Bengaluru (Work From Office)

What You Will Be Doing

  • Build user-facing features using Next.js and TypeScript.
  • Convert designs into responsive UI using Tailwind CSS and reusable components.
  • Work with APIs to integrate frontend with backend services.
  • Implement common product workflows: authentication, forms, dashboards, tables, and navigation.
  • Fix bugs, write clean code, and improve performance.
  • Collaborate in a PR-based workflow on GitHub.
  • Write and maintain documentation for the features you ship.
  • Learn and apply best practices: component structure, state management, error handling, accessibility basics.

What We’re Looking For

  • Basic to intermediate experience with JavaScript and NextJS.
  • Familiarity with TypeScript basics.
  • Comfortable with HTML/CSS and responsive design, Tailwind CSS is a plus.
  • Understanding of how APIs work and how to consume them from the frontend.
  • Strong Git knowledge.
  • Strong learning mindset, ownership, and attention to detail.

Benefits

  • Work directly with founders and the leadership team.
  • Drive projects that create real business impact, not busywork.
  • Gain practical skills that traditional education misses.
  • Experience rapid growth as you tackle meaningful challenges.
  • Fuel your career journey with continuous learning and advancement paths.
  • Thrive in a workplace where collaboration powers innovation daily.


Deqode

purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Python
Large Language Models (LLM)
FastAPI
Windows Azure
CI/CD

👉 Job Title: Senior Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period :- Immediate Joiners

💡 Work Mode - 5 Days work from Office

(Candidates serving their notice period are preferred)


Role Summary

We are seeking a Senior Backend Developer with strong expertise in Python and FastAPI to build scalable, high-performance backend systems integrated with LLM technologies on Azure. The role involves designing distributed systems, optimizing data pipelines, and ensuring secure, enterprise-grade applications.


Key Responsibilities

  • Develop backend services using Python & FastAPI (async, middleware)
  • Build high-concurrency, scalable systems and microservices
  • Work with Azure services and event-driven architectures
  • Optimize MongoDB & Redis for performance
  • Integrate LLM APIs (OpenAI, Gemini, Claude)
  • Implement security (JWT, encryption, API management)

Mandatory Skills (Top 3)

  1. Strong Python backend development with FastAPI
  2. Hands-on experience with Microsoft Azure cloud
  3. Experience in building scalable distributed/microservices systems


Good to Have

  • Docker, Kubernetes, CI/CD
  • LLM frameworks (LangChain, vector DBs)
  • Monitoring tools and real-time data processing


Deqode

purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Java
Python
Node.js
Windows Azure
Google Cloud Platform (GCP)
+3 more

👉 Job Title: Backend Developer

🌟 Experience: 5-7 Years

💡Location: Bangalore

👉 Notice Period :- Immediate Joiners

💡 Work Mode - 5 Days work from Office

(Candidates serving their notice period are preferred)


Role Summary

We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.


💫 Key Responsibilities

✅ Design and develop scalable microservices and RESTful APIs

✅ Build event-driven architectures for asynchronous processing

✅ Integrate backend systems with cloud platforms (GCP/Azure)

✅ Ensure secure, reliable, and optimized data handling

✅ Collaborate with cross-functional teams (UI, Data, Platform)

✅ Follow best practices in coding, testing, CI/CD, and containerization


💫 Mandatory Skills (Top 3)

✅ Strong backend programming experience (Python / Node.js / Java)

✅ Expertise in API development & Microservices architecture

✅ Hands-on experience with Cloud platforms (GCP or Azure)





Koolioai
Aishwaria SterlingJames
Posted by Aishwaria SterlingJames
Remote only
0 - 1 yrs
₹15000 - ₹20000 / mo
Python
React.js
HTML/CSS
JavaScript
Redux/Flux
+2 more

About koolio.ai


Website: www.koolio.ai


Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.


About the Internship Position

We are looking for a motivated full-time Software Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.


Key Responsibilities:

  • Assist in the development and maintenance of backend systems and APIs.
  • Write reusable, testable, and efficient code to support scalable web applications.
  • Work with cloud services and server-side technologies to manage data and optimize performance.
  • Collaborate with cross-functional teams to integrate frontend features with backend logic.
  • Collaborate with the product and design teams to develop and implement user-friendly web interfaces
  • Ensure responsive design and optimize web pages for performance and scalability across devices
  • Debug and resolve issues, improving the overall user experience on the platform and ensuring reliability
  • Assist in integrating APIs and frontend services with the backend
  • Stay up-to-date with the latest trends and suggest improvements to enhance the platform’s functionality


Requirements and Skills:

  • Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
  • Technical Skills:
  • Good understanding of server-side technologies like Python
  • Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
  • Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
  • Knowledge of version control systems such as Git.
  • Strong proficiency in JavaScript frameworks like ReactJS or Redux
  • Proficiency in frontend languages such as HTML, CSS, and JavaScript (ES6+)
  • Soft Skills:
  • Eagerness to learn and adapt in a fast-paced environment.
  • Strong problem-solving and critical-thinking skills.
  • Effective communication and teamwork capabilities.
  • Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.


Why Join Us?

  • Gain real-world experience working on a cutting-edge platform.
  • Work alongside a talented and passionate team committed to innovation.
  • Receive mentorship and guidance from industry experts.
  • Opportunity to transition to a full-time role based on performance and company needs.


This internship is an excellent opportunity to kickstart your career in software development, build critical skills, and contribute to a product that has a real-world impact.

TalentXO
tabbasum shaikh
Posted by tabbasum shaikh
Bengaluru (Bangalore)
4 - 7 yrs
₹34L - ₹40L / yr
Python
LLM
OpenAI
Gemini
RAG
+5 more

Role & Responsibilities

As a Senior GenAI Engineer you will own the AI layer of our product — building the features that make Zenskar intelligent. This is not a research role and not a prompt-engineering role. You will build production AI systems that enterprise clients depend on, which means reliability, observability, and rigorous evals matter as much as the AI capability itself. You own the full vertical — the model, the pipeline, and the UI.

  • Build and own CS Copilot — a real-time assistant for customer success teams, spanning STT pipelines, live transcription, and LLM-powered suggestions
  • Build LLM-powered document understanding features — extracting structured, reliable data from unstructured enterprise documents
  • Own AI feature UIs end-to-end — you build the interface, not just the model integration layer
  • Design and maintain an eval framework — define what 'working' means for each AI feature and catch regressions before users do
  • Drive model selection and integration decisions — choosing the right provider and approach for each use case, managing latency and cost
  • Own AI platform reliability — observability, fallback behaviour, and graceful degradation when models fail
  • Work closely with product, customer success, and the full-stack engineer — AI features only matter if they are usable and trusted by real users
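An eval framework of the kind described above can start as small as a pass-rate gate over (input, checker) cases; a toy sketch with a stand-in for the model call (every name, case, and heuristic below is hypothetical):

```python
def run_evals(model_fn, cases):
    # Each case pairs an input with a checker predicate defining what
    # "working" means for that input; return the overall pass rate.
    results = [bool(check(model_fn(inp))) for inp, check in cases]
    return sum(results) / len(results)

def toy_copilot(prompt):
    # Stand-in for a real LLM call, so the harness shape is visible.
    if "angry" in prompt:
        return "escalate to a human agent"
    return "send the refund FAQ"

cases = [
    ("customer is angry about billing", lambda out: "escalate" in out),
    ("customer asks for refund steps", lambda out: "refund" in out),
]
pass_rate = run_evals(toy_copilot, cases)
```

Running the same suite on every model or prompt change is what turns regressions into a CI failure instead of a user report.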

THE IMPACT YOU'LL MAKE-

  • You will define what AI means at Zenskar — the features you ship will be the most visible and differentiated parts of the product
  • CS Copilot, if done well, changes how enterprise customer success teams operate every single day — this is a high-stakes, high-visibility surface
  • You will establish the engineering culture around AI reliability at Zenskar — evals, observability, and disciplined iteration
  • Your work will directly accelerate enterprise deals — AI features are increasingly a buying criterion for our clients
  • You will be the person who brings engineering rigour to a domain where most companies ship demos and call it a feature

Ideal Candidate

  • Strong Senior GenAI / AI Backend Engineer Profiles
  • Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production
  • Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems
  • Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects
  • Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines
  • Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases
  • Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)
  • Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation
  • Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects
  • Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations
  • Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking
  • Mandatory (Company) – Product companies / startups, preferably Series A to Series D
  • Mandatory (Note) - Candidate's overall experience should not be more than 7 Yrs
  • Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks
  • Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
  • Preferred (Skill) – Experience with fine-tuning (LoRA / QLoRA) or open-source model deployment (vLLM / Ollama)
  • Preferred (Frontend) – Basic ability to build or contribute to frontend (React or similar)
  • Highly Preferred (Education) – Candidates from Tier-1 institutes (IITs, BITS, NITs, IIITs, top global universities)


Remote only
0 - 2 yrs
₹1.4L - ₹3L / yr
Python
React.js
FastAPI
Open-source LLMs
AI Coding Tools
+1 more

About AIVOA


AIVOA is building an AI-native Supply Chain Operating System for Life Sciences companies (API & FDF manufacturers).

We are creating an intelligent control layer that connects procurement, production, compliance, and logistics — enabling faster decisions, automation, and real-time visibility across operations.


About the Role

We are looking for a highly driven fresher to join as an AI Engineer and work on building AI-native systems from scratch.

This is a full-stack engineering role where you will:

  • Build backend systems using Python (FastAPI)
  • Develop frontend interfaces using React + Vite
  • Work on AI-powered workflows and automation systems

You will directly contribute to building real-world systems used in regulated industries.


What You’ll Work On

  • Backend APIs using FastAPI (Python)
  • Frontend applications using React + Vite
  • AI-assisted workflows (automation, decision systems)
  • Integrating APIs, databases, and AI tools
  • Building end-to-end product features (not isolated tasks)


Required Skills


  • Strong basics in Python
  • Basic understanding of React
  • Understanding of APIs and how systems connect
  • Basic SQL knowledge
  • Strong problem-solving mindset


Good to Have (Optional)


  • FastAPI exposure
  • React project experience
  • Git/GitHub
  • Interest in AI tools (ChatGPT, Copilot, etc.)


Who Should Apply


  • Freshers serious about becoming AI / Full Stack Engineers
  • Builders (projects > certificates)
  • People who can learn fast and execute
  • Candidates who want startup experience and real ownership


Growth

  • Work directly with founders and domain experts
  • Build real AI systems 
  • Fast growth based on performance


A boutique software product engineering services company


Agency job
via CIEL HR Services by Ragesh A C
Pune
12 - 15 yrs
₹40L - ₹75L / yr
Python
PostgreSQL
C++
CI/CD
React.js
+4 more

Job Description :


Responsibilities :


- Design and develop Python-based microservices


- Build and operate gRPC / Protobuf-based APIs


- Implement asynchronous processing, concurrency, and job orchestration


- Design systems with retries, idempotency, and fault tolerance


- Work with and integrate native C/C++ components with Python services


- Design and optimise PostgreSQL schemas and queries


- Contribute to React-based frontend applications (TypeScript/JavaScript)


- Own features end-to-end : design, development, deployment, and monitoring


- Debug issues across application, system, and performance layers


- Build and maintain CI/CD pipelines and automated tests
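Idempotency, one of the patterns in the responsibilities above, can be sketched with an idempotency-key cache so that client retries never duplicate a side effect (the in-memory dicts here stand in for a shared, TTL-bounded store, and all names are illustrative):

```python
processed = {}      # idempotency_key -> prior result
side_effects = []   # records each time the real work actually runs

def create_payment(idempotency_key, amount):
    # Replaying a request with the same key returns the cached result
    # instead of re-executing the side effect.
    if idempotency_key in processed:
        return processed[idempotency_key]
    side_effects.append(f"charged {amount}")
    result = f"payment-{len(side_effects)}"
    processed[idempotency_key] = result
    return result

first = create_payment("key-1", 100)
retry = create_payment("key-1", 100)  # e.g. client retried after a timeout
```

Clients typically generate the key per logical operation, so a timeout-driven retry is safe by construction.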


Requirements :


- Strong experience in Python backend development (sync + async)


- Hands-on experience with gRPC / Protobuf-based APIs


- Experience with FastAPI / Flask / Django


- Strong understanding of microservices and distributed systems


- Experience with PostgreSQL and data modeling


- Exposure to React / JavaScript / TypeScript


- Knowledge of concurrency, multi-threading, and system design


- Strong understanding of Linux systems and debugging


- Experience in production environments (performance tuning, issue resolution)


- Exposure to C/C++ or Python-native integrations (preferred)


Qualification :


- Engineering Graduates from Tier 1 & Tier 2 Colleges/Deemed Universities only


- Open to outstation candidates.


- Experience range : 12 to 15 years (not more than 15)


- Strictly an individual-contributor role with hands-on coding: 80% individual contribution & 20% architecture, design, and systems



CLOUDSUFI
Noida
5 - 12 yrs
₹30L - ₹50L / yr
Large Language Models (LLM) tuning
Retrieval Augmented Generation (RAG)
Generative AI
Natural Language Processing (NLP)
Python
+2 more

About Us :


CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values :


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement :


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.


Role : Lead AI/Senior Engineer-AI


Location : Noida, Delhi/NCR


Experience : 5- 12 years


Education : BTech / BE / MCA / MSc Computer Science


Must Haves :


Conversational AI & NLU :


- Advanced proficiency with Dialogflow CX


- Intent classification, entity extraction, conversation flow design


- Experience building structured dialogue flows with routing logic

- CCAI platform familiarity


Agentic AI & Multi-Step Reasoning :


- Production experience with Google ADK (or LangChain/LangGraph equivalent)


- Multi-step reasoning and tool orchestration capability


- Tool-use patterns and function calling implementation


RAG Systems & Knowledge Management :


- Hands-on Vertex AI RAG Engine experience (or equivalent)


- Semantic search, chunking strategies, retrieval optimization


- Document processing pipelines (PDF parsing, chunking)


LLM/GenAI & Prompt Engineering :


- Production experience with Gemini models


- Advanced prompt engineering for customer support


- Langfuse experience for prompt management


Google Cloud Platform & Vertex AI :


- Advanced Vertex AI proficiency (Generative AI APIs, Agent Engine)


- Cloud Functions and Cloud Run deployment experience


- BigQuery for conversation analytics


API Integration :


- Genesys Cloud CX integration experience


- REST API design and webhook implementation


- Enterprise authentication patterns (OAuth 2.0)


Good To Have :


Conversational AI & NLU :


- Multi-language support implementation (Spanish/English)


- Telephony integration (speech recognition, TTS, DTMF)


- Barge-in handling and voice optimization


Agentic AI :


- Agent state management and session persistence


- Advanced fallback strategies and error recovery


- Dynamic tool selection and evaluation


RAG Systems :


- Re-ranking and advanced retrieval quality metrics


- Query expansion and context-aware retrieval


- Corpus organization strategies


LLM/GenAI :


- Prompt versioning, A/B testing, iterative refinement


- Prompt injection mitigation strategies


- In-context learning, few-shot, chain-of-thought techniques


LLMOps & Observability :


- Vertex AI Evaluation Service experience


- Groundedness, relevance, coherence, safety metrics


- Trace-level debugging with Cloud Trace


- Centralized logging strategies


Google Cloud :


- Application Integration connectors


- VPC Service Controls and enterprise security


- Cloud Pub/Sub for event-driven systems


Enterprise Integration :


- Third-party AI agent orchestration (SAP Joule, ServiceNow AI, Agentforce)


- Salesforce, SAP, ServiceNow integration patterns


- Context passage strategies for escalations


Architecture & System Design :


- Configuration-driven systems (Meta-Agent patterns)


- Microservices and containerization


- Scalable, multi-tenant system design


- Disaster recovery and failover strategies


Product Quality & KPIs :


- Customer support metrics expertise (CSAT, SSR, escalation rate)


- A/B testing and experimentation frameworks


- User feedback loop implementation


Deliverables :


- Architecture Design : End-to-end platform architecture, data flow diagrams, Dialogflow CX vs. ADK routing decisions


- Conversational Flows : 15+ dialogue flows covering billing, networking, appointments, troubleshooting, and escalations


- ADK Agent Implementation : Complex reasoning agents for technical support, account analysis, and context preparation


- RAG Pipeline : Document processing, chunking configuration, corpus organization (product docs, support articles, policies, promotions)


- Prompt Management : System prompts, Langfuse setup, playbook governance, version control


- Quality Framework : Evaluation pipeline, metrics dashboards, automated assessment, continuous improvement recommendations


- Integration Layer : Genesys handoff, webhook integrations, Application Integration setup, session management


- Testing & Validation : Conversation flow tests, performance testing (latency, throughput, 1000 concurrent users), security validation


- Response time <2 seconds (p95), 99.9% uptime, 1000 concurrent conversations


- Data encryption (TLS 1.2+, AES-256 at rest), PII redaction, 1-year data retention


- Graceful degradation and fallback mechanisms
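The RAG pipeline deliverable above calls for document processing and chunking configuration. As an illustration of one common strategy (not a prescription from this posting), the sketch below splits a document into fixed-size character chunks with overlap so retrieved passages keep context across boundaries; the 400/50 sizes are placeholder values:

```python
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap,
    so retrieved passages keep context across chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

In production, chunking is usually token-based and sentence-aware rather than character-based, and the chunk size is tuned against retrieval-quality metrics.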

Read more
Zeuron.AI

Posted by Kavitha Rajan
Bengaluru (Bangalore)
1 - 2 yrs
₹11L - ₹12L / yr
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Computer Vision
skill iconFlutter
Embedded C
+2 more

Job Title: Software/Hardware Engineer (IIT/NIT)

Location: Bangalore

Website: https://www.zeuron.ai

Experience: 1 Year

CTC: ₹12 LPA


About the Company

Zeuron.ai is a Bangalore-based deep-tech startup founded in 2019, focused on building brain-inspired computing and AI-driven healthcare solutions. The company combines neuroscience, AI, and gaming to create innovative digital therapeutics and neurotechnology platforms for improving brain health, rehabilitation, and overall well-being.

About the Role

We are looking for a highly motivated Software/Hardware Engineer from premier institutes (IIT/NIT) with strong fundamentals and a passion for building scalable and efficient systems. This role offers an opportunity to work on cutting-edge technology and solve real-world problems.

 

Key Responsibilities

Design, develop, and optimize software/hardware solutions

Work on system architecture, debugging, and performance improvements

Collaborate with cross-functional teams (product, design, operations)

Participate in code reviews, testing, and deployment processes

Contribute to innovation and continuous improvement initiatives

 

Requirements

B.Tech/M.Tech from IITs/NITs (Computer Science, Electronics, Electrical, or related fields)

1 year of experience (internships/project experience considered)

Strong programming skills (C/C++/Python/Java) or hardware fundamentals (embedded systems, VLSI, circuit design)

Good understanding of data structures, algorithms, and system design

Problem-solving mindset with strong analytical skills


Preferred Skills

Experience with embedded systems, IoT, or product development

Knowledge of cloud platforms or system-level programming

Proficiency in computer vision, Flutter, JavaScript, and AI/ML

Read more
Improving
Remote only
4 - 8 yrs
Best in industry
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
skill iconPython
skill iconJenkins
skill iconKubernetes

What are we looking for?

  1. You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  2. You are able to manage multi-region clusters for disaster recovery.
  3. You have a good understanding of AWS stack.
  4. You have production-level experience with Kubernetes.
  5. You are comfortable coding/programming and can do so whenever required. 
  6. You have worked with programmable infrastructure in some way: built a CI/CD pipeline, provisioned infrastructure programmatically, or provisioned monitoring and logging infrastructure for large fleets of machines.
  7. You love automating things, sometimes even what seems impossible to automate - for example, one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
  8. You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics. Because you know that words are easy to talk about but there is a fair amount of work to build such a system in practice.
  9. You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
  10. You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  11. You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.

What will you be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions in this space are continuously evolving, but fundamentally you will be solving them with the simplest, most scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open source projects, understanding their capabilities and limitations, and applying the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.


Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹1L - ₹8L / yr
databricks
ETL
PySpark
Apache Spark
CI/CD
+7 more

Profile - Databricks Developer

Experience - 5+ years

Location - Bangalore (on-site)

PF & BGV are mandatory


Job Description:


* Design, build, and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (PySpark).

* Develop scalable, high performance data solutions using Spark distributed processing.

* Lead engineering initiatives focused on automation, performance tuning, and platform modernization.

* Implement and manage CI/CD pipelines using Git-based workflows and tools such as GitHub Actions or Jenkins.

* Collaborate with cross-functional teams to translate business needs into technical solutions.

* Ensure data quality, governance, and security across all processes.

* Troubleshoot and optimize Spark jobs, Databricks clusters, and workflows.

* Participate in code reviews and develop reusable engineering frameworks.

* Should have knowledge of utilizing AI tools to improve productivity and support daily engineering activities.

* Strong knowledge and hands-on experience in Databricks Genie, including prompt engineering, workspace usage, and automation
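One pattern this kind of pipeline work routinely involves is deduplicating a raw feed to the latest record per key with an analytical window function. The sketch below shows that pattern; the `ROW_NUMBER()` syntax carries over to Spark SQL, but sqlite3 is used here only so the example runs anywhere, and the table and column names are invented for illustration:

```python
import sqlite3

# Keep only the latest record per order_id -- a common step when merging
# change-data-capture feeds into a Delta table. Table and column names
# are illustrative; in Databricks this would run as Spark SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders_raw (order_id INT, status TEXT, updated_at TEXT);
    INSERT INTO orders_raw VALUES
        (1, 'created', '2024-01-01'),
        (1, 'shipped', '2024-01-03'),
        (2, 'created', '2024-01-02');
""")
latest = conn.execute("""
    SELECT order_id, status
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id ORDER BY updated_at DESC
               ) AS rn
        FROM orders_raw
    )
    WHERE rn = 1
    ORDER BY order_id
""").fetchall()
print(latest)  # [(1, 'shipped'), (2, 'created')]
```

On Databricks the same dedup is often expressed with `pyspark.sql.Window` plus `row_number()`, or folded into a `MERGE INTO` against the Delta table.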


Required Skills & Experience:

* 5+ years of experience in Data Engineering or related fields.

* Strong hands-on expertise in Databricks (notebooks, Delta Lake, job orchestration).

* Deep knowledge of Apache Spark (PySpark, Spark SQL, optimization techniques).

* Strong proficiency in Python for data processing, automation, and framework development.

* Strong proficiency in SQL, including complex queries, performance tuning, and analytical functions.

* Strong knowledge of Databricks Genie and leveraging it for engineering workflows.

* Strong experience with CI/CD and Git-based development workflows.

* Proficiency in data modeling and ETL/ELT pipeline design.

* Experience with automation frameworks and scheduling tools.

* Solid understanding of distributed systems and big data concepts

Read more
Ctruh

Posted by Ariba Khan
Bengaluru (Bangalore)
9 - 13 yrs
Upto ₹60L / yr (varies)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconPython
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
+2 more

About the Role:

Ctruh is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.


This is not a pure management role - expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy.


What You’ll Own:

1. Technical Leadership & Architecture

  • Architect Ctruh’s full-stack platform across frontend, backend, infrastructure, and AI.
  • Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.
  • Make decisions on stack, scalability patterns, architecture, and technical debt.
  • Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.
  • Lead architectural discussions, design reviews, and set engineering standards.

2. Hands-On Development

  • Write production-grade code across frontend, backend, APIs, and cloud infra.
  • Build critical features and core system components independently.
  • Debug complex systems and optimize performance end-to-end.
  • Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.
  • Build scalable backend services for large-scale asset processing and real-time pipelines.
  • Develop WebGL/Three.js rendering and AR workflows.

3. Team Building & Engineering Management

  • Hire and grow a team of 5–8 engineers initially (scaling to 15–20).
  • Establish engineering culture, values, and best practices.
  • Build career frameworks, performance systems, and growth plans.
  • Conduct 1:1s, mentor engineers, and drive continuous improvement.
  • Set up processes for agile execution, deployments, and incident response.

4. Product & Cross-Functional Collaboration

  • Work with the founder and product team on roadmap, feasibility, and prioritization.
  • Translate product requirements into technical execution plans.
  • Collaborate with design for UX quality and technical alignment.
  • Support sales and customer success with integrations and technical discussions.
  • Contribute technical inputs to product strategy and customer-facing initiatives.

5. Engineering Operations & Infrastructure

  • Own CI/CD, testing frameworks, deployments, and automation.
  • Create monitoring, logging, and alerting setups for reliability.
  • Manage AWS infrastructure with a focus on cost and performance.
  • Build internal tools, documentation, and developer workflows.
  • Ensure enterprise-grade security, compliance, and reliability.


Tech Stack:

1. Frontend: React.js, Next.js, TypeScript, WebGL, Three.js

2. Backend: Node.js, Python, Express/FastAPI, REST, GraphQL

3. AI/ML: PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines

4. 3D & Graphics: Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization

5. Databases: PostgreSQL, MongoDB, Redis, vector databases

6. Cloud & Infra: AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes, CI/CD: GitHub Actions, Monitoring: Datadog, Sentry


What We’re Looking For:

1. Must-Haves

  • 9+ years of engineering experience, with 3–4 years in technical leadership.
  • Deep full-stack experience with strong system design fundamentals.
  • Proven success building products from 0→1 in fast-paced environments.
  • Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
  • AI/ML deployment experience (CV, generative AI, 3D reconstruction).
  • Ability to design scalable architectures for high-performance systems.
  • Strong people leadership with experience hiring and mentoring teams.
  • Ready to code, review, design, and lead from the front.
  • Startup mindset: fast execution, problem-solving, ownership.


2. Highly Desirable

  • Strong 3D graphics/WebGL/Three.js knowledge.
  • Experience with real-time systems, rendering optimizations, or large-scale pipelines.
  • Background in B2B SaaS, XR, gaming, or immersive tech.
  • Experience scaling engineering teams from 5 → 20+.
  • Open-source contributions or technical content creation.
  • Experience working closely with founders or executive leadership.


Why Ctruh:

  • Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.
  • Build from day zero – architecture, team, and culture.
  • Path to CTO as the company scales.
  • High autonomy to drive technical decisions.
  • Direct founder collaboration on product vision.
  • High ownership, high-growth environment.
  • Backed by global leaders: Microsoft, Google, NVIDIA, AWS.

Location & Work Culture:

  • Location: HSR Layout, Bengaluru
  • Schedule: 6 days a week (5 days in office, Saturdays WFH)
  • Culture: High-intensity, high-integrity, engineering-first
  • Team: Young, ambitious, technically strong


The Ideal Candidate:

You're an engineer at heart and a leader by instinct. You love coding as much as architecting systems. You balance speed with quality, innovate fearlessly, and thrive in ambiguity.


You can:

  • Architect microservices in the morning
  • Review mission-critical PRs at noon
  • Build a Three.js shader in the afternoon
  • Run an engineering standup in the evening


You’ve experienced both the pain of poor architecture and the joy of elegant systems - and know how to build things that scale. If you geek out over AI/ML pipelines, 3D rendering, WebGL performance, or building engineering orgs from scratch, you’ll love Ctruh.

Read more
Hashone Career
Madhavan I
Posted by Madhavan I
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹28L / yr
SQL
skill iconPython
AtScale

Summary:

Data Engineer/Analytics Engineer with experience in semantic layer modeling using AtScale, building scalable data pipelines, and delivering high-performance analytics solutions on cloud platforms.




 Responsibilities

• Build and maintain ETL/ELT pipelines for large-scale data

• Develop semantic models, cubes, and metrics in AtScale

• Optimize query performance and BI dashboards

• Integrate data platforms (Snowflake, Databricks, BigQuery)

• Collaborate with analysts and business teams




 Skills

• SQL, Python/Scala

• Data modeling (star schema, OLAP)

• AtScale (semantic layer)

• Spark, dbt, Airflow

• BI tools (Tableau, Power BI, Looker)

• AWS / GCP / Azure



 Experience

• 3–8+ years in data/analytics engineering

• Experience with enterprise data platforms and BI systems

Read more
ARDEM Incorporated
Remote only
8 - 12 yrs
₹9L - ₹12L / yr
Project delivery
Software Development
Project Management
Team Management
skill icon.NET
+10 more

Senior Project Owner / Project Manager - Technology


Department - Technology / Software Development

Work Mode - Work From Home (WFH), Full Time

Experience - Minimum 10 Years (Development Background)

Time Zone - Candidate should be comfortable working in US time zone overlap and attending client calls accordingly.


ROLE SUMMARY

We are looking for a seasoned Senior Project Owner / Project Manager with a strong development foundation to lead our technology initiatives. This role bridges client management and technical execution: you will own end-to-end delivery of multiple concurrent projects while supporting a high-performing remote team.


KEY RESPONSIBILITIES

Project & Delivery Management

  • Own and manage multiple concurrent technology projects from initiation to production release
  • Define project scope, timelines, milestones, and resource allocation plans
  • Distribute tasks effectively across a team of developers, QA, and support engineers
  • Track assigned work daily, follow up on progress, and proactively remove blockers
  • Ensure all projects meet deadlines and quality benchmarks without compromise
  • Participate actively in production activities and take full accountability for live deployments


US Client Management

  • Serve as the Technology single point of contact for all assigned US clients
  • Attend and lead client calls that are focused on an ARDEM Technical Solution. This may include discussions related to future clients or existing clients (US time zone overlap required)
  • Resolve client queries, manage escalations, and ensure high client satisfaction
  • Showcase company-developed applications and software demos confidently to clients
  • Translate complex client requirements into clear technical deliverables for the team


Team Leadership

  • Lead, mentor, and performance-manage a distributed remote team of technical members
  • Foster accountability, ownership, and a high-delivery culture within the team
  • Conduct sprint planning, stand-ups, retrospectives, and performance reviews
  • Identify skill gaps and work with HR/training teams to bridge them


Process & Operations

  • Deeply understand ARDEM's internal processes and align project execution accordingly
  • Ensure development standards and best practices are followed across all projects
  • Manage crisis situations with composure, identify root causes and drive swift resolution
  • Coordinate with cross-functional teams including HR, Operations, Training, and QA
  • Maintain project documentation, status reports, and risk registers


REQUIRED EXPERIENCE

  • 10+ years of total experience in software development and project management
  • 5–7 years of hands-on coding experience in one or more technologies listed below
  • 2–3 years in a team management or tech lead role overseeing 5+ members
  • Proven experience managing multiple simultaneous projects in a remote/WFH environment
  • Prior experience working with US-based clients; strong understanding of US work culture and expectations


TECHNICAL SKILLS

  • Python: scripting, automation, data processing, backend services
  • JavaScript / Node.js: server-side development, REST APIs, async workflows
  • .NET Core: enterprise application development and service integration
  • SQL Databases: query optimization, schema design, stored procedures
  • Familiarity with CI/CD pipelines, Git workflows, and deployment processes
  • Ability to review code, understand architectural decisions, and guide the team technically


SKILLS & COMPETENCIES

  • Exceptional verbal and written communication skills in English; client-facing confidence is a must
  • Strong crisis-management and conflict-resolution ability under tight deadlines
  • Highly organized, with a structured approach to planning, prioritization, and execution
  • Self-driven and accountable, capable of operating independently in a remote environment
  • Strong presentation skills; able to demo software to non-technical stakeholders
  • Empathetic leadership style with the ability to motivate and align diverse team members


QUALIFICATIONS

  • Bachelor's or Master's degree in Computer Science
  • PMP Certification: Preferred (candidates without PMP must demonstrate equivalent project management rigor)
  • Agile / Scrum certifications (CSM, PMI-ACP) are an added advantage


LOCATION PREFERENCE

  • Candidates must be based in a Tier-1 city: Mumbai, Delhi NCR, Bengaluru, Hyderabad, Chennai, Pune, or Kolkata
  • This is a full-time Work From Home role: reliable internet, a dedicated workspace, and availability during US business hours are mandatory


ABOUT ARDEM

ARDEM Incorporated is a leading Business Process Outsourcing (BPO) and Automation company serving US-based clients across diverse industries. Our Technology Team builds and maintains in-house applications that power data processing pipelines, automation workflows, internal platforms, and domain-specific training modules, all engineered to deliver operational excellence at scale. To our clients, we provide cloud-based platforms to assist in their day-to-day business analytics. Our cloud services focus on finance, logistics, and utility management.

Read more
Blitzy

Posted by Eman Khan
Pune
8 - 15 yrs
₹35L - ₹45L / yr
Generative AI (GenAI)
Agentic AI
Large Language Models (LLM)
skill iconPython
skill iconJavascript
+2 more

About Blitzy

Blitzy is a Cambridge, MA based AI software development platform on a mission to revolutionize the software development life cycle by autonomously building custom software to unlock the next industrial revolution. We're transforming how enterprises build software, turning enterprise requirements into production-ready code with an agentic software development platform that can autonomously execute 80% of the quantum of software development work. We're backed by multiple tier 1 investors, and have proven success as founders of previous start-ups.


The Role

We're seeking an experienced POC Engineer to join our India HQ2 team. This critical role sits at the intersection of cutting-edge AI technology and enterprise customer success. You'll lead the technical design, development, and delivery of proof-of-concept implementations that demonstrate the transformative power of Blitzy's agentic software development platform to Fortune 500 clients. This role requires deep technical expertise, strong customer-facing skills, and the ability to rapidly prototype and iterate on complex AI-driven solutions.


What You'll Do

  • Lead Technical Pilots and POCs: Design, architect, and implement end-to-end proof-of-concept solutions that showcase Blitzy's platform capabilities to enterprise customers
  • Customer Collaboration: Work directly with Fortune 500 clients to understand their technical requirements, challenges, and success criteria for POC engagements
  • Rapid Prototyping: Build functional demonstrations and prototypes that prove technical feasibility and business value, often under tight timelines
  • AI/ML Integration: Leverage LLMs, agentic workflows, and multimodal AI capabilities to solve complex customer use cases
  • Technical Architecture: Design scalable, production-grade architectures that can transition from POC to full implementation
  • Performance Optimization: Conduct performance analysis, identify bottlenecks, and optimize solutions for speed, efficiency, and reliability
  • Documentation & Knowledge Transfer: Create comprehensive technical documentation and effectively communicate architecture decisions to both technical and non-technical stakeholders
  • Cross-functional Collaboration: Partner with Sales, Product, and Engineering teams to refine platform capabilities based on customer feedback
  • Innovation Leadership: Stay at the forefront of AI/ML advances and identify opportunities to incorporate new techniques into POC solutions


What We're Looking For

Required Qualifications

  • 10+ years of software engineering experience with proven track record in enterprise software development and production systems
  • Deep expertise in AI/ML technologies, particularly with LLMs, agentic systems, and generative AI applications
  • Strong background in building enterprise-grade solutions that have shipped to production
  • Extensive experience with modern software architecture, including cloud-based platforms (AWS, GCP, or Azure)
  • Proficiency in multiple programming languages (Python, TypeScript/JavaScript, Go, Java, or C++)
  • Demonstrated experience leading technical teams and mentoring engineers
  • Proven ability to translate complex technical concepts for diverse audiences
  • Track record of working directly with enterprise customers in technical consulting or solutions engineering capacity
  • Strong analytical and problem-solving skills with focus on performance optimization
  • Excellent communication skills with ability to present to C-level executives

Preferred Qualifications

  • Background in developing AI agentic workflow orchestrators or similar multi-agent systems
  • Experience with GPU-based computing and optimization
  • Experience with sequence-to-sequence models and deep neural network architectures
  • M.S. or Ph.D. in Computer Science, Engineering, or related field
  • Experience leading distributed teams across geographies
  • Track record of innovation (patents, publications, or open-source contributions)
  • Experience with Responsible AI practices and frameworks

Key Success Metrics

  • Successfully deliver POCs that convert to paid customer engagements
  • Achieve high customer satisfaction scores and build strong customer relationships
  • Reduce time-to-value for POC implementations through reusable frameworks and components
  • Contribute technical insights that improve core platform capabilities
  • Build scalable POC processes and best practices for the India team

Our Culture

Who we are: Led by two pioneering co-founders, we are one of the fastest-growing companies in the U.S., creating our own category of enterprise autonomous software development. We automate thousands of hours of software development for our customers, which include strong representation within the Fortune 500.

How we work:

  • We move Blitzy Fast: Time is both our company's and our clients' most precious asset. We move quickly and decisively to innovate internally and deliver exceptional software externally.
  • Championship Mindset: We operate like a professional sports team. We win as a team by holding ourselves and each other to high standards, collaborating in-person, and remaining focused on the mission.
  • Passion for Invention: We're pushing the frontier of what's possible, requiring constant innovation and iteration.
  • We Work for the Customer: We focus on delivering outsized value to the customers we work with and expanding those relationships into deep, meaningful partnerships.

We believe in being 'everyday athletes'—taking care of ourselves so we can bring our best minds to work. We promote great sleep, movement, and restorative activities for optimal mental performance. It makes for a happier and more productive team.

What We Offer

  • Opportunity to shape Blitzy's expansion in India as a founding member of HQ2
  • Work with cutting-edge AI technology and Fortune 500 enterprise clients
  • Competitive compensation package including equity
  • Collaborative, high-performance team environment
  • Professional growth opportunities in a fast-scaling startup
  • Chance to make significant impact on product direction through customer insights
  • Culture that values innovation, technical excellence, and work-life balance
Read more
InteligenAI
Ayushi Sarmah
Posted by Ayushi Sarmah
Gurugram
2 - 6 yrs
₹12L - ₹30L / yr
skill iconPython
Artificial Intelligence (AI)
Generative AI
skill iconMachine Learning (ML)
skill iconData Science
+2 more

Title: AI Solutions Architect

Location: Gurgaon

Experience: 2-6 years

Type: Full-Time

 

About the company:

InteligenAI is a fast-growing, profitable AI product studio with a global clientele.

We design and deliver enterprise-grade, custom AI solutions that solve real problems - going far beyond makeshift PoCs and over-promising demos.

We’re building one of the most trusted AI services companies in the world - and are looking for a driven, entrepreneurial person to help us get there. Our work spans Agentic AI architectures, document digitization pipelines, retrieval-augmented generation (RAG) systems, and SFT + RLHF workflows - all built in-house so we can move fast, think deep and deliver with confidence.

If you are looking for meaningful work, high ownership and the freedom to push boundaries, you will feel right at home here.

 

About the role:

We are looking for a hands-on AI engineer to lead AI solution delivery across our client engagements. This role blends technical leadership with solution architecture and a strong product mindset. You will be at the frontline of AI solution delivery, where you will drive the full product lifecycle from understanding business objectives, designing technical approaches, building POCs to delivering production-grade AI systems.


This is not a backseat, “wait for instructions” role. You will work directly with founders, clients, and our growing AI team to shape solutions that make an impact. This role is ideal for someone with an entrepreneurial mindset, a desire to learn and grow constantly and someone who enjoys their work thoroughly. You will be handling multiple responsibilities simultaneously where you will be challenged every day. If you are looking for a 9-to-5 role, this may not be the right fit.

 

Key responsibilities:

·      Understand business problems, translate them into solution architectures and lead end-to-end AI solution delivery

·      Design and deliver production-grade ML/GenAI systems tailored to real-world use cases

·      Collaborate with clients to identify needs, present solutions and guide implementation

·      Act as a thought partner to the founder and contribute to strategic decisions

·      Lead and mentor a growing AI/Tech team

·      Collaborate with product and design teams to ship AI-driven features that solve real user problems

·      Continuously explore and experiment with cutting-edge GenAI tools, technologies and frameworks

 

Must have skills:

·      2+ years of hands-on experience building AI/ML solutions across domains

·      Proven ability to understand business workflows and design relevant AI solutions

·      Strong knowledge of GenAI and experience building scalable applications using LLMs, prompt engineering and embedding models

·      Proficient in Python and familiar with libraries/frameworks such as LangChain, Hugging Face Transformers, OpenAI APIs, Pinecone/FAISS

·      Solid understanding of data pipelines, data analytics and ability to take solutions from prototype to production

·      Self-starter mindset: ability to independently manage projects, make decisions, and deliver outcomes from day 1

·      Excellent communication and problem-solving skills
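The embedding-and-retrieval skills listed above reduce, at their core, to nearest-neighbour search by cosine similarity. A minimal stdlib sketch with toy hand-written vectors standing in for model-generated embeddings (production systems would use FAISS or Pinecone, as the listing notes; the documents and numbers here are purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy document "embeddings" -- hand-written stand-ins for model output.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api reference": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['refund policy']
```

Real vector databases add approximate-nearest-neighbour indexing on top of exactly this similarity ranking, which is what makes the search scale beyond a handful of documents.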

 

Good to have:

·      Open-source contributions or personal GenAI projects

·      Experience working in startups or fast-paced, tech-first organizations

·      Experience with MLOps tools

·      Entrepreneurial experience

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Upto ₹45L / yr (Varies)
MLOps
Python
Databricks
Windows Azure
Amazon Web Services (AWS)

We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.

This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.


Responsibilities

  • Design, develop, and implement machine learning models and algorithms to solve complex business problems.
  • Collaborate with data scientists to transition models from research and development into production-ready systems.
  • Build and maintain scalable data pipelines for ML model training and inference using Databricks.
  • Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
  • Deploy and manage ML models in production environments on Azure, leveraging services such as:
  • Azure Machine Learning
  • Azure Kubernetes Service (AKS)
  • Azure Functions
  • Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
  • Ensure the reliability, performance, and scalability of ML systems in production.
  • Monitor model performance, detect model drift, and implement retraining strategies.
  • Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
  • Document model architecture, data flows, and operational procedures.
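One of the responsibilities above, detecting model drift and triggering retraining, can be made concrete with a toy check. A hedged sketch only: the statistic (mean shift scaled by baseline spread) and the threshold are illustrative, and a real Azure/Databricks setup would use a dedicated drift monitor rather than this hand-rolled version:

```python
import statistics

def drift_score(baseline, current):
    """Absolute shift in mean, scaled by baseline stdev (a crude drift signal)."""
    spread = statistics.pstdev(baseline) or 1.0  # guard against zero spread
    return abs(statistics.mean(current) - statistics.mean(baseline)) / spread

# Toy prediction-score windows (illustrative numbers, not real model output).
baseline = [0.50, 0.52, 0.48, 0.51, 0.49]
stable   = [0.51, 0.49, 0.50, 0.52, 0.48]
drifted  = [0.70, 0.72, 0.69, 0.71, 0.73]

THRESHOLD = 3.0  # retrain when the mean moves roughly 3 stdevs

print(drift_score(baseline, stable) > THRESHOLD)   # False
print(drift_score(baseline, drifted) > THRESHOLD)  # True
```

In production the same comparison would run on a schedule against logged inference data, with the alert wired into the retraining pipeline.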

Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.

Experience

  • Minimum 3+ years of professional experience as an ML Engineer or in a similar role.

Required Skills

  • Strong proficiency in Python for data manipulation, machine learning, and scripting.
  • Hands-on experience with machine learning frameworks, such as:
  • Scikit-learn
  • TensorFlow
  • PyTorch
  • Keras
  • Demonstrated experience with MLflow for:
  • Experiment tracking
  • Model management
  • Model deployment
  • Proven experience working with Microsoft Azure cloud services, specifically:
  • Azure Machine Learning
  • Azure Databricks
  • Related compute and storage services
  • Solid experience with Databricks for:
  • Data processing
  • ETL pipelines
  • ML model development
  • Strong understanding of MLOps principles and practices, including:
  • CI/CD for ML
  • Model versioning
  • Model monitoring
  • Model retraining
  • Experience with containerization and orchestration technologies, including:
  • Docker
  • Kubernetes (especially AKS)
  • Familiarity with SQL and data warehousing concepts.
  • Experience working with large datasets and distributed computing frameworks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice-to-Have Skills

  • Experience with other cloud platforms (AWS or GCP).
  • Knowledge of big data technologies such as Apache Spark.
  • Experience with Azure DevOps for CI/CD pipelines.
  • Familiarity with real-time inference patterns and streaming data.
  • Understanding of Responsible AI principles, including fairness, explainability, and privacy.

Certifications (Preferred)

  • Microsoft Certified: Azure AI Engineer Associate
  • Databricks Certified Machine Learning Associate (or higher) 
Inteliment Technologies

Posted by Ariba Khan
Pune
6 - 10 yrs
Upto ₹30L / yr (Varies)
Python
ETL
Snowflake
Spark
PowerBI
+3 more

About the company:

At Inteliment, we help organizations turn data into powerful decisions. With two decades of proven expertise, we work with global customers to solve complex business problems using advanced data and analytics solutions. Our ACE – Analytical Centre of Excellence brings together some of the best minds in data engineering, analytics, and AI to build next-generation decision intelligence platforms. If you are passionate about data engineering, modern data platforms, and solving real business problems, this role will give you the opportunity to work on global enterprise data ecosystems. 


About the role

We are seeking a highly skilled Data Architect with strong hands-on expertise in Data Engineering and/or Data Visualization tools, having 6+ years of experience in the pure Data Analytics domain. The ideal candidate will be responsible for architecting scalable data solutions, guiding technical teams, and ensuring robust data pipelines, analytics frameworks, and visualization ecosystems aligned with business objectives. 


Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
  • 6+ years of hands-on experience in Data Analytics domain.
  • Strong experience in designing enterprise data solutions.
  • Proven experience in handling large-scale data systems.
  • Experience in client-facing roles is preferred. 
  • Certifications in a related field will be an added advantage.

Technical Skills

✔ Data Engineering Stack

  • Python / PySpark / SQL
  • ETL Tools (e.g., Informatica, Talend, SSIS, or equivalent)
  • Cloud Platforms (AWS / Azure / GCP)
  • Data Warehousing (Snowflake, Redshift, BigQuery, etc.)
  • Big Data Technologies (Spark, Hadoop – preferred)

✔ Visualization & BI Tools (At least one advanced tool mandatory)

  • Power BI
  • Tableau
  • Qlik
  • Looker or equivalent

✔ Database Technologies

  • SQL (MySQL, PostgreSQL, SQL Server, Oracle)
  • NoSQL (MongoDB, Cassandra – preferred)

✔ Additional Preferred Skills

  • Data Modeling (Star/Snowflake schema)
  • API integrations
  • CI/CD for data pipelines
  • Version control (Git)
  • Agile methodology exposure

Soft Skills

  • Leadership: Strong leadership and mentoring capabilities to guide technical teams.
  • Communication: Excellent communication skills for collaborating with cross-functional teams and stakeholders.
  • Problem-Solving: Analytical mindset with a keen attention to detail.
  • Adaptability: Ability to manage shifting priorities and requirements effectively.
  • Team Collaboration: Strong interpersonal skills for fostering a collaborative work environment.


Responsibilities:

✔ Solution Architecture & Design

  • Design end-to-end data architecture solutions including data ingestion, transformation, storage, and visualization.
  • Architect scalable and high-performance data pipelines.
  • Define best practices, standards, and governance frameworks for data analytics projects.

✔ Data Engineering

  • Build and optimize ETL/ELT pipelines.
  • Work with structured and unstructured datasets.
  • Design and implement data lakes, data warehouses, and modern data platforms.
  • Ensure data quality, integrity, and performance tuning.
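The data-quality responsibility above can be illustrated with a small completeness-and-uniqueness check over a batch of records. A toy sketch (field names are illustrative; real pipelines would typically express these checks in a framework such as Great Expectations or dbt tests rather than by hand):

```python
def quality_report(rows, key="id"):
    """Basic completeness and uniqueness checks for a batch of records."""
    keys = [r.get(key) for r in rows]
    return {
        "rows": len(rows),
        "missing_key": sum(k is None for k in keys),
        "duplicate_key": len(keys) - len(set(keys)),
    }

# Hypothetical batch of ingested records.
batch = [
    {"id": 1, "amount": 100},
    {"id": 2, "amount": 250},
    {"id": 2, "amount": 250},    # duplicate key
    {"id": None, "amount": 75},  # missing key
]

print(quality_report(batch))
# {'rows': 4, 'missing_key': 1, 'duplicate_key': 1}
```

Checks like these run as a gate between ingestion and the warehouse, so bad batches are quarantined before they reach dashboards.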

✔ Data Visualization & Analytics

  • Architect and implement enterprise-level dashboards and reporting solutions.
  • Define data models optimized for BI tools.
  • Guide teams in building intuitive, performance-driven visualizations.
  • Translate business requirements into scalable analytics solutions.

✔ Technical Leadership

  • Provide technical direction to data engineers, BI developers, and analysts.
  • Conduct code reviews and enforce architectural standards.
  • Collaborate with cross-functional teams including business stakeholders and delivery teams.
  • Mentor junior team members and drive capability building.

✔ Stakeholder Engagement

  • Participate in client discussions, solution presentations, and requirement workshops.
  • Provide effort estimations and solution proposals.
  • Act as a technical escalation point. 
Superclaims
Posted by Akshith Daithala
Hyderabad
1 - 3 yrs
₹5L - ₹7.5L / yr
Python
FastAPI
PostgreSQL
SQLAlchemy
LangGraph
+11 more

About Superclaims

Superclaims modernizes health insurance claims adjudication with intelligent automation. We help insurers and TPAs replace manual, document-heavy workflows with faster, more accurate decisions at scale.


Role: Python Backend Developer

We are looking for a Python Backend Developer who is excited to build AI-powered automation products in a fast-paced startup environment.


What you'll do

- Build and maintain scalable backend systems and APIs

- Develop intelligent data extraction pipelines using AI/ML

- Design and implement agentic workflows with LangGraph

- Design efficient database schemas and optimize queries in PostgreSQL

- Integrate and work with LLMs (OpenAI, Gemini, or similar)

- Collaborate with product, frontend, and data teams to deliver end-to-end features

- Write clean, tested, and well-documented code


Must-have skills

- Strong proficiency in Python and a modern web framework (FastAPI or similar)

- Experience with PostgreSQL and an ORM (SQLAlchemy preferred)

- Solid understanding of RESTful API design and best practices

- Hands-on experience or strong familiarity with LangGraph

- Experience working with LLMs (OpenAI, Gemini, or similar providers)

- Comfort with Git/version control and collaborative development workflows


Nice-to-have skills

- Experience with Docker and containerized deployments

- Knowledge of Redis for caching or background tasks

- Exposure to cloud platforms (GCP, AWS, or Azure)

- Experience with vector databases and retrieval-augmented generation

- Basic prompt engineering skills

- Experience with object storage (S3/MinIO)


What we're looking for

- 1+ years of Python backend development experience (open to exceptional freshers)

- Fast learner with genuine curiosity about AI/ML and automation

- Prior startup experience preferred

- Ownership mindset, bias for action, and comfort with ambiguity

- Ready to relocate to Hyderabad (work location)


How to apply

Please share:

- Your resume

- GitHub/Portfolio link

- A brief note on why you're interested in AI-powered automation and Superclaims

Verse
Posted by Ravi K
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹20L / yr
Python
FastAPI
PostgreSQL
Neo4J
LangGraph

Founding Engineer (Bangalore)


The problem:

Business enterprises overpay vendors on every batch of invoices, every month, because the data that would catch those overpayments lives in different systems. We are building an AI agent that processes invoices end-to-end, reasons across all the relevant sources, flags genuine discrepancies, and acts - without a human having to investigate each one.


What you will own

Everything engineering. Schema design to deployment to the 2am fix when something breaks in production. There is no tech lead above you. There is no platform team. There is the architecture, you, and the founders. Concretely, this means building:

  • A multi-stage agentic pipeline that takes a vendor invoice and produces a structured decision - fully autonomous for clear cases, escalating to human review for genuinely ambiguous ones. We use LangGraph, but if you've built equivalent systems with Temporal, Prefect, or custom state machines with LLM orchestration, that works
  • An LLM-powered extraction layer that handles real invoices - scanned PDFs, stamped documents, inconsistent layouts - and returns structured output
  • A graph data model that connects invoices to various sources and can traverse those relationships to detect discrepancies
  • ERP connectors, GST validation logic, and a write-back layer that closes the loop


What we need

  • Strong Python. Async FastAPI, clean service boundaries, tests that actually catch bugs. You have shipped Python backends that handled real production load
  • Solid Postgres. Complex queries, schema design, migrations without downtime, row-level security for multi-tenant data. pgvector is a plus - if not, you pick it up fast
  • LLM API experience in production. You have called an LLM API for something that real users depended on. You know about structured output, retry logic, cost management, prompt versioning. A side project counts if it was genuinely deployed
  • Comfort with graph data models. You understand when a graph is the right structure and when it is not. You do not need deep Neo4j production experience - you need to understand graph relationships conceptually and be willing to learn Cypher. It is a 2-day ramp for the right person
  • Working knowledge of deployment. Deployed and operated production workloads on GCP. Cloud Run, Cloud SQL, Cloud Storage, Redis — you're comfortable across the stack. If you've done it on AWS, the translation isn't hard, but GCP is where we are
  • You own things. Not "I contributed to" - you designed it, shipped it, and fixed it when it broke. That pattern needs to be visible in your history
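The "structured output, retry logic" requirement above can be sketched in a few lines. Here `call_llm` is a hypothetical stub standing in for a real OpenAI/Gemini call (its behaviour is contrived so the retry path is exercised); the validate-and-retry loop around it is the pattern being shown:

```python
import json

def call_llm(prompt, attempt):
    """Stub standing in for a real LLM API call (hypothetical behaviour)."""
    # First attempt returns malformed output; the retry returns valid JSON.
    if attempt == 0:
        return "Sure! Here is the data: {'vendor': 'Acme'}"
    return '{"vendor": "Acme", "amount": 1200, "currency": "INR"}'

def extract_invoice(prompt, max_attempts=3):
    """Call the model until it returns parseable JSON with required fields."""
    for attempt in range(max_attempts):
        raw = call_llm(prompt, attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # a real system would retry with a stricter prompt
        if {"vendor", "amount"} <= data.keys():
            return data
    raise ValueError("no structured output after retries")

print(extract_invoice("Extract fields from this invoice")["amount"])  # 1200
```

Production versions add schema validation, cost tracking, and prompt versioning on top of this loop, but the shape - call, parse, validate, retry - stays the same.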


Good to have, not mandatory

  • Built an agentic pipeline with multiple stages
  • Any fintech, P2P domain experience - even tangential
  • Worked at a startup with under 20 people
  • Has a GitHub, blog, or writeup that shows how you think about a hard technical problem


What you get

  • The hardest engineering problem you would have worked on. This is not CRUD with an LLM bolted on
  • Real ownership. First engineering hire. Your architectural decisions will be in this product five years from now
  • Equity that matters. ESOP - Open to discussion. We are pre-seed - this is a bet, not a guarantee. We will not pretend otherwise
  • No meetings tax. You work directly with the founders. The product is specified clearly. You know what you are building and why


Honest about stage: We do not have a production ready infra yet. We have a complete architecture specification and a working prototype. If you need the stability of an established engineering org, this is not the right moment. If you want to build something real from zero and own a meaningful piece of it, it is.


The founders

One of us has spent 20 years building revenue and operational engines at companies where there was no playbook - part of the pilot team that established the world's largest search company's direct sales operations in India, managed global operations for a global mobile advertising platform, scaled a B2C platform to become one of India’s leading edtech platforms and most recently worked on building an enterprise Agentic Voice AI platform. The other has spent 15 years taking AI from demo to production in domains where failure is expensive - voice, lending, and conversational systems across a Series D conversational AI company, a major telco, a Big 4, and a leading NBFC.


Two IIT/IIM alumni who have both watched AI work in enterprise, and know exactly what it takes to get it there. We are not building this product because it sounds interesting. We are building it because we have both sat across the table from CFOs who know they are losing margin and have no tool capable of doing anything about it.

-

-

Agency job
via Qubit Labs by Solomiia Kuzbyt
Remote only
4 - 15 yrs
$48K - $67.2K / yr
Blockchain
Go Programming (Golang)
Python
JavaScript
TypeScript
+1 more

About the Role

Join the Blockchain Backend Infrastructure team and play a key role in building and maintaining a leading blockchain management platform. You'll be responsible for building cutting-edge blockchain infrastructure while implementing high-throughput, real-time scalable software solutions.

As a Blockchain Engineer, you will be instrumental in the research and integration of blockchain technologies into the platform. Your responsibilities will include collaborating closely with foundations and developers to gain a deep understanding of blockchain protocols and on-chain projects, then applying that knowledge to implement new features within the platform.

You will focus equally on external protocol integration patterns and internal wallet infrastructure. This role serves as a technical bridge between raw on-chain capabilities and the wallet features delivered to customers.

What You'll Do

  • Implement modern backend applications and infrastructure in a microservices architecture, using the latest technologies and development practices.
  • Deep dive into the latest blockchain technology and become an expert in the fundamentals, protocols, and features of the chains we support.
  • Collaborate effectively with developers, engineers, and other roles while demonstrating strong independent problem-solving abilities.
  • Contribute to production reliability through on-call participation, incident response, and post-incident follow-ups.

What You'll Bring

  • 5+ years of backend development experience in modern languages (Go, Python, JavaScript/TypeScript).
  • 3+ years of hands-on blockchain development experience.
  • Experience working on high-scale distributed systems.
  • Understanding of microservices architecture and API design.
  • Knowledge of consensus mechanisms, cryptographic primitives, and distributed systems.
  • Strong problem-solving skills, attention to detail, and a collaborative mindset.

Preferred

  • Experience building blockchain solutions for enterprise or institutional use cases.
  • Understanding of security best practices for smart contracts and blockchain systems.
  • Demonstrated ability to apply AI tools in day-to-day development.
  • Understanding of MPC, multi-signature wallets, or other advanced cryptographic techniques.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Experience with Docker, Kubernetes, and Helm.
  • Location: EU preferred, or availability to travel to one of the dev hubs in Europe once per quarter.


Improving
Posted by Rohini Jadhav
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹35L / yr
Python
Kubernetes
Jenkins
CI/CD
Docker
+1 more

What are we looking for??

  1. You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  2. You are able to manage multi region clusters for disaster recovery.
  3. You have a good understanding of AWS stack.
  4. You have production-level experience with Kubernetes. 
  5. You are comfortable coding/programming and can do so whenever required. 
  6. You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
  7. You love automating things, even what seems impossible to automate - for example, one of our engineers used Ansible to set up their Ubuntu workstation and runs a playbook every time something has to be installed.
  8. You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics. Because you know that words are easy to talk about but there is a fair amount of work to build such a system in practice.
  9. You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
  10. You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  11. You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.

What you will be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions are continuously evolving in space but fundamentally you will be solving problems with simplest and scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open source projects, understand their capabilities, limitations and apply the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.
Searce Inc

Posted by Karthika Senthilkumar
Coimbatore
7 - 10 yrs
Best in industry
Data engineering
Python
SQL
Google Cloud Platform (GCP)

Who are we ?


Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.


The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.


Tech Superpowers


End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.


The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.


Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank". You prove your designs with cost-performance benchmarks, not just slideware.


AI-Ready Orchestrator: You engineer the bridge between structured data and unstructured/vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.


The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.


Experience & Relevance


Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.


Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.


AI-Native Workflow: You don't just build for AI, you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.


Architectural Portfolio: Evidence of leading 2-3 large-scale transformations, including platform migrations, data lakehouse builds, or real-time analytics architectures.


Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling, and ensure the squad adopts modern engineering standards like CI/CD for data.


Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.


The "Solver" Mindset: A track record of solving 'impossible' data problems, whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9% available data services.



Searce Inc

Posted by Vaivashhya VN
Coimbatore
7 - 10 yrs
Best in industry
Data engineering
Data migration
Datawarehousing
ETL
SQL
+6 more

Who are we ?


Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.


The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.


Tech Superpowers


End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.


The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.


Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank". You prove your designs with cost-performance benchmarks, not just slideware.


AI-Ready Orchestrator: You engineer the bridge between structured data and unstructured/vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.


The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.


Experience & Relevance


Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.


Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.


AI-Native Workflow: You don't just build for AI, you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.


Architectural Portfolio: Evidence of leading 2-3 large-scale transformations, including platform migrations, data lakehouse builds, or real-time analytics architectures.


Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.


The "Solver" Mindset: A track record of solving 'impossible' data problems, whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9% available data services.

Bengaluru (Bangalore)
2 - 4 yrs
₹21L - ₹28L / yr
Artificial Intelligence (AI)
Python

Strong Junior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 1.5+ years of experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)

Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)

Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.

Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)

Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems

Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)

Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)

Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem

Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)

Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be skipped if from top-notch product companies

Mandatory (Exclusion) - Avoid candidates who are only prompt engineers, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.

Talent Pro
Bengaluru (Bangalore)
4 - 7 yrs
₹37L - ₹48L / yr
Artificial Intelligence (AI)
Python

Strong Senior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production

Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems

Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects

Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines

Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases

Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)

Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation

Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects

Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations

Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking

Mandatory (Company) – Product companies / startups, preferably Series A to Series D

Mandatory (Note) - Candidate's overall experience should not be more than 7 Yrs

Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks

Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
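The RAG requirements above mention chunking: splitting documents into overlapping windows before embedding, so retrieval can return passages rather than whole files. A minimal character-based sketch (sizes are illustrative; production pipelines chunk by tokens and respect sentence boundaries):

```python
def chunk_text(text, size=40, overlap=10):
    """Split text into overlapping character windows (token-based in practice)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Retrieval Augmented Generation grounds model answers in source documents."
chunks = chunk_text(doc, size=40, overlap=10)

print(len(chunks))                        # 3
print(all(len(c) <= 40 for c in chunks))  # True
```

The overlap is what keeps a fact that straddles a chunk boundary retrievable: it appears whole in at least one window.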

Newmi Care
Posted by Parnika Sangwar
Gurugram
1 - 4 yrs
₹4L - ₹6.5L / yr
Python
Django

Company Description

Newmi Care is India's leading outpatient healthcare platform specializing in women's and child health. The services are delivered through an integrated digital platform, physical clinics, and outpatient department (OPD) solutions for corporate and insurance partners. Newmi Care is dedicated to empowering women with seamless and specialized healthcare solutions.


Role Description

This is a full-time on-site role for a Python Developer based in Gurugram. The Python Developer will design, develop, test, and maintain efficient back-end components, APIs, and systems that support the company's platform. This role requires a candidate with 2–4 years of hands-on project experience.


Qualifications

  • Proficiency in Back-End Web Development and comprehensive knowledge of Python programming.
  • Hands-on project experience of at least 2 years is mandatory.
  • Experience in Software Development with a strong understanding of Object-Oriented Programming (OOP) concepts and principles.
  • Experience with Django Framework is Mandatory.
  • Familiarity with working on Databases, including designing, querying, and optimizing database performance.
  • Strong problem-solving abilities and a keen eye for detail in coding and debugging processes.
  • Ability to work independently and collaboratively in an agile development environment.
  • Understanding of front-end technologies and their integration with back-end services is beneficial.
  • Bachelor's degree in Computer Science, Software Engineering, or a related technical field is preferred.
  • Immediate joiners or candidates with a notice period of 15–20 days will be preferred.


AI-powered content creation and automation platform

Agency job
via Uplers by Shrishti Singh
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹28L / yr
Python
NodeJS (Node.js)
TypeScript
Artificial Intelligence (AI)
Generative AI

Software Engineer

Onsite - HSR Bangalore

6 Days work from Office (Flexible working hours)


Product is a PowerPoint AI assistant used by consulting companies and Fortune 500 teams. A typical professional spends 1 to 3 hours creating one slide. With the product, they create a v1 of their entire deck in 10 minutes and make changes like “turn this table to a chart” in seconds, directly within PowerPoint.

In the next 2 years, our goal at company is to forever change the way business presentations are made.


Who are we?

  • small, strong team of 5
  • founders are CS graduates from IIT Kharagpur with a specialisation in AI
  • work 6 days a week from our office in HSR Layout in Bangalore
  • funded by Y Combinator and other amazing investors
  • used by consulting companies and Fortune 500 teams


Your responsibilities (in order)

  • Design, implement, test, and deploy full features
  • Design and implement a robust infrastructure to enable rapid development and automated testing
  • Look at usage data to iterate on features


What we’re looking for

  • Undergraduate or master's degree in Computer Science or equivalent
  • 2+ years of backend or DevOps software engineering experience
  • Experience with TypeScript (JavaScript) or Python


You’ll be a good fit if

  • You want to work on a product that can change the way a very large number of people work
  • The chaos of high growth and things breaking is exciting to you
  • You are a workaholic, looking to upskill faster than most people think is possible. This role is not a good fit for you if you’re looking to prioritise work-life balance.
  • You prefer working in-person with other smart people who are excited and passionate about what they’re building
  • You love solving very hard problems at a rapid pace. We discuss timelines in days or weeks, so you’ll constantly be expected to ship really high-quality work.



Perks

  • Comprehensive health insurance for you and dependents
  • Workstation enhancements
  • Subscriptions to AI tools such as Cursor, ChatGPT, etc.

(If there's anything else we can do to make your work more enjoyable, just ask)


If you are interested in proceeding, we would be happy to move your profile to the next stage of the evaluation process.

Kindly share the following details to help us take this forward :


  • Current CTC (Fixed + Variable):
  • Expected CTC:
  • Notice Period (If currently serving, please mention your Last Working Day)
  • Details of any active offers in hand (if applicable)
  • Expected/Available Date of Joining (if applicable)
  • Attach Updated CV:
  • Attach GitHub link / LeetCode link or other:
  • Current Location:
  • Preferred Location:
  • Reason for job change:
  • Reason for relocation (if applicable):
  • Are you comfortable with 6 days WFO (flexible working hours)? (Yes / No):

Blitzy

Posted by Bisman Gill
Pune
5+ yrs
Up to ₹50L / yr (varies)
Python
Kubernetes
Terraform
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

About the role

We are looking for talented Senior Backend Engineers (5+ years of experience) to join our team and take ownership of different parts of our stack. You will be working alongside a team of Engineers locally and directly with the U.S. Engineering team on all aspects of product/application development. You will leverage your experiences and abilities to inform decisions across product development and technology. You will help us build the foundation of our 2nd Headquarters in Pune: its culture, its processes, and its practices. There are a ton of interesting problems to solve, so come hungry. If your colleagues describe you as curious, driven, kind, and creative, you are a culture fit.

What Success Looks Like

  • You write, review and ship code in production. Your employer or client's success depends on the software you build
  • You use Generative AI tools on a daily basis to enhance the quality and efficacy of your software and non-software deliverables
  • You are a self-starter and enjoy working with minimal supervision
  • You evaluate and make technical architecture decisions with a long-term view, optimizing for speed, quality, and safety
  • You take pride in the product you create and the code that you write
  • Your team can rely on you to get them out of a sticky situation in production
  • You can work well on a team of sales executives, designers and engineers in an in-person environment
  • You are passionate about the enterprise software development lifecycle and feel strongly about improving it
  • You are a first principles engineer who exercises curiosity about the technologies you work with
  • You can learn quickly about technologies, software and code that you are not familiar with, often from rudimentary documentation
  • You take ownership of the code that you write, and you help the team operate with everything that you build, throughout its lifecycle
  • You communicate openly and solicit feedback on important decisions, keeping the team aligned on your rationale
  • You exercise an optimistic mindset and are willing to go the extra mile to make things work

Areas of Ownership

Our hiring process is designed for you to demonstrate a generalist set of capabilities, with a specialization in Backend Technologies.

Required Technical Experience (MUST HAVE):

  • Expertise in Python
  • Deep hands-on experience with Terraform
  • Proficiency in Kubernetes
  • Experience with cloud platforms (GCP strongly preferred, AWS/Azure acceptable)

Additional experience with some of the following:

  • Backend Frameworks and Technologies (Node.js, NuxtJS, Express.js)
  • Programming languages (JavaScript, TypeScript, Java, C++, Go)
  • RPCs (REST, gRPC or GraphQL)
  • Databases (SQL, NoSQL, Postgres, MongoDB, or Firebase)
  • CI/CD (Jenkins, CircleCI, GitLab or similar)
  • Source code versioning tools such as Git or Perforce
  • Microservices architecture

Ways to stand out

  • Familiarity with AI Platforms
  • Extensive experience with building enterprise-scale applications with >99% SLAs
  • Deep expertise across the full required stack: Python, Terraform, Kubernetes, and GCP

You'll Get...

  • Competitive Salary
  • Medical Insurance Benefits
  • Employer Provident Fund contributions with Gratuity after 5 years of service
  • Company-sponsored US onsite trips for high performers, based on business requirements
  • Potential international transfer support for top performers, based on business requirements
  • Technology (hardware, software, trainings, etc.) equipment and/or allowance
  • The opportunity to re-shape an entire industry
  • Beautiful office environment
  • Meal allowance and/or food provision on site

Culture

Who we are: Our Co-Founder and CTO is a serial Gen AI inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years. There, he was promoted 5 times in 6 years and was transferred to the NVIDIA headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he went on to attend Harvard for a dual Master's in Engineering and an MBA from HBS. Our other Co-Founder/CEO is a successful serial entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mindset, and believe in a low-ego, high-output approach.


Oil and Gas Industry (petroleum refinery)

Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹25L / yr
Python
MLOps
Machine Learning (ML)
API
CI/CD

🔹 Role: Python Engineer – Python & MLOps

📍 Location: Bellandur, Bangalore

🕐 Work Timings: 01:30 PM – 10:30 PM

🏢 Work Mode: Monday (WFH), Tuesday–Friday (WFO)

📅 Experience: 8-12 Years (Ideal: 8-10 Years)

🔹 Role Overview

This role focuses on building and maintaining a production-grade AI/ML platform. You will work on scalable Python systems, MLOps pipelines, APIs, and CI/CD workflows in an enterprise environment.

🔹 Key Responsibilities

✔ Develop production-grade Python applications using OOP principles

✔ Build and enhance MLOps pipelines (training, validation, deployment)

✔ Design and optimize REST APIs documented with OpenAPI/Swagger

✔ Implement async programming for high-performance systems

✔ Work on CI/CD pipelines (Azure Pipelines / GitHub Actions)

✔ Ensure clean, testable, and maintainable code (PyTest, TDD)
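
The async programming responsibility above usually means structuring I/O-bound work (API calls, DB queries) with `asyncio` so independent calls overlap instead of running back to back. A minimal sketch, with made-up task names and delays standing in for real I/O:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (HTTP request, DB query).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    start = time.perf_counter()
    # gather() runs the coroutines concurrently, so total time tracks
    # the slowest call (~0.1s) rather than the sum (~0.3s sequential).
    gathered = await asyncio.gather(
        fetch("model-a", 0.1),
        fetch("model-b", 0.1),
        fetch("model-c", 0.1),
    )
    elapsed = time.perf_counter() - start
    assert elapsed < 0.25  # concurrent, not sequential
    return gathered

results = asyncio.run(main())
print(results)  # ['model-a done', 'model-b done', 'model-c done']
```

The same pattern, with `async def` route handlers, is what frameworks like FastAPI rely on for high-throughput APIs.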

🔹 Required Skills

✔ Strong Python (OOP, modular design)

✔ MLOps & CI/CD pipeline experience

✔ REST API development

✔ Async programming (async/await, concurrency)

✔ Pandas / Polars & Scikit-learn

✔ JSON Schema–driven development

✔ Testing using PyTest

🔹 Nice to Have

➕ Azure ML SDK

➕ Pydantic

➕ Azure Cosmos DB

➕ Experience with large enterprise platforms

Vikgol
Madhuri D R
Posted by Madhuri D R
Remote only
3 - 6 yrs
₹8L - ₹15L / yr
Linux/Unix
TCP/IP
DNS
Voice Over IP (VoIP)
Amazon Web Services (AWS)

Job role: Systems Engineer (L2)

Location: Remote/Bengaluru

Experience: 3-6 years


About the Role:

We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.


Key Responsibilities:

  • Design, deploy, and maintain scalable cloud infrastructure (AWS/GCP/Azure).
  • Manage and optimize networking components: routers, switches, firewalls, load balancers.
  • Handle incident response: monitor systems, identify issues, resolve production problems.
  • Implement DevOps best practices: CI/CD pipelines, automation, containerization.
  • Collaborate with backend and product teams on system architecture.
  • Performance tuning: ensure high availability and reliability of the platform.
  • Security management: implement security protocols and compliance standards.


Required Skills:

Technical:

  • Linux/Unix administration — strong fundamentals
  • Networking — TCP/IP, DNS, BGP, VoIP protocols
  • Cloud platforms — AWS/GCP/Azure — minimum 2 years
  • DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
  • Monitoring tools — Grafana, Prometheus, Kibana, Datadog
  • Scripting — Python, Bash, Shell
  • Databases — MySQL, PostgreSQL, Redis


Soft skills:

  • Strong problem-solving under pressure
  • Good communication — English written and verbal
  • Team player — collaborative mindset


Good to Have:

  • Experience in telecom/CPaaS/cloud communications industry
  • Knowledge of VoIP, SIP, RTP protocols
  • AI/ML operations experience
  • CCNA/AWS certifications


Hashone Career
Madhavan I
Posted by Madhavan I
Coimbatore
10 - 15 yrs
₹20L - ₹38L / yr
Python
Artificial Intelligence (AI)
Machine Learning (ML)
Large Language Models (LLM) tuning

Job Description – AI Tech Lead

Location: Bengaluru

Experience: 10+ Years

Function: AI Center of Excellence (CoE)

Reporting To: Senior Vice President – CX / Head of AI CoE


We are seeking two highly experienced AI Tech Leads (AVP/DGM level) to drive the architecture, development, and delivery of large‑scale AI solutions spanning Predictive AI, GenAI, and Agentic AI across BPM, IT Services, Digital, Data Engineering, and Enterprise Transformation programs.

The role demands strong technical leadership, solution design capabilities, hands-on execution ownership, and the ability to lead multi-disciplinary teams to deliver scalable, production-grade AI systems.

2. Key Responsibilities

A. Solution Architecture & Strategy

  • Lead end‑to‑end solution architecture across Predictive AI, GenAI, Agentic AI, and enterprise data ecosystems.
  • Partner with business and technology teams to define AI strategy, technical roadmaps, and implementation frameworks.
  • Translate business goals into scalable AI architectures leveraging microservices, distributed systems, and modern AI toolchains.
  • Own architectural decisions on model design, data pipelines, deployment frameworks, MLOps stack, and scaling strategies.

B. Project Delivery & Execution Leadership

  • Drive the complete AI project lifecycle: Requirement Analysis → Architecture → Model Development → Engineering → Deployment → Monitoring.
  • Lead AI engineering teams in developing production‑grade ML/GenAI/Agentic solutions with high reliability and performance.
  • Establish and enforce engineering best practices, coding standards, DevOps/MLOps processes, and quality controls.
  • Manage multiple concurrent AI initiatives with strong governance, risk mitigation, and stakeholder communication.

C. Technical Hands-on Expertise

  • Architect and build complex AI systems involving:
      • Large Language Models (LLMs) & GenAI apps
      • Agentic workflows and autonomous task orchestration
      • Predictive modeling, forecasting, optimization, and statistical modeling
      • Knowledge graphs, vector databases, embeddings
      • Data engineering pipelines (ETL/ELT) and cloud-native architectures
  • Drive model evaluation, experimentation, benchmarking, A/B testing, and continuous improvements.

D. Team Leadership & Mentoring

  • Lead and mentor a team of AI engineers, data scientists, MLOps engineers and developers.
  • Build internal capabilities by establishing training, code reviews, reusable accelerators, and technical playbooks.
  • Actively collaborate with product managers, data engineering teams, CX strategy teams, and domain SMEs.

E. Stakeholder & Client Management

  • Act as a technology partner during client discussions, proposals, RFP responses, and solution demonstrations.
  • Communicate complex AI concepts to CXOs, business leaders, and non-technical stakeholders seamlessly.
  • Support pre-sales with solutioning, effort estimation, and technical presentations.


3. Required Skills

A. Technical Skills

  • Strong proficiency in Python, cloud platforms (Azure/AWS/GCP), and AI frameworks (TensorFlow, PyTorch, LangChain, LlamaIndex).
  • Hands-on experience building applications using:
      • LLMs, RAG, fine-tuning, prompt engineering
      • Autonomous AI agents & multi-agent systems
      • Predictive ML models (Regression, Classification, Clustering, NLP, CV)
  • Expertise in microservices architecture, API design, scalable deployments.
  • Strong command over SDLC, Agile methodologies, CI/CD, DevOps & MLOps.
  • Experience with data engineering tools: Spark, Databricks, Airflow, Kafka, SQL/NoSQL, and modern data lakehouse platforms.

B. Functional & Domain Skills

  • Experience working in BPM, Customer Experience, Digital Transformation, IT Services.
  • Ability to map AI use cases to business value: workflow optimization, automation, customer experience, operations, and analytics.

C. Leadership & Soft Skills

  • Strong team leadership and mentoring experience.
  • Excellent communication, client-facing abilities, and stakeholder management skills.
  • Strong decision-making, problem-solving, and delivery ownership.

4. Qualifications

  • Bachelor’s / Master’s in Computer Science, Engineering, Data Science, or related fields.
  • 10–15 years total experience with at least 5+ years leading AI/ML projects.
  • Demonstrated success delivering large-scale AI programs in enterprise environments.
  • Certifications in AI/ML, cloud, or architecture (preferred).


Nevis Software Solutions Pvt Ltd
Pune
3 - 5 yrs
₹7L - ₹12L / yr
Django
Python
RESTful APIs
Web API

About the Role

We are looking for an experienced Django Developer to join our on-site engineering team in Pune. This role involves building and scaling high-performance backend systems for our SaaS products. You will work closely with product, frontend, and DevOps teams to design robust APIs, optimize databases, and deliver production-grade solutions.

This is a hands-on role with ownership, technical depth, and real impact.


Key Responsibilities

  • Design, develop, test, and maintain scalable backend services using Django & Python
  • Architect and implement secure, high-performance RESTful APIs
  • Work extensively with PostgreSQL for schema design, query optimization, indexing, and performance tuning
  • Build and manage asynchronous workflows using Celery
  • Implement real-time features using Daphne, Redis, and WebSockets (ASGI stack)
  • Containerize applications using Docker; manage Docker Compose and environment setups
  • Collaborate with frontend developers, product managers, and designers for seamless delivery
  • Perform code reviews, mentor junior developers, and enforce best practices
  • Ensure application security, scalability, and reliability
  • Monitor system performance and handle debugging, logging, and error management
  • Maintain clear documentation for APIs, services, and deployment workflows
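
The "asynchronous workflows using Celery" bullet refers to the enqueue-and-return pattern: the web process hands slow work to a queue and a separate worker executes it. A stdlib-only sketch of that pattern (Celery itself adds a real broker, retries, and result backends; the task name and `delay` helper here are illustrative, loosely mirroring Celery's `task.delay()`):

```python
import queue
import threading

# In-memory stand-in for Celery's broker + worker: tasks are enqueued
# by the request-handling process and run out-of-band by a worker thread.
task_queue: "queue.Queue[tuple]" = queue.Queue()
results: dict = {}

def send_confirmation_email(order_id: str) -> int:
    # Placeholder for a slow side effect (email, report generation).
    return len(order_id)

def worker() -> None:
    while True:
        task_id, func, args = task_queue.get()
        try:
            results[task_id] = func(*args)
        finally:
            task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def delay(task_id: str, func, *args) -> None:
    # Analogue of Celery's task.delay(): enqueue and return immediately.
    task_queue.put((task_id, func, args))

delay("order-42", send_confirmation_email, "order-42")
task_queue.join()  # with Celery you would poll an AsyncResult instead
print(results)  # {'order-42': 8}
```

The request handler never blocks on the email; it enqueues and responds, which is exactly why the posting pairs Celery with Redis for high-throughput backends.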

Required Skills & Qualifications

  • 3-4 years of hands-on experience with Django & Python
  • Strong expertise in REST API design and backend architecture
  • Advanced knowledge of PostgreSQL (queries, indexing, optimization)
  • Solid experience with Celery for background tasks
  • Hands-on experience with Daphne, Redis, and WebSockets
  • Strong command over Docker & containerized deployments
  • Proficiency with Git/GitHub workflows, PR reviews, and basic CI
  • Excellent understanding of ORM concepts and database modeling
  • Strong problem-solving, debugging, and communication skills
  • Experience using AI/LLM tools to improve productivity is a plus

Nice-to-Have

  • Experience with cloud platforms (AWS / GCP / Azure)
  • Exposure to CI/CD pipelines and deployment automation
  • Familiarity with monitoring tools (Sentry, Prometheus, Grafana, etc.)
  • Basic frontend understanding (HTML, CSS, JavaScript)
  • Experience handling high-traffic systems and performance optimization
  • Exposure to Agile / Scrum environments

What We Offer

  • Competitive salary package
  • Opportunity to work on scalable SaaS and AI-driven platforms
  • Strong engineering culture with ownership and autonomy
  • On-site collaborative environment with fast decision-making
  • Learning, growth, and leadership opportunities
  • Challenging projects with end-to-end responsibility

Expectations & Deliverables

  • Production-ready, well-tested, and maintainable code
  • Proactive communication and ownership of deliverables
  • High-quality documentation and clean architecture practices
  • Adherence to security, compliance, and IP standards


CLOUDSUFI

Posted by Ayushi Dwivedi
Noida
6 - 12 yrs
₹35L - ₹45L / yr
Agentic AI
Large Language Models (LLM)
Natural Language Processing (NLP)
Python
Retrieval Augmented Generation (RAG)

About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Our Values 

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement 

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


Location: Noida, India (Hybrid) - 2 days from office

Position: Full-time

As a Senior Data Scientist, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.


Key Responsibilities

  • Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
  • Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
  • Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
  • Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
  • End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
  • Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
  • Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
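
The "AI-driven microservices using frameworks like Python FastAPI" bullet boils down to exposing a model behind a small JSON-over-HTTP contract. As a framework-free sketch of that request/response shape, here is a stdlib-only inference endpoint; the `/predict` route and the length-based "model" are invented purely for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Placeholder "model": classify by text length.
        text = payload.get("text", "")
        label = "long" if len(text) > 20 else "short"
        body = json.dumps({"label": label}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"text": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)  # {'label': 'short'}
server.shutdown()
```

FastAPI replaces the handler class with a typed `async def` route plus automatic validation and OpenAPI docs, but the JSON-in/JSON-out contract is the same.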


Required Skills and Qualifications

  • Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
  • 7+ years of professional experience in a Data Scientist, AI Engineer, or related role.
  • Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
  • Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
  • Proven experience in developing and deploying scalable systems on cloud platforms, particularly in GCP.
  • Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
  • Experience with containerization technologies, specifically Docker.
  • Solid understanding of software engineering principles and experience building APIs and microservices.


Preferred Qualifications

  • A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
  • Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
  • Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
  • Proven ability to lead technical teams and mentor other engineers.
  • Experience developing custom tools or packages for data science workflows.

 
