
50+ Remote Python Jobs in India

Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

OIP Insurtech

Posted by Katarina Vasic
Remote only
4 - 10 yrs
Best in industry
Terraform
Ansible
Windows Azure
Amazon Web Services (AWS)
Docker

Are you looking to explore what is possible in a collaborative and innovative work environment? Is your goal to work with a team of talented professionals who are keenly focused on solving complex business problems and supporting product innovation with technology?

If so, you might be our next Senior DevOps Engineer, where you will be involved in building out systems for our rapidly expanding team, enabling the whole group to operate more effectively and iterate at top speed in an open, collaborative environment.


Systems management and automation are the name of the game here – in development, testing, staging, and production. If you are passionate about building innovative and complex software, are comfortable in an “all hands on deck” environment, and can thrive in an Insurtech culture, we want to meet you!


What You’ll Be Doing

  • Collaborate with our development team to support ongoing projects, manage software releases, and ensure smooth updates to QA and production environments, including configuration updates and all release requirements.
  • Work closely with team members to enhance the company’s engineering tools, systems, procedures, and data security practices.
  • Provide technical guidance and educate team members and coworkers on development and operations.
  • Monitor metrics and develop ways to improve them.
  • Conduct systems tests for security, performance, and availability.

What We’re Looking For:

  • You have a working knowledge of technologies like:
  • Docker, Kubernetes
  • Jenkins (or Bamboo, TeamCity, Travis CI, BuildMaster)
  • Ansible, Terraform, Pulumi
  • Python
  • You have experience with GitHub Actions, Version Control, CI/CD/CT, shell scripting, and database change management
  • You have working experience with Microsoft Azure, Amazon AWS, Google Cloud, or other cloud providers
  • You have experience with cloud security management
  • You can configure assigned applications and troubleshoot most configuration issues without assistance
  • You can write accurate, concise, and formatted documentation that can be reused and read by others
  • You know scripting tools like bash or PowerShell
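
Much of the day-to-day automation this role describes comes down to small, defensive scripts. A minimal Python sketch of a retry-with-backoff helper of the kind deployment tooling tends to need (all names here are illustrative, not from any specific stack):

```python
import time

def retry(fn, attempts=3, base_delay=0.1, exceptions=(Exception,)):
    """Call fn, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Example: a flaky operation that succeeds on the third try
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # → ok
```

The same shape (bounded retries, growing delay, re-raise on exhaustion) applies whether the wrapped call is a health check, a deploy step, or a cloud API request.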




Read more
MyOperator - VoiceTree Technologies

Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹12L - ₹20L / yr
Python
Django
FastAPI
Microservices
Large Language Models (LLM)

About Us:

MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain a scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture
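
One pattern the “clean, tested, and maintainable code with pytest” bullet usually implies: keep business logic in plain functions that a Django or FastAPI view merely wraps, so tests need no HTTP layer. A hedged sketch, with the function and its rules invented purely for illustration:

```python
def normalize_phone(raw: str, default_country: str = "+91") -> str:
    """Normalize an Indian phone number to E.164-style form.
    Toy rules for illustration only."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if raw.strip().startswith("+"):
        return "+" + digits
    if len(digits) == 10:               # bare national number
        return default_country + digits
    raise ValueError(f"cannot normalize {raw!r}")

# pytest discovers functions named test_*; run with `pytest thisfile.py`
def test_normalize_phone():
    assert normalize_phone("98765 43210") == "+919876543210"
    assert normalize_phone("+91 98765-43210") == "+919876543210"

test_normalize_phone()
```

Because the function takes and returns plain values, the same test runs in CI without a database, a server, or mocking.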

Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT and Gemini)

Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform

Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one


Read more
Intineri infosol Pvt Ltd

Posted by Adil Saifi
Remote only
5 - 12 yrs
₹3L - ₹8L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
PyTorch

We are seeking a Senior Artificial Intelligence Engineer to join our team, contributing expertise in AI technologies across dynamic projects. This full-time, freelance, or remote position is available in Pune Division, Delhi, Noida, Gurgaon, Mumbai, Bengaluru, and Kolkata. Candidates should bring up to 10 years of professional experience and a strong foundation in developing and deploying advanced AI solutions in a fast-paced IT environment.


Qualifications and Skills

  • Machine Learning (Mandatory skill): Hands-on experience designing, building, and optimizing machine learning models for real-world applications is essential for this role.
  • Python (Mandatory skill): Exceptional proficiency in Python programming, with ability to write efficient, scalable, and maintainable code for AI projects.
  • Artificial Intelligence (AI) (Mandatory skill): Proven capability to implement, train, and deploy artificial intelligence systems across diverse domains and business scenarios.
  • PyTorch: Strong knowledge of PyTorch for creating, training, and fine-tuning deep learning models for various industrial use cases.
  • MLOps: Familiarity with MLOps practices to automate, monitor, and manage machine learning workflows, deployments, and model lifecycle effectively.
  • SQL: Adept at using SQL for extracting, analyzing, and managing large volumes of structured data within AI-related projects.
  • Excellent problem-solving skills and analytical thinking to identify, develop, and implement innovative AI-powered solutions to complex business problems.
  • Solid understanding of software development best practices, including version control, code documentation, and collaborative teamwork in cross-functional settings.


Roles and Responsibilities

  • Design, implement, and deploy scalable machine learning and artificial intelligence models and algorithms for a wide array of use cases.
  • Collaborate with product managers, data engineers, and other key stakeholders to define AI project requirements and deliver innovative outcomes.
  • Analyze large and complex datasets to extract valuable insights, train models, and continuously improve model performance and accuracy.
  • Adopt and implement best practices in MLOps for continuous integration, deployment, monitoring, and maintenance of AI solutions.
  • Work with PyTorch and related frameworks to build, experiment, and optimize deep learning models tailored to specific business challenges.
  • Communicate findings, progress, risks, and results succinctly to technical and non-technical stakeholders, guiding strategic decision-making.
  • Actively research emerging trends and advancements in AI, recommending and integrating relevant tools and methodologies into ongoing projects.
  • Provide technical mentorship and guidance to junior engineers, fostering an environment focused on continuous learning and growth.
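
The PyTorch training work described above always follows the same loop shape: forward pass, loss, backward pass, parameter update. Sketched here in plain Python with a hand-computed gradient so the structure is visible without the framework; in PyTorch the gradient would come from `loss.backward()` and the update from `optimizer.step()`:

```python
# Fit y = w*x to data with squared loss, mirroring a PyTorch loop's shape.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying true w is 2
w, lr = 0.0, 0.05

for epoch in range(200):
    grad = 0.0
    for x, y in data:
        pred = w * x                 # forward pass
        grad += 2 * (pred - y) * x   # d(loss)/dw; loss.backward() in PyTorch
    w -= lr * grad / len(data)       # optimizer.step()

print(round(w, 3))  # converges toward 2.0
```

Keeping this loop explicit (rather than hidden in a trainer abstraction) is what makes experiment-level tuning, custom losses, and fine-tuning tractable in real PyTorch projects.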


Read more
NeoGenCode Technologies Pvt Ltd

Posted by Akshay Patil
Remote only
5 - 10 yrs
₹5L - ₹12L / yr
Perl
Python
SQL Server
ADO
Git

Job Title : Perl Developer

Experience : 6+ Years

Engagement Type : C2C (Contract)

Location : Remote

Shift Timing : General Shift


Job Summary :

We are seeking an experienced Perl Developer with strong scripting and database expertise to support an application modernization initiative.

The role involves code conversion for compatibility between Sybase and MS SQL, ensuring performance, reliability, and maintainability of mission-critical systems.

You will work closely with the engineering team to enhance, migrate, and optimize codebases written primarily in Perl, with partial transitions toward Python for long-term sustainability.


Mandatory Skills :

Perl, Python, T-SQL, SQL Server, ADO, Git, Release Management, Monitoring Tools, Automation Tools, CI/CD, Sybase-to-MSSQL Code Conversion


Key Responsibilities :

  • Analyze and convert existing application code from Sybase to MS SQL for compatibility and optimization.
  • Maintain and enhance existing Perl scripts and applications.
  • Where feasible, refactor or rewrite legacy components into Python for improved scalability.
  • Collaborate with development and release teams to ensure seamless integration and deployment.
  • Follow established Git/ADO version control and release management practices.
  • (Optional) Contribute to monitoring, alerting, and automation improvements.
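
Sybase-to-MS SQL conversion of the kind described is largely mechanical dialect translation, and teams often script the bulk of it in Python. A toy sketch handling one well-known difference (Sybase’s `char_length()` versus T-SQL’s `LEN()`); a real converter would use a proper SQL parser and a much larger rule table, and every rule would need verification against the actual codebase:

```python
import re

# Minimal dialect-translation rules; a real migration needs many more,
# and a parser rather than regexes, to be safe on edge cases.
RULES = [
    (re.compile(r"\bchar_length\s*\(", re.IGNORECASE), "LEN("),
]

def sybase_to_tsql(sql: str) -> str:
    """Apply each rewrite rule in order to a Sybase SQL string."""
    for pattern, replacement in RULES:
        sql = pattern.sub(replacement, sql)
    return sql

print(sybase_to_tsql("SELECT char_length(name) FROM users"))
# → SELECT LEN(name) FROM users
```

The value of scripting the translation is repeatability: the same rule table can be re-run after every upstream change during the migration window.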

Required Skills :

  • Strong Perl development experience (primary requirement).
  • Proficiency in Python for code conversion and sustainability initiatives.
  • Hands-on experience with T-SQL / SQL Server for database interaction and optimization.
  • Familiarity with ADO/Git and standard release management workflows.

Nice to Have :

  • Experience with monitoring and alerting tools.
  • Familiarity with automation tools and CI/CD pipelines.
Read more
NeoGenCode Technologies Pvt Ltd

Posted by Ritika Verma
Remote only
6 - 8 yrs
₹12L - ₹15L / yr
Perl
Python
SQL Server
ADO
Git

Role: Perl Developer

Location: Remote

Experience: 6–8 years

Shift: General

Job Description

Primary Skills (Must Have):

  • Strong Perl development skills.
  • Good knowledge of Python and T-SQL / SQL Server to create compatible code.
  • Hands-on experience with ADO, Git, and release management practices.

Secondary Skills (Good to Have):

  • Familiarity with monitoring/alerting tools.
  • Exposure to automation tools.

Day-to-Day Responsibilities

  • Perform application code conversion for compatibility between Sybase and MS SQL.
  • Work on existing Perl-based codebase, ensuring maintainability and compatibility.
  • Convert code into Python where feasible (as part of the migration strategy).
  • Where Python conversion is not feasible, create compatible code in Perl.
  • Collaborate with the team on release management and version control (Git).


Read more
NeoGenCode Technologies Pvt Ltd

Posted by Ritika Verma
Remote only
9 - 15 yrs
₹35L - ₹42L / yr
Python
Java
Go Programming (Golang)
Ruby on Rails (ROR)
NodeJS (Node.js)

Lead Technical Consultant

Experience: 9-15 Years


This is a Backend-heavy Polyglot developer role - 80% Backend 20% Frontend

Backend 

  1. 1st Primary Language - Java, Python, Go, RoR, or Rust
  2. 2nd Primary Language - one of the above, or Node.js

The candidate should be experienced in at least 2 backend tech stacks.


Frontend 


  1. React or Angular 
  2. HTML, CSS


The interview process is rigorous and requires depth of experience. The candidate should be hands-on in both backend and frontend development (80/20).

The candidate should have experience with Unit testing, CI/CD, devops etc.

Good communication skills are a must-have.

Read more
NeoGenCode Technologies Pvt Ltd

Posted by Ritika Verma
Remote only
5 - 9 yrs
₹25L - ₹32L / yr
Python
Java
Go Programming (Golang)
NodeJS (Node.js)
Ruby on Rails (ROR)

Senior Technical Consultant (Polyglot)

Experience: 5-9 Years


This is a Backend-heavy Polyglot developer role - 80% Backend 20% Frontend

Backend 

  1. 1st Primary Language - Java, Python, Go, RoR, or Rust
  2. 2nd Primary Language - one of the above, or Node.js

The candidate should be experienced in at least 2 backend tech stacks.


Frontend 


  1. React or Angular 
  2. HTML, CSS


The interview process is rigorous and requires depth of experience. The candidate should be hands-on in both backend and frontend development (80/20).

The candidate should have experience with Unit testing, CI/CD, devops etc.

Good communication skills are a must-have.

Read more
Talent Pro

Posted by Mayank Choudhary
Remote only
6 - 10 yrs
₹40L - ₹50L / yr
Python
Django
Microservices
Kafka

Strong Software Engineering Profile

Mandatory (Experience 1): Must have 5+ years of experience using Python to design software solutions.

Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.

Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka

Mandatory (Skills 3): Must have Experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure

Mandatory (Company): Product company background; experience working in fintech or banking is a plus.

Mandatory (Education): B.Tech, dual-degree B.Tech+M.Tech, or Integrated MSc from IIT; or B.E/B.Tech from other premium institutes (NIT, MNNIT, VITS, BITS)

Preferred

Read more
MyOperator - VoiceTree Technologies

Posted by Vijay Muthu
Remote only
2 - 3 yrs
₹5L - ₹7L / yr
Grafana
Prometheus
Amazon Web Services (AWS)
DevOps
CI/CD

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


About the Role

We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, Opensearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.


Key Responsibilities

  • Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch.
  • Create and optimize dashboards to provide actionable insights into system and application performance.
  • Collaborate with development and operations teams to ensure high availability and reliability of services.
  • Proactively identify performance bottlenecks and drive improvements.
  • Continuously explore and adopt new monitoring/observability tools and best practices.


Required Skills & Qualifications

  • Minimum 2 years of experience in SRE, DevOps, or related roles.
  • Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
  • Proven experience in dashboard creation, visualization, and alerting setup.
  • Strong understanding of system monitoring, logging, and metrics collection.
  • Excellent problem-solving and troubleshooting skills.
  • Quick learner with a proactive attitude and adaptability to new technologies.


Good to Have (Optional)

  • Experience with AWS services beyond CloudWatch.
  • Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
  • Scripting knowledge (Python, Bash, or similar).


Why Join Us

At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.



Read more
Thinkgrid Labs

Posted by Eman Khan
Remote only
2 - 10 yrs
₹10L - ₹18L / yr
Microsoft Fabric
Fabric Mirroring
Python
Scala
Spark

Job Description


Who are you?

  • SQL & CDC Pro: Strong SQL Server/T-SQL; hands-on CDC or replication patterns for initial snapshot + incremental syncs, including delete handling.
  • Fabric Mirroring Practitioner: You’ve set up and tuned Fabric Mirroring to land data into OneLake/Lakehouse; comfortable with OneLake shortcuts and workspace/domain organisation.
  • Schema-Drift Aware: You detect, evolve, and communicate schema changes safely (contracts, tests, alerts), minimising downstream breakage.
  • High-Volume Ingestion Mindset: You design for throughput, resiliency, and backpressure—retries, idempotency, partitioning, and efficient file sizing.
  • Python/Scala/Spark Capable: You can build notebooks/ingestion frameworks for advanced scenarios and data quality checks.
  • Operationally Excellent: You add observability (logging/metrics/alerts), document runbooks, and partner well with platform, security, and analytics teams.
  • Data Security Conscious: You respect PII/PHI, apply least privilege, and align with RLS/CLS patterns and governance guardrails.


What you will be doing?

  • Stand Up Mirroring: Configure Fabric Mirroring from SQL Server (and other relational sources) into OneLake; tune schedules, snapshots, retention, and throughput.
  • Land to Bronze Cleanly: Define Lakehouse folder structures, naming/tagging conventions, and partitioning for fast, organised Bronze ingestion.
  • Handle Change at Scale: Implement CDC—including soft/hard deletes, late-arriving data, and backfills—using reliable watermarking and reconciliation checks.
  • Design Resilient Pipelines: Build ingestion with Fabric Data Factory and/or notebooks; add retries, dead-lettering, and circuit-breaker patterns for fault tolerance.
  • Manage Schema Drift: Automate drift detection and schema evolution; publish change notes and guardrails so downstream consumers aren’t surprised.
  • Performance & Cost Tuning: Optimise batch sizes, file counts, partitions, parallelism, and capacity usage to balance speed, reliability, and spend.
  • Observability & Quality: Instrument lineage, logs, metrics, and DQ tests (nulls, ranges, uniqueness); set up alerts and simple SLOs for ingestion health.
  • Collaboration & Documentation: Partner with the Fabric Platform Architect on domains, security, and workspaces; document pipelines, SLAs, and runbooks.


Must-have skills

  • SQL Server, T-SQL; CDC/replication fundamentals
  • Microsoft Fabric Mirroring; OneLake/Lakehouse; OneLake shortcuts
  • Schema drift detection/management and data contracts
  • Familiarity with large, complex relational databases
  • Python/Scala/Spark for ingestion and validation
  • Git-based workflow; basic CI/CD (Fabric deployment pipelines or Azure DevOps)

Benefits

  • 5 day work week
  • 100% remote setup with flexible work culture and international exposure
  • Opportunity to work on mission-critical healthcare projects impacting providers and patients globally
Read more
Wama Technology

Posted by Ariba Khan
Remote only
4 - 8 yrs
Up to ₹13L / yr (varies)
Python
Playwright
pytest

Note: The shift hours for this job are 4 PM – 1 AM IST


About The Role:

We are seeking a highly skilled and experienced QA Automation Engineer with over 5 years of experience in both automation and manual testing. The ideal candidate will possess strong expertise in Python, Playwright, PyTest, Pywinauto, and Java with Selenium, API testing with Rest Assured, and SQL. Experience in the mortgage domain, Azure DevOps, and desktop & web application testing is essential. The role requires working in evening shift timings (4 PM – 1 AM IST) to collaborate with global teams.


Key Responsibilities:

  • Design and develop automation test scripts using Python, Playwright, PywinAuto, and PyTest.
  • Design, develop, and maintain automation frameworks for desktop applications using Java with WinAppDriver and Selenium, and Python with Pywinauto.
  • Understand business requirements in the mortgage domain and prepare detailed test plans, test cases, and test scenarios.
  • Define automation strategy and identify test cases to automate for web, desktop, and API testing.
  • Perform manual testing for desktop, web, and API applications to validate functional and non-functional requirements.
  • Create and execute API automation scripts using Rest Assured for RESTful services validation.
  • Perform SQL queries to validate backend data and ensure data integrity in mortgage domain applications.
  • Use Azure DevOps for test case management, defect tracking, CI/CD pipeline execution, and test reporting.
  • Collaborate with DevOps and development teams to integrate automated tests within CI/CD pipelines.
  • Proficient in version control and collaborative development using Git.
  • Experience in managing test automation projects and dependencies using Maven.
  • Work closely with developers, BAs, and product owners to clarify requirements and provide early feedback.
  • Report and track defects with clear reproduction steps, logs, and screenshots until closure.
  • Apply mortgage domain knowledge to test scenarios for loan origination, servicing, payments, compliance, and default modules.
  • Ensure adherence to regulatory and compliance standards in mortgage-related applications.
  • Perform cross-browser testing and desktop compatibility testing for client-based applications.
  • Drive defect prevention by identifying gaps in requirements and suggesting improvements.
  • Ensure best practices in test automation - modularization, reusability, and maintainability.
  • Provide daily/weekly status reports on testing progress, defect metrics, and automation coverage.
  • Maintain documentation for automation frameworks, test cases, and domain-specific scenarios.
  • Experienced in working within Agile/Scrum development environments.
  • Proven ability to thrive in a fast-paced environment and consistently meet deadlines with minimal supervision.
  • Strong team player with excellent multitasking skills, capable of managing multiple priorities in a deadline-driven environment.
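
The “SQL queries to validate backend data” responsibility is often codified as pytest checks run against the database. A sketch using sqlite3 in place of the real SQL Server backend; the loan schema and rules are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE loans (loan_id TEXT PRIMARY KEY, principal REAL, status TEXT)"
)
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [("L1", 250000.0, "active"), ("L2", 120000.0, "closed")],
)

def test_no_negative_principal():
    bad = conn.execute(
        "SELECT COUNT(*) FROM loans WHERE principal < 0"
    ).fetchone()[0]
    assert bad == 0

def test_status_values_are_known():
    unknown = conn.execute(
        "SELECT COUNT(*) FROM loans "
        "WHERE status NOT IN ('active', 'closed', 'default')"
    ).fetchone()[0]
    assert unknown == 0

# pytest would collect these; they also run standalone:
test_no_negative_principal()
test_status_values_are_known()
```

Wiring such checks into the CI/CD pipeline turns ad-hoc backend validation into a repeatable regression gate.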


Key requirements:

  • 4-8 years of experience in Quality Assurance (manual and automation).
  • Strong proficiency in Python, Pywinauto, PyTest, Playwright
  • Hands-on experience with Rest Assured for API automation.
  • Expertise in SQL for backend testing and data validation.
  • Experience in mortgage domain applications (loan origination, servicing, compliance).
  • Knowledge of Azure DevOps for CI/CD, defect tracking, and test case management.
  • Proficiency in testing desktop and web applications.
  • Excellent collaboration and communication skills to work with cross-functional global teams.
  • Willingness to work in evening shift timings (4 PM – 1 AM IST).
Read more
NeoGenCode Technologies Pvt Ltd
Remote only
5 - 10 yrs
₹15L - ₹35L / yr
Java
Python
Go Programming (Golang)
React.js
Angular (2+)

Job Title : Senior Technical Consultant (Polyglot)

Experience Required : 5 to 10 Years

Location : Bengaluru / Chennai (Remote Available)

Positions : 2

Notice Period : Immediate to 1 Month


Role Overview :

We seek passionate polyglot developers (Java/Python/Go) who love solving complex problems and building elegant digital products.

You’ll work closely with clients and teams, applying Agile practices to deliver impactful digital experiences.


Mandatory Skills :

Strong in Java/Python/Go (any 2), with frontend experience in React/Angular, plus knowledge of HTML, CSS, CI/CD, Unit Testing, and DevOps.


Key Skills & Requirements :

Backend (80% Focus) :

  • Strong expertise in Java, Python, or Go (at least 2 backend stacks required).
  • Additional exposure to Node.js, Ruby on Rails, or Rust is a plus.
  • Hands-on experience in building scalable, high-performance backend systems.

Frontend (20% Focus) :

  • Proficiency in React or Angular
  • Solid knowledge of HTML, CSS, JavaScript

Other Must-Haves :

  • Strong understanding of unit testing, CI/CD pipelines, and DevOps practices.
  • Ability to write clean, testable, and maintainable code.
  • Excellent communication and client-facing skills.


Roles & Responsibilities :

  • Tackle technically challenging and mission-critical problems.
  • Collaborate with teams to design and implement pragmatic solutions.
  • Build prototypes and showcase products to clients.
  • Contribute to system design and architecture discussions.
  • Engage with the broader tech community through talks and conferences.

Interview Process :

  1. Technical Round (Online Assessment)
  2. Technical Round with Client (Code Pairing)
  3. System Design & Architecture (Build from Scratch)

✅ This is a backend-heavy polyglot developer role (80% backend, 20% frontend).

✅ The right candidate is hands-on, has multi-stack expertise, and thrives in solving complex technical challenges.

Read more
Snaphyr

Agency job via SnapHyr by Mukeshkumar Chauhan
Remote only
7 - 10 yrs
₹20L - ₹50L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)

🌍 We’re Hiring: Senior Field AI Engineer | Remote | Full-time


Are you passionate about pioneering enterprise AI solutions and shaping the future of agentic AI?

Do you thrive in strategic technical leadership roles where you bridge advanced AI engineering with enterprise business impact?


We’re looking for a Senior Field AI Engineer to serve as the technical architect and trusted advisor for enterprise AI initiatives. You’ll translate ambitious business visions into production-ready applied AI systems, implementing agentic AI solutions for large enterprises.


What You’ll Do:

🔹 Design and deliver custom agentic AI solutions for mid-to-large enterprises

🔹 Build and integrate intelligent agent systems using frameworks like LangChain, LangGraph, CrewAI

🔹 Develop advanced RAG pipelines and production-grade LLM solutions

🔹 Serve as the primary technical expert for enterprise accounts and build long-term customer relationships

🔹 Collaborate with Solutions Architects, Engineering, and Product teams to drive innovation

🔹 Represent technical capabilities at industry conferences and client reviews
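
At its core, the RAG pipeline mentioned above is: embed the query, rank stored chunks by similarity, and prepend the winners to the LLM prompt. A toy Python sketch with a word-overlap score standing in for a real embedding model; in production the store would be a vector database such as Pinecone or FAISS, and the scoring would be cosine similarity over learned embeddings:

```python
def score(query: str, doc: str) -> float:
    """Toy relevance: fraction of query words present in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "Invoices are payable within 30 days of issue.",
    "Support tickets are answered within one business day.",
]
context = retrieve("when are invoices payable", docs)
prompt = f"Answer using this context:\n{context[0]}\n\nQ: when are invoices payable"
print(context[0])  # → Invoices are payable within 30 days of issue.
```

Everything else in a production pipeline (chunking, reranking, grounding checks) hangs off this retrieve-then-prompt skeleton.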


What We’re Looking For:

✔️ 7+ years of experience in AI/ML engineering with production deployment expertise

✔️ Deep expertise in agentic AI frameworks and multi-agent system design

✔️ Advanced Python programming and scalable backend service development

✔️ Hands-on experience with LLM platforms (GPT, Gemini, Claude) and prompt engineering

✔️ Experience with vector databases (Pinecone, Weaviate, FAISS) and modern ML infrastructure

✔️ Cloud platform expertise (AWS, Azure, GCP) and MLOps/CI-CD knowledge

✔️ Strategic thinker able to balance technical vision with hands-on delivery in fast-paced environments


Why Join Us:

  • Drive enterprise AI transformation for global clients
  • Work with a category-defining AI platform bridging agents and experts
  • High-impact, customer-facing role with strategic influence
  • Competitive benefits: medical, vision, dental insurance, 401(k)


Read more
Snaphyr

Agency job via SnapHyr by Mukeshkumar Chauhan
Remote only
5 - 10 yrs
₹20L - ₹50L / yr
Machine Learning (ML)
Python
Generative AI
Large Language Models (LLM)
Customer Support

🌍 We’re Hiring: Customer Facing Data Scientist (CFDS) | Remote | Full-time


Are you passionate about applied data science and enjoy partnering directly with enterprise customers to deliver measurable business impact?

Do you thrive in fast-paced, cross-functional environments and want to be the face of a cutting-edge AI platform?


We’re looking for a Customer Facing Data Scientist to design, develop, and deploy machine learning applications with our clients, helping them unlock the value of their data while building strong, trusted relationships.


What You’ll Do:

🔹 Collaborate directly with customers to understand their business challenges and design ML solutions

🔹 Manage end-to-end data science projects with a customer success mindset

🔹 Build long-term trusted relationships with enterprise stakeholders

🔹 Work across industries: Banking, Finance, Health, Retail, E-commerce, Oil & Gas, Marketing

🔹 Evangelize the platform, teach, enable, and support customers in building AI solutions

🔹 Collaborate internally with Data Science, Engineering, and Product teams to deliver robust solutions


What We’re Looking For:

✔️ 5–10 years of experience solving complex data problems using Machine Learning

✔️ Expert in ML modeling and Python coding

✔️ Excellent customer-facing communication and presentation skills

✔️ Experience in AI services or startup environments preferred

✔️ Domain expertise in Finance is a plus

✔️ Applied experience with Generative AI / LLM-based solutions is a plus


Why Join Us:

  • High-impact opportunity to shape a new business vertical
  • Work with next-gen AI technology to solve real enterprise problems
  • Backed by top-tier investors with experienced leadership
  • Recognized as a Top 5 Data Science & ML platform by G2
  • Comprehensive benefits: medical, vision, dental insurance, 401(k)


Read more
Snaphyr

Agency job via SnapHyr by Mukeshkumar Chauhan
Remote only
5 - 8 yrs
₹20L - ₹50L / yr
Python
Machine Learning (ML)
Large Language Models (LLM)
LangChain
Agentic Frameworks

🚀 We’re Hiring: Senior AI Engineer (Customer Facing) | Remote


Are you passionate about building and deploying enterprise-grade AI solutions?

Do you enjoy combining deep technical expertise with customer-facing problem-solving?

We’re looking for a Senior AI Engineer to design, deliver, and integrate cutting-edge AI/LLM applications for global enterprise clients.


What You’ll Do:

🔹 Partner directly with enterprise customers to understand business requirements & deliver AI solutions

🔹 Architect and integrate intelligent agent systems (LangChain, LangGraph, CrewAI)

🔹 Build LLM pipelines with RAG and client-specific knowledge

🔹 Collaborate with internal teams to ensure seamless integration

🔹 Champion engineering best practices with production-grade Python code


What We’re Looking For:

✔️ 5+ years of hands-on experience in AI/ML engineering or backend systems

✔️ Proven track record with LLMs & intelligent agents

✔️ Strong Python and backend expertise

✔️ Experience with vector databases (Pinecone, Weaviate, FAISS)

✔️ Excellent communication & customer-facing skills


Preferred: Cloud (AWS/Azure/GCP), MLOps knowledge, and startup/AI services experience.


🌍 Remote role | High-impact opportunity | Backed by strong leadership & growth


If this sounds like you (or someone in your network), let’s connect!

Read more
Talent Pro

Posted by Mayank Choudhary
Remote only
6 - 10 yrs
₹30L - ₹40L / yr
Python
Software engineering
Top tier-1 colleges only (IIT, NIT, BITS)

  • Strong Software Engineering Profile
  • Mandatory (Experience 1): Must have 5+ years of experience using Python to design software solutions.
  • Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
  • Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka
  • Mandatory (Skills 3): Must have Experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure
  • Mandatory (Company): Product company background is required; experience in fintech or banking is a plus.
  • Mandatory (Education): From IIT (B.Tech, dual-degree B.Tech+M.Tech, or integrated M.Sc), or B.E/B.Tech from other premier institutes (NIT, MNNIT, VITS, BITS)
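The event-driven architecture named in the Kafka requirement can be sketched without a broker: an in-memory topic where producers publish and multiple decoupled consumers react to the same event. Kafka adds partitioning, persistence, and consumer groups on top of this pattern; the bus, topic name, and handlers below are all illustrative.

```python
from collections import defaultdict

class InMemoryBus:
    # Toy stand-in for a Kafka-style topic: producers append events,
    # every subscribed consumer is invoked once per event.
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

bus = InMemoryBus()
ledger = []
# Two independent consumers react to the same event stream --
# the producer knows nothing about either of them.
bus.subscribe("payments", lambda e: ledger.append(("audit", e["id"])))
bus.subscribe("payments", lambda e: ledger.append(("notify", e["id"])))
bus.publish("payments", {"id": 42, "amount": 100})
```

The decoupling is the point: adding a third consumer (say, fraud scoring) requires no change to the producer, which is what makes the pattern attractive for microservices.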


Read more
Pitchline
Jeevanth R
Posted by Jeevanth R
Remote only
0 - 1 yrs
₹20000 - ₹30000 / mo
skill iconPython
skill iconDjango
skill iconReact.js
skill iconPostgreSQL
SQLAlchemy
+2 more

Full Stack Developer Internship – (Remote)

Pay: ₹20,000 - ₹30,000/month | Duration: 3 months


We’re building Pitchline - a voice-based conversational sales AI agent, an ambitious AI-powered web app aimed at solving meaningful problems in the B2B space. It’s currently at the MVP stage and has strong early demand. I’m looking for a hands-on Full Stack Developer Intern who can work closely with me to bring this to life.

You’ll be one of the first people to touch the codebase — shaping the foundation and solving problems across AI integration, backend APIs, and a bit of frontend work.


What you'll be doing

  • Build and maintain backend APIs (Python)
  • Integrate AI models (OpenAI, LangChain, Pinecone/Weaviate etc.) for core workflows
  • Design DB schemas and manage basic infra (Postgres)
  • Support frontend development (basic UI integration in React or similar)
  • Rapidly iterate on product ideas and ship working features
  • Collaborate closely with me (Founder) to shape the MVP


What we're looking for

  • Curiosity to learn new things: you don’t wait for someone to unblock you; you take full ownership and get things done yourself.
  • Strong foundation in backend development
  • Experience working with APIs, databases, and deploying backend services
  • Curious about or experienced in AI/LLM tools like OpenAI APIs, LangChain, vector databases, etc.
  • Comfortable working with version control and basic dev workflows
  • Worked on real projects or shipped anything end-to-end (Even if it is a personal project)


Why join us?

You’ll be a core member of the team. What we’re building is one of a kind and being a part of the successful founding team will fast track your personal and professional growth.

You’ll work on a real product with potential, witnessing in real time the impact your hard work brings.

You’ll get ownership and be part of early decisions.

You'll learn how design, tech, and business come together in early-stage product building

Flexible working hours

Opportunity to convert to full-time upon successful completion of the internship.


We’re a fast-paced team, working hard to deploy the MVP as soon as possible. If you're excited about AI, startup building, and getting your hands dirty with real development, then our company is a great place to grow.

Read more
NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Remote, Thiruvananthapuram, Kochi (Cochin)
11 - 12 yrs
₹15L - ₹30L / yr
skill iconJava
skill iconAngular (2+)
skill iconAmazon Web Services (AWS)
skill iconSpring Boot
skill iconPython
+8 more

About the Role

NeoGenCode Technologies is looking for a Senior Technical Architect with strong expertise in enterprise architecture, cloud, data engineering, and microservices. This is a critical role demanding leadership, client engagement, and architectural ownership in designing scalable, secure enterprise systems.


Key Responsibilities

  • Design scalable, secure, and high-performance enterprise software architectures.
  • Architect distributed, fault-tolerant systems using microservices and event-driven patterns.
  • Provide technical leadership and hands-on guidance to engineering teams.
  • Collaborate with clients, understand business needs, and translate them into architectural designs.
  • Evaluate, recommend, and implement modern tools, technologies, and processes.
  • Drive DevOps, CI/CD best practices, and application security.
  • Mentor engineers and participate in architecture reviews.


Must-Have Skills

  • Architecture: Enterprise Solutions, EAI, Design Patterns, Microservices (API & Event-driven)
  • Tech Stack: Java, Spring Boot, Python, Angular (recent 2+ years experience), MVC
  • Cloud Platforms: AWS, Azure, or Google Cloud
  • Client Handling: Strong experience with client-facing roles and delivery
  • Data: Data Modeling, RDBMS & NoSQL, Data Migration/Retention Strategies
  • Security: Familiarity with OWASP, PCI DSS, InfoSec principles


Good to Have

  • Experience with Mobile Technologies (native, hybrid, cross-platform)
  • Knowledge of tools like Enterprise Architect, TOGAF frameworks
  • DevOps tools, containerization (Docker), CI/CD
  • Experience in financial services / payments domain
  • Familiarity with BI/Analytics, AI/ML, Predictive Analytics
  • 3rd-party integrations (e.g., MuleSoft, BizTalk)
Read more
Grey Chain Technology

at Grey Chain Technology

5 candid answers
Ariba Khan
Posted by Ariba Khan
Remote only
5yrs+
Upto ₹25L / yr (Varies)
skill iconPython
Generative AI
Large Language Models (LLM) tuning
LangChain

We are seeking a Lead AI Engineer to spearhead development of advanced agentic workflows and large language model (LLM) systems. The ideal candidate should bring deep expertise in agent building, LLM evaluation/tracing, and prompt operations, combined with strong deployment experience at scale.


Key Responsibilities:

  • Design and build agentic workflows leveraging modern frameworks.
  • Develop robust LLM evaluation, tracing, and prompt ops pipelines.
  • Lead MCP (Model Context Protocol) based system integrations.
  • Deploy and scale AI/ML solutions with enterprise-grade reliability.
  • Collaborate with product and engineering teams to deliver high-impact solutions.

Required Skills & Experience:

  • Proficiency with LangChain, LangGraph, Pydantic, Crew.ai, and MCP.
  • Strong understanding of LLM architecture, behavior, and evaluation methods.
  • Hands-on expertise in MLOps, DevOps, and deploying AI/ML workloads at scale.
  • Experience leading AI projects from prototyping to production.
  • Strong foundation in prompt engineering, observability, and agent orchestration.
Read more
Remote only
3 - 6 yrs
₹15L - ₹30L / yr
Go-to-market strategy
Product Management
skill iconPython
Artificial Intelligence (AI)
Large Language Models (LLM) tuning

Product Manager – AI Go-to-Market (GTM)

You know that feeling when you see a product not just built, but truly adopted? That’s what this role is about.


We built something that turns the endless scroll of social video into business intelligence. The product is already strong — now it’s time to take it to market, scale adoption, and own how it reaches the world.

This isn’t another PM role. This is where you become the strategist who shapes how AI meets the market.


Who We Are

Our team is small, global, and moves fast. Not startup-fast. Not “we say we’re agile” fast. Actually fast.

We ship meaningful features in days, and now we need someone who can do the same on the market side.

The people here don’t just work with AI — they think in AI. They dream in Python. They know how to build.

What we’re missing is the person who knows how to launch, position, and scale.


What We Need

We need someone who’s lived the GTM life.

Someone who has:

  • Shaped go-to-market strategies across multiple channels.
  • Crafted positioning, messaging, and pricing that drove adoption.
  • Partnered with sales & marketing to accelerate pipeline and conversion.
  • Translated market insights into product direction.

You don’t need to be taught what adoption metrics look like. You don’t need to “grow into” GTM strategy. You already know these things so deeply that you can focus on the only thing that matters: getting AI into the hands of people who can’t live without it.


Who You Are

  • Strong IT/product foundation with a track record in launching AI/tech products.
  • An AI believer who sees how it will reshape industries.
  • Obsessed with channels, adoption, differentiation, and growth loops.
  • Someone who thrives where market execution meets product credibility.


The Reality

The work is beautifully challenging. The pace is intense in the best way. The problems are complex but worth solving. And the team? They care deeply.

If you get your energy from taking innovation to market and building adoption strategies that matter, you’ll probably fall in love with what we do here. If you prefer more structure or slower rhythms, this might not align — and that’s completely valid.


How to Apply

If you’re reading this thinking “finally, somewhere that gets it” — we’d love to see something you’ve launched. Not a resume. Not a cover letter. Show us proof of how you’ve taken a product to market.

We’re excited to see what you’ve built and have a real conversation about whether this could be magic for both of us.


Read more
Springer Capital
Remote only
0 - 0 yrs
₹3000 - ₹5000 / mo
skill iconPython

About the Role

We are looking for enthusiastic LLM Interns to join our team remotely for a 3-month internship. This role is ideal for students or graduates interested in AI, Natural Language Processing (NLP), and Large Language Models (LLMs). You will gain hands-on experience working with cutting-edge AI tools, prompt engineering, and model fine-tuning. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation.

Responsibilities

  • Research and experiment with LLMs, NLP techniques, and AI frameworks.
  • Design, test, and optimize prompts and workflows for different use cases.
  • Assist in fine-tuning or integrating LLMs for internal projects.
  • Evaluate model outputs and improve accuracy, efficiency, and reliability.
  • Collaborate with developers, data scientists, and product managers to implement AI-driven features.
  • Document experiments, results, and best practices.

Requirements

  • Strong interest in Artificial Intelligence, NLP, and Machine Learning.
  • Familiarity with Python and ML libraries (e.g., TensorFlow, PyTorch, Hugging Face Transformers).
  • Basic understanding of LLM concepts such as embeddings, fine-tuning, and inference.
  • Knowledge of APIs (OpenAI, Anthropic, Hugging Face, etc.) is a plus.
  • Good analytical and problem-solving skills.
  • Ability to work independently in a remote environment.

What You’ll Gain

  • Practical exposure to state-of-the-art AI tools and LLMs.
  • Mentorship from AI and software professionals.
  • Completion Certificate upon successful completion.
  • Letter of Recommendation based on performance.
  • Experience to showcase in research projects, academic work, or future AI roles.

Internship Details

  • Duration: 3 months
  • Location: Remote (Work from Home)
  • Stipend: Unpaid
  • Perks: Completion Certificate + Letter of Recommendation


Read more
Remote only
0 - 0 yrs
₹3000 - ₹5000 / mo
skill iconPython

About the Role

We are looking for enthusiastic Backend Developer Interns to join our team remotely for a 3-month internship. This is an excellent opportunity to gain hands-on experience in backend development, work on real projects, and expand your technical skills. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation.

Responsibilities

  • Assist in developing and maintaining backend services and APIs.
  • Work with databases (SQL/NoSQL) for data storage and retrieval.
  • Collaborate with frontend developers to integrate user-facing elements with server-side logic.
  • Write clean, efficient, and reusable code.
  • Debug, troubleshoot, and optimize backend performance.
  • Participate in code reviews and team discussions.
  • Document technical processes and contributions.

Requirements

  • Basic knowledge of at least one backend language/framework (Node.js, Python/Django/Flask, Java/Spring Boot, or similar).
  • Understanding of RESTful APIs and web services.
  • Familiarity with relational and/or NoSQL databases (MySQL, PostgreSQL, MongoDB, etc.).
  • Knowledge of Git/GitHub for version control.
  • Strong problem-solving and analytical skills.
  • Ability to work independently in a remote environment.
  • Good communication skills and eagerness to learn.

What You’ll Gain

  • Real-world experience in backend development.
  • Mentorship and exposure to industry practices.
  • Completion Certificate at the end of the internship.
  • Letter of Recommendation based on performance.
  • Opportunity to strengthen your portfolio with live projects.

Internship Details

  • Duration: 3 months
  • Location: Remote (Work from Home)
  • Stipend: Unpaid
  • Perks: Completion Certificate + Letter of Recommendation


Read more
Unilog

at Unilog

3 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore), Mysore
4yrs+
Upto ₹40L / yr (Varies)
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Generative AI
Large Language Models (LLM)
Google Vertex AI
+7 more

About Unilog

Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.

With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.


About the Role

We are looking for a highly motivated Innovation Engineer to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vertex AI, MCP, Vector Databases, AI Search, Agentic AI, and Automation.

As an Innovation Engineer, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.


Key Responsibilities

  1. Research & Implementation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, Vertex AI, MCP, and Automation.
  2. Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
  3. AI/ML Engineering: Design and develop AI/ML models, AI Agents, LLMs, and intelligent search capabilities leveraging vector embeddings.
  4. Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.
  5. Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
  6. Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
  7. Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.

Required Qualifications

  1. 4–10 years of experience in AI/ML, software engineering, or a related field.
  2. Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini, VertexAI, MCP.
  3. Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), MCP and agentic AI (Vertex, Autogen, ADK)
  4. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
  5. Strong problem-solving skills and a passion for innovation.
  6. Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.

Preferred Qualifications

  • Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
  • Knowledge of data pipelines, MLOps, and AI governance.
  • Contributions to open-source AI/ML projects or published research papers.

Why Join Us?

  • Work on cutting-edge AI/ML innovations with the CTO Office.
  • Influence the company’s future AI strategy and shape emerging technologies.
  • Competitive compensation, growth opportunities, and a culture of continuous learning.


About Our Benefits

Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, career development, advancement opportunities, annual merit, a generous time-off policy, and a flexible work environment.

Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.

Read more
Unilog

at Unilog

3 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore), Mysore
4yrs+
Upto ₹40L / yr (Varies)
Artificial Intelligence (AI)
AI Agents
Customer Support
Chatbot
Large Language Models (LLM)
+2 more

About Unilog

Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America.

With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.


About the Role

We are seeking a skilled AI CX Automation Engineer to design, build, and optimize AI-driven workflows for customer support automation.

This role will be responsible for enabling end-to-end L1 support automation using Freshworks AI (Freddy AI) and other conversational AI platforms. The ideal candidate will have strong technical expertise in conversational AI, workflow automation, and system integrations, working closely with Knowledge Managers and Customer Support teams to maximize case deflection and resolution efficiency.


Key Responsibilities

  • AI Workflow Design: Build, configure, and optimize conversational AI workflows for L1 customer query handling.
  • Automation Enablement: Implement automation using Freshworks AI (Freddy AI), chatbots, and orchestration tools to reduce manual support load.
  • Integration: Connect AI agents with knowledge bases, CRM, and ticketing systems to enable contextual and seamless responses.
  • Conversational Design: Craft natural, intuitive conversation flows for chatbots and virtual agents to improve customer experience.
  • Performance Optimization: Monitor AI agent performance, resolution rates, and continuously fine-tune workflows.
  • Cross-functional Collaboration: Partner with Knowledge Managers, Product Teams, and Support to ensure workflows align with up-to-date content and customer needs.
  • Scalability & Innovation: Explore emerging agentic AI capabilities and recommend enhancements to future-proof CX automation.

Required Qualifications

  • 4–10 years of experience in conversational AI, automation engineering, or customer support technology.
  • Hands-on expertise with Freshworks AI (Freddy AI) or similar AI-driven CX platforms (Zendesk AI, Salesforce Einstein, Dialogflow, Rasa, etc.).
  • Strong experience in workflow automation, chatbot configuration, and system integrations (APIs, Webhooks, RPA).
  • Familiarity with LLMs, intent recognition, and conversational AI frameworks.
  • Strong analytical skills to evaluate and optimize AI agent performance.
  • Excellent problem-solving, collaboration, and communication skills.

Preferred Qualifications

  • Experience with agentic AI frameworks and multi-turn conversational flows.
  • Knowledge of scripting or programming languages (Python, Node.js) for custom AI integrations.
  • Familiarity with vector search, RAG (Retrieval-Augmented Generation), and AI search to improve context-driven answers.
  • Exposure to SaaS, product-based companies, or enterprise-scale customer support operations.

Why Join Us?

  • Be at the forefront of AI-driven customer support automation.
  • Directly contribute to achieving up to 60% case resolution through AI workflows.
  • Collaborate with Knowledge Managers and AI engineers to build next-gen CX solutions.
  • Competitive compensation, benefits, and a culture of continuous learning.

Benefits

Unilog offers a competitive total rewards package including:

  • Competitive salary
  • Multiple medical, dental, and vision plans
  • Career development and advancement opportunities
  • Annual merit increases
  • Generous time-off policy
  • Flexible work environment

We are committed to fair hiring practices and advocate for diversity, equity, and inclusion.

Read more
GroundTruth

at GroundTruth

1 recruiter
Priti Singh
Posted by Priti Singh
Remote only
0 - 2 yrs
₹5L - ₹6L / yr
skill iconPython
Artificial Intelligence (AI)
LangChain
Scripting
Generative AI

GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts.


With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more.


Apply Here: https://apply.workable.com/groundtruth/j/FFCB55146B/apply/


Location: Remote (with hybrid touch points if needed)

Duration: 5–6 months (full-time commitment)

Stipend: ₹55,000 per month


About the Program

Are you passionate about Artificial Intelligence and eager to move from experiments to real-world impact?

GroundTruth India is launching a 6-month AI Fellowship/Internship designed for professionals who want to build deployable AI solutions in a business context. This is your chance to work with senior mentors, learn industry best practices, and contribute to projects that drive real value at scale.


What You’ll Do

  • Convert prototypes into production-ready AI solutions that solve real business problems.
  • Design, test, and deploy AI models and workflows with guidance from senior mentors.
  • Work with LLMs, automation frameworks, and data pipelines in enterprise-scale projects.
  • Participate in fast-paced sprints to deliver AI use cases end-to-end (idea → prototype → working solution).
  • Document your work to ensure scalability and reusability across teams.

What You’ll Gain

  • Hands-on experience in building and deploying real AI solutions, not just experiments.
  • Mentorship on scaling AI in enterprise environments.
  • Exposure to challenges around data, workflows, and business adoption.
  • A portfolio of impactful AI projects to boost your career.
  • Direct collaboration with US-based teams, giving you global exposure and insights into how AI is adopted in international markets.
  • Opportunity to be considered for future full time roles at GroundTruth based on performance.

Skills That Help

  • Familiarity with AI tools and frameworks (e.g., ChatGPT, Copilot, LangChain, Hugging Face, etc.).
  • Basic coding or scripting skills (Python preferred).
  • Curiosity, problem-solving mindset, and ability to adapt quickly.
  • Strong communication and teamwork skills.

Who Should Apply

This program is ideal for:

  1. Individuals ready to commit full-time for 5–6 months (extendable up to 12 months in a few cases) and eager to go from learning → impact.
  2. Those who have already:
  • Built prototypes, side projects, or hackathon solutions using AI tools.
  • Experimented with ChatGPT, Copilot, LangChain, Hugging Face, or Python ML libraries.


Apply Here: https://apply.workable.com/groundtruth/j/FFCB55146B/apply/

Read more
Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Remote, Pune
4 - 9 yrs
Best in industry
skill iconMachine Learning (ML)
skill iconPython
SQL
PySpark
XGBoost

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.


Data Axle Pune is pleased to have achieved certification as a Great Place to Work!


Roles & Responsibilities:

We are looking for a Data Scientist to join the Data Science Client Services team to continue our success in identifying high-quality target audiences that generate profitable marketing return for our clients. We seek experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Senior Data Scientist who will be responsible for:

  1. Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
  2. Design or enhance ML workflows for data ingestion, model design, model inference and scoring
  3. Oversight on team project execution and delivery
  4. If senior, establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
  5. Visualize and publish model performance results and insights to internal and external audiences


Qualifications:

  1. Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
  2. Minimum of 3.5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  3. Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
  4. Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
  5. Proficiency in Python and SQL required; PySpark/Spark experience a plus
  6. Ability to conduct a productive peer review and proper code structure in Github
  7. Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
  8. Working knowledge of modern CI/CD methods


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Read more
VDart
Abirami Ramdoss
Posted by Abirami Ramdoss
Remote only
4 - 30 yrs
₹15L - ₹32L / yr
Generative AI
NumPy
pandas
skill iconPython
Large Language Models (LLM)
+10 more

Role: GenAI Full Stack Engineer

Fulltime

Work Location: Remote


Job Description:


•             Strong proficiency in Python and familiarity with AI/Gen AI frameworks; experience with data manipulation libraries like Pandas and NumPy is crucial.

•             Specific expertise in implementing and managing large language models (LLMs) is a must.

•             FastAPI experience for API development

•             A solid grasp of software engineering principles, including version control (Git), continuous integration and continuous deployment (CI/CD) practices, and automated testing, is required. Experience in MLOps, ML engineering, and Data Science, with a proven track record of developing and maintaining AI solutions, is essential.

•             We also need proficiency in DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform, along with advanced CI/CD practices.

Read more
HelloRamp.ai

at HelloRamp.ai

2 candid answers
Eman Khan
Posted by Eman Khan
Remote, Bengaluru (Bangalore)
1 - 2 yrs
₹8L - ₹12L / yr
Computer Vision
NeRF
CUDA
TensorRT
ONNX
+10 more

About HelloRamp.ai

HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.


What You’ll Work On

  • Develop and optimize Computer Vision pipelines for large-scale media creation.
  • Implement NeRF-based systems for high-quality 3D reconstruction.
  • Build and fine-tune AI video generation models using state-of-the-art techniques.
  • Optimize AI inference for production (CUDA, TensorRT, ONNX).
  • Collaborate with the engineering team to integrate AI features into scalable cloud systems.
  • Research latest AI/CV advancements and bring them into production.


Skills & Experience

  • Strong Python programming skills.
  • Deep expertise in Computer Vision and Machine Learning.
  • Hands-on with PyTorch/TensorFlow.
  • Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
  • Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
  • GPU programming and optimization skills.


Nice to Have

  • Knowledge of Three.js or WebGL for rendering AI outputs on the web.
  • Familiarity with FFmpeg and video processing pipelines.
  • Experience in cloud-based GPU environments (AWS/GCP).


Why Join Us?

  • Work on cutting-edge AI and Computer Vision projects with global impact.
  • Join a small, high-ownership team where your work matters.
  • Opportunity to experiment, publish, and contribute to open-source.
  • Competitive pay and flexible work setup.
Read more
MatchMove

at MatchMove

2 candid answers
1 recruiter
Ariba Khan
Posted by Ariba Khan
Remote only
6yrs+
Upto ₹50L / yr (Varies)
skill iconPython
ETL
skill iconAmazon Web Services (AWS)

About Us

MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.


Are You The One?

As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business.


You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.


You will contribute to

  • Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
  • Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
  • Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
  • Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
  • Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
  • Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
  • Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
  • Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
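To make the OTF/S3 structuring above concrete: curated tables in S3 are typically laid out with Hive-style partition directories that Glue and Lake Formation can catalog. A minimal sketch; the bucket, table, and partition columns are invented for illustration:

```python
# Hive-style partition layout for a date-partitioned table in S3.
# A sketch only; "demo-lake", "transactions", and the partition columns
# are hypothetical names.
from datetime import date

def partition_path(bucket, table, event_date, region):
    return (f"s3://{bucket}/{table}/"
            f"event_date={event_date.isoformat()}/region={region}/")

p = partition_path("demo-lake", "transactions", date(2024, 3, 1), "sg")
```

Table formats like Iceberg track these partitions in metadata, which is what makes time-travel and partition pruning cheap.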

Responsibilities

  • Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
  • Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
  • Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation.
  • Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
  • Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
  • Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
  • Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
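One recurring piece of the DMS ingestion work above is noticing schema drift before it breaks downstream tables. A simplified sketch of the comparison, with hypothetical column sets:

```python
# Detect schema drift between the lake's expected schema and what a
# DMS-style replication task is now delivering. A minimal sketch;
# the column names and types below are hypothetical.

def schema_drift(expected, observed):
    """Both args map column name -> type; returns added/removed/retyped columns."""
    added = {c: t for c, t in observed.items() if c not in expected}
    removed = {c: t for c, t in expected.items() if c not in observed}
    retyped = {c: (expected[c], observed[c])
               for c in expected.keys() & observed.keys()
               if expected[c] != observed[c]}
    return added, removed, retyped

added, removed, retyped = schema_drift(
    {"id": "bigint", "amount": "decimal(18,2)", "status": "varchar"},
    {"id": "bigint", "amount": "varchar", "channel": "varchar"},
)
```

In practice a check like this would gate the pipeline and raise an alert rather than silently evolving the table.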

 Requirements

  • At least 6 years of experience in data engineering.
  • Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
  • Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
  • Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
  • Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
  • Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
  • Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
  • Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
  • Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

 Brownie Points

  • Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements.
  • Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
  • Familiarity with data contracts, data mesh patterns, and data as a product principles.
  • Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
  • Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
  • Experience building data platforms for ML/AI teams or integrating with model feature stores.

 MatchMove Culture:

  • We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
  • We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship.
  • We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences.
  • Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.

Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!

Palcode.ai

at Palcode.ai

2 candid answers
Team Palcode
Posted by Team Palcode
Remote only
0 - 1 yrs
₹3L - ₹4L / yr
React.js
Python

Job description

Job Title: React JS Developer - (Core Skill - React JS)

Core Skills -

  • Minimum of 6 months of experience in frontend development using React JS (excluding internships and training programs)

The Company

Our mission is to enable and empower engineering teams to build world-class solutions and release them faster than ever. We strongly believe engineers are the building blocks of a great society: we love building, and we love solving problems, especially the unique problems faced by the engineering community. Our DNA stems from Mohit’s passion for building technology products that solve problems with a big impact.

We are largely a bootstrapped company and aspire to become the next household name in the engineering community, leaving a signature on the great technological products being built across the globe.


Who would be your customers? We are going to shoulder the great responsibility of solving the minute problems that you, as an engineer, have faced over the years.


The Opportunity

An exciting opportunity to be part of a story, making an impact on how domain solutions will be built in the years to come.


Do you wish to lead the Engineering vertical, build your own fort, and shine through the journey of building the next-generation platform?


Blaash is looking to hire a problem solver with strong technical expertise in building large applications. You will build the next-generation AI solution for the Engineering Team - including backend and frontend.


Responsibility


Owning front-end and back-end development in all aspects. Proposing high-level design solutions and POCs to arrive at the right solution. Mentoring junior developers and interns.


What makes you an ideal team member we are eagerly waiting to meet:

  • Demonstrate strong architecture and design skills in building high-performance APIs using AWS services.
  • Design and implement highly scalable, interactive web applications with high usability
  • Collaborate with product teams to iterate ideas on data monetization products/services and define feasibility
  • Rapidly iterate on product ideas, build prototypes, and participate in proof of concepts
  • Collaborate with internal and external teams in troubleshooting functional and performance issues
  • Work with DevOps Engineers to integrate any new code into existing CI/CD pipelines
  • 6+ months of experience in frontend development using React JS
  • 6+ months of hands-on experience developing high-performance APIs and web applications


Salary -

  • The first 4 months of the Training and Probation period = 15K-18K INR per month
  • On successful completion of the Probation period = 20K - 25K INR per month
  • Annual Performance Bonus of INR 40,000
  • Equity Benefits for deserving candidates



How we will set you up for success: You will work closely with the founding team to understand what we are building.

You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well. You will be involved in a monthly one-on-one with the founders to discuss feedback


If you’ve made it this far, then maybe you’re interested in joining us to build something pivotal, carving a unique story for you - Get in touch with us, or apply now!

ConvertLens

at ConvertLens

2 candid answers
3 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Noida
2yrs+
Up to ₹16L / yr (varies)
Python
FastAPI
AI Agents
Artificial Intelligence (AI)
Large Language Models (LLM)
+9 more

🚀 About Us

At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.


We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.

We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.


🛠️ What You’ll Do

  • Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
  • Develop Agentic AI applications and workflows to drive automation and insights.
  • Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
  • Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
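A typical connector task from the list above is normalizing records from different third-party systems into one unified schema. A minimal sketch; both CRM payload shapes and all field names here are invented:

```python
# Unify contact records from two hypothetical third-party CRMs into one schema.
# "crm_a" and "crm_b" and their field names are made up for illustration.

def normalize(record, source):
    if source == "crm_a":
        return {"email": record["email_address"].lower(),
                "name": record["full_name"],
                "source": source}
    if source == "crm_b":
        return {"email": record["email"].lower(),
                "name": f'{record["first"]} {record["last"]}',
                "source": source}
    raise ValueError(f"unknown source: {source}")

unified = [
    normalize({"email_address": "A@Clinic.com", "full_name": "Dr. A"}, "crm_a"),
    normalize({"email": "b@clinic.com", "first": "Dr.", "last": "B"}, "crm_b"),
]
```

Keeping the per-source mapping in one place like this makes adding a third connector a local change.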


⚙️ What You Bring

  • 2+ years of hands-on experience in Python back-end development.
  • Strong understanding of REST API design and integration.
  • Proficiency with relational databases (MySQL/PostgreSQL).
  • Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
  • Experience maintaining production systems with a focus on reliability and scalability.
  • Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
  • Strong problem-solving skills and comfort working in a startup/product environment.
  • A builder mindset — scrappy, curious, and ready to ship.


💼 Perks & Culture

  • Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
  • A high-growth, high-impact environment where your code goes live fast.
  • Opportunities to work with Agentic AI and cutting-edge tech.
  • Small team, big vision — your work truly matters here.
Syrencloud

at Syrencloud

3 recruiters
Sudheer Kumar
Posted by Sudheer Kumar
Remote, Hyderabad
3 - 10 yrs
₹10L - ₹30L / yr
Microsoft Fabric
ADF
Synapse
Databricks
Microsoft Windows Azure
+5 more

We are seeking a highly skilled Fabric Data Engineer with strong expertise in Azure ecosystem to design, build, and maintain scalable data solutions. The ideal candidate will have hands-on experience with Microsoft Fabric, Databricks, Azure Data Factory, PySpark, SQL, and other Azure services to support advanced analytics and data-driven decision-making.


Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using Microsoft Fabric and Azure data services.
  • Implement data integration, transformation, and orchestration workflows with Azure Data Factory, Databricks, and PySpark.
  • Work with stakeholders to understand business requirements and translate them into robust data solutions.
  • Optimize performance and ensure data quality, reliability, and security across all layers.
  • Develop and maintain data models, metadata, and documentation to support analytics and reporting.
  • Collaborate with data scientists, analysts, and business teams to deliver insights-driven solutions.
  • Stay updated with emerging Azure and Fabric technologies to recommend best practices and innovations.
Required Skills & Experience

  • Proven experience as a Data Engineer with strong expertise in the Azure cloud ecosystem.

Hands-on experience with:

  • Microsoft Fabric
  • Azure Databricks
  • Azure Data Factory (ADF)
  • PySpark & Python
  • SQL (T-SQL/PL-SQL)
  • Solid understanding of data warehousing, ETL/ELT processes, and big data architectures.
  • Knowledge of data governance, security, and compliance within Azure.
  • Strong problem-solving, debugging, and performance tuning skills.
  • Excellent communication and collaboration abilities.

 

Preferred Qualifications

  • Microsoft Certified: Fabric Analytics Engineer Associate / Azure Data Engineer Associate.
  • Experience with Power BI, Delta Lake, and Lakehouse architecture.
  • Exposure to DevOps, CI/CD pipelines, and Git-based version control.
NeoGenCode Technologies Pvt Ltd
Remote only
3 - 6 yrs
₹7L - ₹12L / yr
Python
FastAPI
API Development
Third-party API Integration
Google Cloud Platform (GCP)
+2 more

Job Title: Backend Developer (Full Time)

Location: Remote

Interview: Virtual Interview

Experience Required: 3+ Years


Backend / API Development (About the Role)

  • Strong proficiency in Python (FastAPI) or Node.js (Express) (Python preferred).
  • Proven experience in designing, developing, and integrating APIs for production-grade applications.
  • Hands-on experience deploying to serverless platforms such as Cloudflare Workers, Firebase Functions, or Google Cloud Functions.
  • Solid understanding of Google Cloud backend services (Cloud Run, Cloud Functions, Secret Manager, IAM roles).
  • Expertise in API key and secrets management, ensuring compliance with security best practices.
  • Skilled in secure API development, including HTTPS, authentication/authorization, input validation, and rate limiting.
  • Track record of delivering scalable, high-quality backend systems through impactful projects in production environments.
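Of the security practices listed, rate limiting is often implemented as a token bucket. A single-process sketch with a pluggable clock so the behaviour is deterministic; the rates and capacity are illustrative:

```python
# Token-bucket rate limiter of the kind placed in front of public APIs.
# A sketch under simplified assumptions: single process, injectable clock.
import time

class TokenBucket:
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.now = now
        self.last = now()

    def allow(self):
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic demo with a frozen clock: capacity 2, no refill between calls.
clock = iter([0.0, 0.0, 0.0, 0.0]).__next__
bucket = TokenBucket(rate=1.0, capacity=2, now=clock)
results = [bucket.allow(), bucket.allow(), bucket.allow()]
```

In production the bucket state would live in a shared store such as Redis, keyed per client or API key.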


aurusai

at aurusai

3 candid answers
Uday Ayyagari
Posted by Uday Ayyagari
Remote only
5 - 10 yrs
₹1L - ₹2L / yr
Python
React.js
React Native
Google Cloud Platform (GCP)
SQL Azure
+1 more

About Us

We are building the next generation of AI-powered products and platforms that redefine how businesses digitize, automate, and scale. Our flagship solutions span eCommerce, financial services, and enterprise automation, with an emerging focus on commercializing cutting-edge AI services across Grok, OpenAI, and the Azure Cloud ecosystem.

Role Overview

We are seeking a highly skilled Full-Stack Developer with a strong foundation in e-commerce product development and deep expertise in backend engineering using Python. The ideal candidate is passionate about designing scalable systems, has hands-on experience with cloud-native architectures, and is eager to drive the commercialization of AI-driven services and platforms.

Key Responsibilities

  • Design, build, and scale full-stack applications with a strong emphasis on backend services (Python, Django/FastAPI/Flask).
  • Lead development of eCommerce features including product catalogs, payments, order management, and personalized customer experiences.
  • Integrate and operationalize AI services across Grok, OpenAI APIs, and Azure AI services to deliver intelligent workflows and user experiences.
  • Build and maintain secure, scalable APIs and data pipelines for real-time analytics and automation.
  • Collaborate with product, design, and AI research teams to bring experimental features into production.
  • Ensure systems are cloud-ready (Azure preferred) with CI/CD, containerization (Docker/Kubernetes), and strong monitoring practices.
  • Contribute to frontend development (React, Angular, or Vue) to deliver seamless, responsive, and intuitive user experiences.
  • Champion best practices in coding, testing, DevOps, and Responsible AI integration.
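For checkout work like the above, money math is usually done with decimal arithmetic rather than binary floats. A minimal sketch; the prices and tax rate are illustrative:

```python
# Order-total calculation for an eCommerce checkout, using Decimal to avoid
# binary floating-point rounding errors. Prices and tax rate are made up.
from decimal import Decimal, ROUND_HALF_UP

def order_total(items, tax_rate):
    """items: list of (unit_price_string, quantity) pairs."""
    subtotal = sum(Decimal(price) * qty for price, qty in items)
    tax = (subtotal * Decimal(tax_rate)).quantize(Decimal("0.01"), ROUND_HALF_UP)
    return subtotal + tax

total = order_total([("19.99", 2), ("5.00", 1)], "0.08")
```

Constructing `Decimal` from strings (not floats) is the detail that keeps cart totals exact to the cent.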

Required Skills & Experience

  • 5+ years of professional full-stack development experience.
  • Proven track record in eCommerce product development (payments, cart, checkout, multi-tenant stores).
  • Strong backend expertise in Python (Django, FastAPI, Flask).
  • Experience with cloud services (Azure preferred; AWS/GCP is a plus).
  • Hands-on with AI/ML integration using APIs like OpenAI, Grok, Azure Cognitive Services.
  • Solid understanding of databases (SQL & NoSQL), caching, and API design.
  • Familiarity with frontend frameworks such as React, Angular, or Vue.
  • Experience with DevOps practices: GitHub/GitLab, CI/CD, Docker, Kubernetes.
  • Strong problem-solving skills, adaptability, and a product-first mindset.

Nice to Have

  • Knowledge of vector databases, RAG pipelines, and LLM fine-tuning.
  • Experience in scalable SaaS architectures and subscription platforms.
  • Familiarity with C2PA, identity security, or compliance-driven development.

What We Offer

  • Opportunity to shape the commercialization of AI-driven products in fast-growing markets.
  • A high-impact role with autonomy and visibility.
  • Competitive compensation, equity opportunities, and growth into leadership roles.
  • Collaborative environment working with seasoned entrepreneurs, AI researchers, and cloud architects.
impressai
Chaithannya K
Posted by Chaithannya K
Remote only
2 - 7 yrs
₹4L - ₹10L / yr
Python
Django

Software engineers are the lifeblood of impress.ai. They build the software that powers our platform, the dashboard that recruiters around the world use, and all the other cool things we build and release. We are looking to expand our team with highly skilled backend engineers. As backend engineers, you don’t just build backend APIs and architect databases, you help bring to production the AI prototypes our Analytics/Data team builds, and you ensure that the cloud infrastructure, DevOps, and CI/CD processes that keep us ticking are optimal. 


The Job:

The ideal candidate should have a few years of experience under the belt and have the technical skill, competencies, and maturity necessary to independently execute projects with minimal supervision. They should also have the ability to architect engineering solutions that require minimal input from senior software engineers in order to satisfy the business requirements.

At impress.ai our mission is to make hiring fairer for all applicants. We combine I/O Psychology with AI to create an application screening process that gives an opportunity to all candidates to undergo a structured interview. impress.ai has consciously used it to ensure that people were chosen based on their talent, knowledge, and capabilities as opposed to their gender, race, or name.


Responsibilities:

  • Execute full software development life cycle (SDLC)
  • Write well-designed, testable code
  • Produce specifications and determine operational feasibility
  • Build and integrate new software components into the Impress Platform
  • Develop software verification plans and quality assurance procedures
  • Document and maintain software functionality
  • Troubleshoot, debug, and upgrade existing systems
  • Deploy programs and evaluate user feedback
  • Comply with project plans and industry standards
  • Develop flowcharts, layouts, and documentation to identify requirements and solutions


You Bring to the Table:

  • Proven work experience as a Software Engineer or Software Developer
  • Proficiency in software engineering tools
  • The ability to develop software in the Django framework (Python) is necessary for the role.
  • Excellent knowledge of relational databases, SQL, and ORM technologies
  • Ability to document requirements and specifications
  • A BSc degree in Computer Science, Engineering, or a relevant field is preferred but not necessary.


Our Benefits:

  • Work with cutting-edge technologies like Machine Learning, AI, and NLP and learn from the experts in their fields in a fast-growing international SaaS startup. As a young business, we have a strong culture of learning and development. Join our discussions, brown bag sessions, and research-oriented sessions.
  • A work environment where you are given the freedom to develop to your full potential and become a trusted member of the team.
  • Opportunity to contribute to the success of a fast-growing, market-leading product.
  • Work is important, and so is your personal well-being. The work culture at impress.ai is designed to ensure a healthy balance between the two.

Diversity and Inclusion are more than just words for us. We are committed to providing a respectful, safe, and inclusive workplace. Diversity at impress.ai means fostering a workplace in which individual differences are recognized, appreciated, and respected in ways that fully develop and utilize each person’s talents and strengths. We pride ourselves on working with the best and we know our company runs on the hard work and dedication of our talented team members. Besides having employee-friendly policies and benefit schemes, impress.ai assures unbiased pay purely based on performance.

Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Remote only
3 - 7 yrs
₹8L - ₹20L / yr
Google Cloud Platform (GCP)
ETL
Python
Big Data
SQL
+4 more

Must have skills:

1. GCP - GCS, PubSub, Dataflow or DataProc, Bigquery, Airflow/Composer, Python(preferred)/Java

2. ETL on GCP Cloud - Build pipelines (Python/Java) + Scripting, Best Practices, Challenges

3. Knowledge of batch and streaming data ingestion; building end-to-end data pipelines on GCP

4. Knowledge of Databases (SQL, NoSQL), On-Premise and On-Cloud, SQL vs No SQL, Types of No-SQL DB (At least 2 databases)

5. Data Warehouse concepts - Beginner to Intermediate level


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Ability to design and implement Dimensional and Fact tables.

● Identify and implement data transformation/cleansing requirements.

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems into the Enterprise Data Warehouse.

● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.

● Design, develop, and maintain ETL workflows and mappings using the appropriate data load technique.

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform into reporting & analytics.

● Define and document the use of BI through user experience/use cases, prototypes, testing, and deployment of BI solutions.

● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; and continuously validate reports and dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.
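The dimensional-modelling work above includes assigning surrogate keys while loading dimension tables. A simplified in-memory sketch; the table and column names are illustrative:

```python
# Assign surrogate keys to new dimension members during a star-schema load.
# A sketch only; a real pipeline would do this against the warehouse, and
# "customer_id" / the sample records are hypothetical.

def upsert_dimension(dim, records, natural_key):
    """dim maps natural key -> surrogate id; returns the newly assigned rows."""
    assigned = {}
    for rec in records:
        nk = rec[natural_key]
        if nk not in dim:
            dim[nk] = len(dim) + 1      # next surrogate key
            assigned[nk] = dim[nk]
    return assigned

dim_customer = {"C001": 1}              # existing dimension content
new_rows = upsert_dimension(
    dim_customer,
    [{"customer_id": "C001"}, {"customer_id": "C002"}, {"customer_id": "C002"}],
    "customer_id",
)
```

Fact tables then reference the surrogate id rather than the source system's natural key.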

Certa

at Certa

1 video
4 recruiters
Vibhavari Muppavaram
Posted by Vibhavari Muppavaram
Remote only
2 - 5 yrs
₹8L - ₹15L / yr
Manual testing
Test Automation (QA)
API QA
Python
Java
+2 more

About Certa

 Certa is a leading innovator in the no-code SaaS workflow space, powering the full lifecycle for suppliers, partners, and third parties. From onboarding and risk assessment to contract management and ongoing monitoring, Certa enables businesses with automation, collaborative workflows, and continuously updated insights. Join us in our mission to revolutionize third-party management!


What You'll Do

  • Partner closely with Customer Success Managers to understand client workflows, identify quality gaps, and ensure smooth solution delivery.
  • Design, implement, and execute both manual and automated tests for client-facing workflows across our web platform.
  • Write robust and maintainable test scripts using Python (Selenium) to validate workflows, integrations, and configurations.
  • Own test planning for client-specific features, including writing clear test cases and sanity scenarios — even in the absence of detailed specs.
  • Collaborate with Product, Engineering, and Customer Success teams to reproduce client-reported issues, root-cause them, and verify fixes.
  • Lead or contribute to exploratory testing, regression cycles, and release validations before client rollouts.
  • Proactively identify gaps, edge cases, and risks in client implementations and communicate them effectively to stakeholders.
  • Act as a client-facing QA representative during solution validation, ensuring confidence in delivery and post-deployment success.
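Automated workflow tests like those above are commonly organized as page objects around Selenium. The sketch below swaps the real driver for a tiny stub so the structure is runnable without a browser; the locators, URL, and login flow are all invented:

```python
# Page-object structure for a workflow sanity test. Selenium's WebDriver is
# replaced by a stub that records actions, so only the shape is shown here.

class StubDriver:
    def __init__(self):
        self.actions = []
    def get(self, url):
        self.actions.append(("get", url))
    def find(self, locator):
        self.actions.append(("find", locator))
        return self                      # allows chained element calls
    def click(self):
        self.actions.append(("click",))
    def send_keys(self, text):
        self.actions.append(("keys", text))

class LoginPage:
    URL = "https://app.example.test/login"   # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.get(self.URL)
        self.driver.find("#user").send_keys(user)
        self.driver.find("#pass").send_keys(password)
        self.driver.find("#submit").click()

driver = StubDriver()
LoginPage(driver).login("qa", "secret")
```

With a real Selenium driver substituted in, the test body stays identical, which is the point of the pattern.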


What We're Looking For

  • 3–5 years of experience in Software QA (manual + automation), ideally with exposure to client-facing or Customer Success workflows.
  • Strong understanding of core QA principles (priority vs. severity, regression vs. sanity, risk-based testing).
  • Hands-on experience writing automation test scripts with Python (Selenium).
  • Experience with modern automation frameworks (Playwright + TypeScript or equivalent) is a strong plus.
  • Familiarity with SaaS workflows, integrations, or APIs (JSON, REST, etc.).
  • Excellent communication skills — able to interface directly with clients, translate feedback into testable requirements, and clearly articulate risks/solutions.
  • Proactive, curious, and comfortable navigating ambiguity when working on client-specific use cases.


Good to Have

  • Previous experience in a Customer Success, Professional Services, or client-facing QA role.
  • Experience with CI/CD pipelines, BDD/TDD frameworks, and test data management.
  • Knowledge of security testing, performance testing, or accessibility testing.
  • Familiarity with no-code platforms or workflow automation tools.


Perks

  • Best-in-class compensation
  • Fully remote work
  • Flexible schedules
  • Engineering-first, high-ownership culture
  • Massive learning and growth opportunities
  • Paid vacation, comprehensive health coverage, maternity leave
  • Yearly offsite, quarterly hacker house
  • Workstation setup allowance
  • Latest tech tools and hardware
  • A collaborative and high-trust team environment
Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Remote only
4 - 6 yrs
₹2L - ₹6L / yr
Docker
Jenkins
Kubernetes
DevOps
Python
+4 more

NOTE: This is a contractual role for a period of 3–6 months.


Responsibilities:

● Set up and maintain CI/CD pipelines across services and environments

● Monitor system health and set up alerts/logs for performance & errors

● Work closely with backend/frontend teams to improve deployment velocity

● Manage cloud environments (staging, production) with cost and reliability in mind

● Ensure secure access, role policies, and audit logging

● Contribute to internal tooling, CLI automation, and dev workflow improvements


Must-Haves:

● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering

● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)

● Proficiency in writing scripts (Bash, Python) for automation

● Good understanding of system monitoring, logs, and alerting

● Strong debugging skills, ownership mindset, and clear documentation habits

● Experience with infra monitoring tools such as Grafana dashboards

Hashone Careers

at Hashone Careers

2 candid answers
Madhavan I
Posted by Madhavan I
Remote only
7 - 11 yrs
₹7L - ₹15L / yr
Python
Django

Job Summary:  

We are seeking a skilled and motivated Python Django Developer with experience in building high-performance APIs using Django Ninja.  

The ideal candidate will have a strong background in web development, API design, and backend systems.  

Experience with IX-API and internet exchange operations is a plus.  

You will play a key role in developing scalable, secure, and efficient backend services that support our network infrastructure and service delivery.  


Key Responsibilities:  

Design, develop, and maintain backend services using Python Django and Django Ninja.

Build and document RESTful APIs for internal and external integrations.

Collaborate with frontend, DevOps, and network engineering teams to deliver end-to-end solutions. Ensure API implementations follow industry standards and best practices.  

Optimize performance and scalability of backend systems.  

Troubleshoot and resolve issues related to API functionality and performance.  

Participate in code reviews, testing, and deployment processes.  

Maintain clear and comprehensive documentation for APIs and workflows.  


Required Skills & Qualifications:  

Proven experience with Python Django and Django Ninja for API development.  

Strong understanding of RESTful API design, JSON, and OpenAPI specifications.

Proficiency in Python and familiarity with asynchronous programming.  

Experience with CI/CD tools (e.g., Jenkins, GitLab CI).  

Knowledge of relational databases (e.g., PostgreSQL, MySQL).  

Familiarity with version control systems (e.g., Git).  

Excellent problem-solving and communication skills.  


Preferred Qualifications:  

Experience with IX-API development and integration.  

Understanding of internet exchange operations and BGP routing.  

Exposure to network automation tools (e.g., Ansible, Terraform).  

Familiarity with containerization and orchestration tools (Docker, Kubernetes).  

Experience with cloud platforms (AWS, Azure, GCP).  

Contributions to open-source projects or community involvement

Remote only
5 - 10 yrs
₹8L - ₹18L / yr
ServiceNow
ServiceNow platform security
Threat modeling
STRIDE
PASTA
+16 more

Job Title : Senior Security Engineer – ServiceNow Security & Threat Modelling

Experience : 6+ Years

Location : Remote

Type : Contract


Job Summary :

We’re looking for a Senior Security Engineer to strengthen our ServiceNow ecosystem with security-by-design.

You will lead threat modelling, perform security design reviews, embed security in SDLC, and ensure risks are mitigated across applications and integrations.


Mandatory Skills :

ServiceNow platform security, threat modelling (STRIDE/PASTA), SAST/DAST (Checkmarx/Veracode/Burp/ZAP), API security, OAuth/SAML/SSO, secure CI/CD, JavaScript/Python.


Key Responsibilities :

  • Drive threat modelling, design reviews, and risk assessments.
  • Implement & manage SAST/DAST, secure CI/CD pipelines, and automated scans.
  • Collaborate with Dev/DevOps teams to instill secure coding practices.
  • Document findings, conduct vendor reviews & support ITIL-driven processes.
  • Mentor teams on modern security tools and emerging threats.

Required Skills :

  • Strong expertise in ServiceNow platform security & architecture.
  • Hands-on with threat modelling (STRIDE/PASTA, attack trees).
  • Experience using SAST/DAST tools (Checkmarx, Veracode, Burp Suite, OWASP ZAP).
  • Proficiency in API & Web Security, OAuth, SAML, SSO.
  • Knowledge of secure CI/CD pipelines & automation.
  • Strong coding skills in JavaScript/Python.
  • Excellent troubleshooting & analytical abilities for distributed systems.

Nice-to-Have :

  • Certifications: CISSP, CEH, OSCP, CSSLP, ServiceNow Specialist.
  • Knowledge of cloud security (AWS/Azure/GCP) & compliance frameworks (ISO, SOC2, GDPR).
  • Experience with incident response, forensics, SIEM tools.
Read more
Market Dynamics
Shantanu Johri
Posted by Shantanu Johri
Remote only
2 - 3.5 yrs
₹9.5L - ₹13.5L / yr
skill iconPython
skill iconPostgreSQL
Celery

**Company Overview**

We are a VC-backed fintech startup developing an innovative online trading platform. As we scale, we're seeking a skilled Backend Engineer with expertise in Python to join our growing team and help build a robust, scalable infrastructure for our cutting-edge trading application.

We're based out of the UK and have our engineering team in India.


**Job Description**

We are looking for a backend developer who specialises in Python. Your role will focus on developing and maintaining the server-side logic, optimising performance, and ensuring seamless integration with the frontend. You’ll work closely with the engineering and product teams to deliver a high-quality, secure, and scalable platform.


**Responsibilities**

1. Develop and maintain server-side logic using Python

2. Design and implement APIs for seamless integration with frontend components

3. Optimise backend performance and scalability for high traffic and large data loads

4. Build and maintain databases, ensuring security, data integrity, and optimal performance

5. Collaborate with frontend engineers to ensure smooth integration between backend and frontend systems

6. Troubleshoot, debug, and optimise backend infrastructure

7. Implement data protection, security protocols, and authentication mechanisms (e.g., JWT)

8. Maintain and enhance real-time communication systems using WebSockets or similar protocols


**Required Skills**

1. Strong proficiency in Python and related technologies, knowledge of databases and SQL, and experience with web frameworks like Django or FastAPI

2. Strong analytical, troubleshooting, and problem-solving skills

3. Good understanding of OOP, task brokering services (e.g., Celery), queues, and Redis

4. Familiarity with RESTful API design and integration

5. Strong understanding of database management (e.g., PostgreSQL, Redis) and caching strategies

6. Familiarity with modern authentication and authorization mechanisms (e.g., JWT, OAuth)

7. Proficiency in working with cloud hosting services (AWS, Google Cloud, etc.)

8. Experience with containerization and orchestration tools (Docker, Kubernetes)

9. Knowledge of real-time communication protocols (e.g., WebSockets, TCP, SSE)

10. Strong understanding of security best practices for server-side applications

11. Experience with version control (Git) and CI/CD pipelines

12. Minimum of 2-3.5 years of experience building scalable backend systems


**Perks**

1. Work From Anywhere Flexibility

2. Unlimited Leaves policy*

3. Competitive salary and unlimited growth opportunities

4. Insights into how HFT is done using cutting-edge technology


If you’re passionate about building scalable, high-performance backend systems and want to be part of a cutting-edge fintech startup, we’d love to hear from you!

Read more
Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Remote, Pune
5 - 10 yrs
Best in industry
skill iconC++
skill iconDocker
skill iconKubernetes
ECS
skill iconAmazon Web Services (AWS)
+12 more

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.


Responsibilities:

  • Design, build, and maintain high-performance systems using modern C++
  • Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
  • Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
  • Deploy and manage applications on cloud platforms (preferably AWS)
  • Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
  • Participate in system design, peer code reviews, and performance tuning


Qualifications:

  • 5+ years of software development experience, with strong command over modern C++
  • Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
  • Experience with Apache Airflow for orchestrating complex data workflows.
  • Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
  • Proven expertise in designing and managing robust data pipelines & Microservices.
  • Proficient in building and scaling data processing workflows and working with structured/unstructured data
  • Strong hands-on experience with Docker, container orchestration, and microservices architecture
  • Working knowledge of CI/CD practices, Git, and build/release tools
  • Strong problem-solving, debugging, and cross-functional collaboration skills


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Read more
Palcode.ai

at Palcode.ai

2 candid answers
Team Palcode
Posted by Team Palcode
Remote only
1.5 - 2 yrs
₹4L - ₹6L / yr
skill iconPython
FastAPI
skill iconAmazon Web Services (AWS)
skill iconReact.js

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform cuts pre-construction workflows from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.



Why Palcode.ai


Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data

High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday

Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions

Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment

Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions

Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software


Your Role:

  • Design and build our core AI services and APIs using Python
  • Create reliable, scalable backend systems that handle complex data
  • Help set up cloud infrastructure and deployment pipelines
  • Collaborate with our AI team to integrate machine learning models
  • Write clean, tested, production-ready code
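The first two bullets amount to exposing well-behaved JSON endpoints. The stack here is FastAPI, but the request/response shape can be sketched with nothing beyond the standard library (the `/health` route and `Handler` name are illustrative only):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Illustrative health-check endpoint returning JSON
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# To run standalone: HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

In FastAPI the same route is a one-line decorated function, with request validation and OpenAPI docs generated for you, which is why frameworks like it are the practical choice.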


You'll fit right in if:

  • You have 1 year of hands-on Python development experience
  • You're comfortable with full-stack development and cloud services
  • You write clean, maintainable code and follow good engineering practices
  • You're curious about AI/ML and eager to learn new technologies
  • You enjoy fast-paced startup environments and take ownership of your work


How we will set you up for success

  • You will work closely with the Founding team to understand what we are building.
  • You will be given comprehensive training on the tech stack, with the option of virtual training as well.
  • You will be involved in a monthly one-on-one with the founders to discuss feedback
  • A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, which gives you access to experienced people to discuss and brainstorm ideas with.
  • You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.


Location: Bangalore, Remote


Compensation: Competitive salary + Meaningful equity


If you get excited about solving hard problems that have real-world impact, we should talk.


All the best!!


Read more
Proximity Works

at Proximity Works

1 video
5 recruiters
Eman Khan
Posted by Eman Khan
Remote only
4 - 8 yrs
₹35L - ₹70L / yr
Natural Language Processing (NLP)
Generative AI
GenAI
Large Language Models (LLM)
Open-source LLMs
+19 more

We’re seeking a highly skilled, execution-focused Data Scientist with 4–10 years of experience to join our team. This role demands hands-on expertise in fine-tuning and deploying generative AI models across image, video, and audio domains — with a special focus on lip-sync, character consistency, and automated quality evaluation frameworks. You will be expected to run rapid experiments, test architectural variations, and deliver working model iterations quickly in a high-velocity R&D environment.


Responsibilities

  • Run end-to-end fine-tuning experiments on state-of-the-art models (Flux family, LoRA, diffusion-based architectures, context-based composition).
  • Develop and optimize generative AI models for audio generation and lip-sync, ensuring high fidelity and natural delivery.
  • Extend current language models to support regional Indian languages beyond US/UK English for audio and content generation.
  • Enable emotional delivery in generated audio (shouting, crying, whispering) to enhance realism.
  • Integrate and synchronize background scores seamlessly with generated video content.
  • Work towards achieving video quality standards comparable to Veo3/Sora.
  • Ensure consistency in scenes and character generation across multiple outputs.
  • Design and implement automated, objective evaluation frameworks to replace subjective human review of cover images, video frames, and audio clips, with scoring systems that standardize quality checks before editorial approval.
  • Run comparative tests across multiple model architectures to evaluate trade-offs in quality, speed, and efficiency.
  • Drive initiatives independently, showcasing high agency and accountability. Utilize strong first-principle thinking to tackle complex challenges.
  • Apply a research-first approach with rapid experimentation in the fast-evolving Generative AI space.
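The evaluation-framework responsibility above is essentially a scoring gate over per-dimension metrics. A minimal sketch of that idea (the metric names, weights, and the 0.7 threshold are all hypothetical placeholders):

```python
def quality_gate(metrics: dict[str, float], weights: dict[str, float],
                 threshold: float = 0.7) -> tuple[float, bool]:
    """Combine per-dimension scores (each normalised to [0, 1]) into one
    weighted score and decide whether the asset passes the quality gate."""
    total = sum(weights.values())
    score = sum(weights[name] * metrics[name] for name in weights) / total
    return round(score, 3), score >= threshold
```

In practice each metric would itself come from a model or signal (e.g. a lip-sync offset detector or a frame-sharpness estimator), and the threshold would be tuned against human ratings.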


Requirements

  • 4-10 years of experience in Data Science, with a strong focus on Generative AI.
  • Familiarity with state-of-the-art models in generative AI (e.g., Flux, diffusion models, GANs).
  • Proven expertise in developing and deploying models for audio and video generation.
  • Demonstrated experience with natural language processing (NLP), especially for regional language adaptation.
  • Experience with model fine-tuning and optimization techniques.
  • Hands-on exposure to ML deployment pipelines (FastAPI or equivalent).
  • Strong programming skills in Python and relevant deep learning frameworks (e.g., TensorFlow, PyTorch).
  • Experience in designing and implementing automated evaluation metrics for generative content.
  • A portfolio or demonstrable experience in projects related to content generation, lip-sync, or emotional AI is a plus.
  • Exceptional problem-solving skills and a proactive approach to research and experimentation.


Benefits

  • Best in class salary: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
  • Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.


About us

Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.


Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Remote only
2 - 4 yrs
₹7L - ₹10L / yr
skill iconPython
SQL
PowerBI
DAX

About Ven Analytics


At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.

Role Overview

We’re looking for a Power BI Data Engineer who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.


Key Responsibilities

  • Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
  • Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.
  • Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
  • Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
  • Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
  • Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
  • Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
  • Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.


Must-Have Skills

  • Strong experience building robust data models in Power BI
  • Hands-on expertise with DAX (complex measures and calculated columns)
  • Proficiency in M Language (Power Query) beyond drag-and-drop UI
  • Clear understanding of data visualization best practices (less fluff, more insight)
  • Solid grasp of SQL and Python for data processing
  • Strong analytical thinking and ability to craft compelling data stories


Good-to-Have (Bonus Points)

  • Experience using DAX Studio and Tabular Editor
  • Prior work in a high-volume data processing production environment
  • Exposure to modern CI/CD practices or version control with BI tools

 

Why Join Ven Analytics?

  • Be part of a fast-growing startup that puts data at the heart of every decision.
  • Opportunity to work on high-impact, real-world business challenges.
  • Collaborative, transparent, and learning-oriented work environment.
  • Flexible work culture and focus on career development.
Read more
Certa

at Certa

1 video
4 recruiters
Gyan S
Posted by Gyan S
Remote only
3 - 6 yrs
Best in industry
skill iconPython
skill iconDjango
PyTorch
Large Language Models (LLM) tuning
Generative AI
+4 more

Certa (getcerta.com) is a Silicon Valley-based startup automating the vendor, supplier, and stakeholder onboarding processes for businesses globally. Serving Fortune 500 and Fortune 1000 clients, Certa's engineering team tackles expansive and deeply technical challenges, driving innovation in business processes across industries.


Location: Remote (India only)


Role Overview

We are looking for an experienced and innovative AI Engineer to join our team and push the boundaries of large language model (LLM) technology to drive significant impact in our products and services. In this role, you will leverage your strong software engineering skills (particularly in Python and cloud-based backend systems) and your hands-on experience with cutting-edge AI (LLMs, prompt engineering, Retrieval-Augmented Generation, etc.) to build intelligent features for enterprise (B2B SaaS). As an AI Engineer on our team, you will design and deploy AI-driven solutions (such as LLM-powered agents and context-aware systems) from prototype to production, iterating quickly and staying up-to-date with the latest developments in the AI space. This is a unique opportunity to be at the forefront of a new class of engineering roles that blend robust backend system design with state-of-the-art AI integration, shaping the future of user experiences in our domain.


Key Responsibilities

  • Design and Develop AI Features: Lead the design, development, and deployment of generative AI capabilities and LLM-powered services that deliver engaging, human-centric user experiences. This includes building features like intelligent chatbots, AI-driven recommendations, and workflow automation into our products.
  • RAG Pipeline Implementation: Design, implement, and continuously optimize end-to-end RAG (Retrieval-Augmented Generation) pipelines, including data ingestion and parsing, document chunking, vector indexing, and prompt engineering strategies to provide relevant context to LLMs. Ensure that our AI systems can efficiently retrieve and use information from knowledge bases to enhance answer accuracy.
  • Build LLM-Based Agents: Develop and refine LLM-based agentic systems that can autonomously perform complex tasks or assist users in multi-step workflows. Incorporate tools for planning, memory, and context management (e.g. long-term memory stores, tool use via APIs) to extend the capabilities of our AI agents. Experiment with emerging best practices in agent design (planning algorithms, self-healing loops, etc.) to make these agents more reliable and effective.
  • Integrate with Product Teams: Work closely with product managers, designers, and other engineers to integrate AI capabilities seamlessly into our products, ensuring that features align with user needs and business goals. You’ll collaborate cross-functionally to translate product requirements into AI solutions, and iterate based on feedback and testing.
  • System Evaluation & Iteration: Rigorously evaluate the performance of AI models and pipelines using appropriate metrics – including accuracy/correctness, response latency, and avoidance of errors like hallucinations. Conduct thorough testing and use user feedback to drive continuous improvements in model prompts, parameters, and data processing.
  • Code Quality & Best Practices: Write clean, maintainable, and testable code while following software engineering best practices. Ensure that the AI components are well-structured, scalable, and fit into our overall system architecture. Implement monitoring and logging for AI services to track performance and reliability in production.
  • Mentorship and Knowledge Sharing: Provide technical guidance and mentorship to team members on best practices in generative AI development. Help educate and upskill colleagues (e.g. through code reviews, tech talks) in areas like prompt engineering, using our AI toolchain, and evaluating model outputs. Foster a culture of continuous learning and experimentation with new AI technologies.
  • Research & Innovation: Continuously explore the latest advancements in AI/ML (new model releases, libraries, techniques) and assess their potential value for our products. You will have the freedom to prototype innovative solutions – for example, trying new fine-tuning methods or integrating new APIs – and bring those into our platform if they prove beneficial. Staying current with emerging research and industry trends is a key part of this role.
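The retrieval step of the RAG responsibility above reduces to: embed the query, score it against each chunk, return the top-k. A deliberately tiny sketch with a bag-of-words stand-in for real embeddings (function names are illustrative; production pipelines use learned embeddings and a vector store such as Pinecone or Milvus):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real pipelines use learned vectors
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank every chunk by similarity to the query, keep the top k
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

The retrieved chunks are then stitched into the LLM prompt as context; most of the engineering effort in a real pipeline goes into chunking strategy, index freshness, and re-ranking rather than this scoring loop.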


Required Skills and Qualifications

  • Software Engineering Experience: 3+ years (Mid-level) / 5+ years (Senior) of professional software engineering experience. Rock-solid backend development skills with expertise in Python and designing scalable APIs/services. Experience building and deploying systems on AWS or similar cloud platforms is required (including familiarity with cloud infrastructure and distributed computing). Strong system design abilities with a track record of designing robust, maintainable architectures is a must.
  • LLM/AI Application Experience: Proven experience building applications that leverage large language models or generative AI. You have spent time prompting and integrating language models into real products (e.g. building chatbots, semantic search, AI assistants) and understand their behavior and failure modes. Demonstrable projects or work in LLM-powered application development – especially using techniques like RAG or building LLM-driven agents – will make you stand out.
  • AI/ML Knowledge: Prioritize applied LLM product engineering over traditional ML pipelines. Strong chops in prompt design, function calling/structured outputs, tool use, context-window management, and the RAG levers that matter (document parsing/chunking, metadata, re-ranking, embedding/model selection). Make pragmatic model/provider choices (hosted vs. open) using latency, cost, context length, safety, and rate-limit trade-offs; know when simple prompting/config changes beat fine-tuning, and when lightweight adapters or fine-tuning are justified. Design evaluation that mirrors product outcomes: golden sets, automated prompt unit tests, offline checks, and online A/Bs for helpfulness/correctness/safety; track production proxies like retrieval recall and hallucination rate. Solid understanding of embeddings, tokenization, and vector search fundamentals, plus working literacy in transformers to reason about capabilities/limits. Familiarity with agent patterns (planning, tool orchestration, memory) and guardrail/safety techniques.
  • Tooling & Frameworks: Hands-on experience with the AI/LLM tech stack and libraries. This includes proficiency with LLM orchestration libraries such as LangChain, LlamaIndex, etc., for building prompt pipelines. Experience working with vector databases or semantic search (e.g. Pinecone, Chroma, Milvus) to enable retrieval-augmented generation is highly desired.
  • Cloud & DevOps: Own the productionization of LLM/RAG-backed services as high-availability, low-latency backends. Expertise in AWS (e.g., ECS/EKS/Lambda, API Gateway/ALB, S3, DynamoDB/Postgres, OpenSearch, SQS/SNS/Step Functions, Secrets Manager/KMS, VPC) and infrastructure-as-code (Terraform/CDK). You’re comfortable shipping stateless APIs, event-driven pipelines, and retrieval infrastructure (vector stores, caches) with strong observability (p95/p99 latency, distributed tracing, retries/circuit breakers), security (PII handling, encryption, least-privilege IAM, private networking to model endpoints), and progressive delivery (blue/green, canary, feature flags). Build prompt/config rollout workflows, manage token/cost budgets, apply caching/batching/streaming strategies, and implement graceful fallbacks across multiple model providers.
  • Product and Domain Experience: Experience building enterprise (B2B SaaS) products is a strong plus. This means you understand considerations like user experience, scalability, security, and compliance. Past exposure to these types of products will help you design AI solutions that cater to a range of end-users.
  • Strong Communication & Collaboration: Excellent interpersonal and communication skills, with an ability to explain complex AI concepts to non-technical stakeholders and create clarity from ambiguity. You work effectively in cross-functional teams and can coordinate with product, design, and ops teams to drive projects forward.
  • Problem-Solving & Autonomy: Self-motivated and able to manage multiple priorities in a fast-paced environment. You have a demonstrated ability to troubleshoot complex systems, debug issues across the stack, and quickly prototype solutions. A “figure it out” attitude and creative approach to overcoming technical challenges are key.


Preferred (Bonus) Qualifications

  • Multi-Modal and Agents: Experience developing complex agentic systems using LLMs (for example, multi-agent systems or integrating LLMs with tool networks) is a bonus. Similarly, knowledge of multi-modal AI (combining text with vision or other data) could be useful as we expand our product capabilities.
  • Startup/Agile Environment: Prior experience in an early-stage startup or similarly fast-paced environment where you’ve worn multiple hats and adapted to rapid changes. This role will involve quick iteration and evolving requirements, so comfort with ambiguity and agility is valued.
  • Community/Research Involvement: Active participation in the AI community (open-source contributions, research publications, or blogging about AI advancements) is appreciated. It demonstrates passion and keeps you at the cutting edge. If you have published research or have a portfolio of AI side projects, let us know!


Perks of working at Certa.ai:

  • Best-in-class compensation
  • Fully-remote work with flexible schedules
  • Continuous learning
  • Massive opportunities for growth
  • Yearly offsite
  • Quarterly hacker house
  • Comprehensive health coverage
  • Parental Leave
  • Latest Tech Workstation
  • Rockstar team to work with (we mean it!)
Read more
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
warehousing concepts
Google Cloud Platform (GCP)
+15 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.


Responsibilities:

-Design, build, and maintain scalable data pipelines for structured and unstructured data sources.

-Develop ETL processes to collect, clean, and transform data from internal and external systems.

-Support integration of data into dashboards, analytics tools, and reporting systems.

-Collaborate with data analysts and software developers to improve data accessibility and performance.

-Document workflows and maintain data infrastructure best practices.

-Assist in identifying opportunities to automate repetitive data tasks.
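The pipeline responsibilities above follow the classic extract-transform-load shape. A compact, standard-library sketch (the CSV source, column names, and in-memory SQLite target are placeholders for whatever real systems the internship uses):

```python
import csv
import io
import sqlite3

def run_pipeline(raw_csv: str) -> list[tuple]:
    # Extract: parse CSV text (stand-in for an external data source)
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform: clean whitespace and normalise amounts to floats
    cleaned = [(r["city"].strip().title(), float(r["amount"])) for r in rows]
    # Load: insert into a warehouse table (in-memory SQLite here)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE deals (city TEXT, amount REAL)")
    db.executemany("INSERT INTO deals VALUES (?, ?)", cleaned)
    return db.execute(
        "SELECT city, SUM(amount) FROM deals GROUP BY city ORDER BY city"
    ).fetchall()
```

Real pipelines add scheduling, incremental loads, and failure handling around this core, but the three stages stay the same.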


Please send your resume to talent@springer.capital

Read more
B2B Product

B2B Product

Agency job
via Scaling Theory by Keerthana Prabkharan
Remote only
5 - 12 yrs
₹20L - ₹40L / yr
Bioinformatics
skill iconPython

What will you do?

● Design, develop and maintain various Bioinformatics solutions for analyzing and managing Biological data as per our best practices and quality standards.

● Collaborate with Product Management and Software Engineers, to build bioinformatics solutions.

● Produce high quality and detailed documentation for all projects.

● Provide consultation and technical solutions to cross functional bioinformatics projects.

● Identify and/or conceive novel approaches to better and more efficiently analyze client and public datasets.

● Participate in defining the high-level application architecture for future roadmap requirements and features.

● Coach other team members by sharing domain and technical knowledge and code reviews.

● Participate in activities such as hiring and onboarding.

● Work with cross-functional teams to ensure quality throughout the bioinformatics software and analytical pipelines development lifecycle.

● Ensure compliance with our SDLC process during product development.

● Stay up-to-date on technology to deliver quality at each phase of the product life-cycle, and take the lead in evangelizing technical excellence within the team.

 

What do you bring to the table?

● Ph.D. with 3-5+ years of postdoc or industry experience, or a Master's degree with 5-8+ years of industry experience, in Bioinformatics, Computer Science, Bioengineering, Computational Biology, or a related field

● Excellent programming skills in Python and Shell scripting

● Experience with relational databases such as PostgreSQL, MySQL, or Oracle

● Experience with version control systems such as GitHub.

● Experience with Linux/UNIX/Mac OS X based systems

● Experience with high-performance Linux cluster and cloud computing (AWS is preferred).

● Deep understanding of analytical approaches and tools for genomic data analysis along with familiarity with genomic databases. Candidates with proven expertise in the analysis of NGS data generated on sequencing platforms such as Illumina, Oxford Nanopore, or Thermo will be prioritized.

● Experience with open source bioinformatics tools and publicly available variant databases.

● Ability to manage moderately complex projects and initiatives.

● Exceptionally strong communication, data presentation and visualization skills.

● Personal initiative and ability to work effectively within a cross functional team.

● Excellent communication skills and ability to learn and work independently when necessary.

● High energy, an inquisitive mind, and strong attention to detail

Read more
Tech AI startup in Bangalore

Tech AI startup in Bangalore

Agency job
via Recruit Square by Priyanka choudhary
Remote only
1 - 3 yrs
₹6L - ₹8L / yr
skill iconJavascript
skill iconPython
skill iconDjango
FastAPI
skill iconFlask
+7 more

About the Role:


We are looking for a skilled Full-Stack Developer with expertise in Python, JavaScript, and No-Code AI tools to join our dynamic team. The ideal candidate should be proficient in both backend and frontend development, capable of working with modern frameworks, and have experience in LLM prompt engineering, data extraction, and response formatting.


Key Responsibilities:


  • Develop and maintain scalable backend services using FastAPI / Flask / Django.
  • Build dynamic front-end applications using React / Next.js.
  • Implement LLM-based solutions for data extraction and response formatting.
  • Design and optimize databases using Milvus / Weaviate / Pinecone for vector storage and MongoDB / MySQL for structured data.
  • Collaborate with cross-functional teams to deliver high-quality AI-driven applications.
  • Ensure application performance, security, and scalability.
  • Communicate technical ideas effectively through written and verbal communication.



Required Skills & Qualifications:


Technical Skills:


  • Programming: Proficiency in Python and JavaScript.
  • Backend: Experience with FastAPI / Flask / Django.
  • Frontend: Strong understanding of React / Next.js.
  • Database: Knowledge of at least one vector database (Milvus / Weaviate / Pinecone) and one relational or NoSQL database (MongoDB / MySQL).
  • No-Code AI & LLM:
  • Expertise in LLM Prompt Engineering.
  • Experience with data extraction from context and response formatting.
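The "data extraction and response formatting" skill above usually means coercing free-form LLM output into structured data. A minimal sketch of one common tactic (the function name is illustrative; schema-validating libraries such as Pydantic are the robust route):

```python
import json
import re

def extract_json(response: str) -> dict:
    """Pull the first {...} block out of an LLM reply that may wrap
    JSON in prose or Markdown code fences, then parse it."""
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))
```

Pairing this with a prompt that instructs the model to answer only in JSON, plus a retry on parse failure, covers most response-formatting needs.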


Soft Skills:


  • Strong written and verbal communication skills.
  • Ability to collaborate effectively with teams and clients.
  • Problem-solving mindset with a focus on innovation and efficiency.
Read more
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.