
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

InteligenAI
Ayushi Sarmah
Posted by Ayushi Sarmah
Gurugram
2 - 6 yrs
₹12L - ₹30L / yr
Python
Artificial Intelligence (AI)
Generative AI
Machine Learning (ML)
Data Science

Title: AI Solutions Architect

Location: Gurgaon

Experience: 2-6 years

Type: Full-Time

 

About the company:

InteligenAI is a fast-growing, profitable AI product studio with a global clientele.

We design and deliver enterprise-grade, custom AI solutions that solve real problems - going far beyond makeshift PoCs and over-promising demos.

We’re building one of the most trusted AI services companies in the world - and are looking for a driven, entrepreneurial person to help us get there. Our work spans Agentic AI architectures, document digitization pipelines, retrieval-augmented generation (RAG) systems, and SFT + RLHF workflows - all built in-house so we can move fast, think deep and deliver with confidence.

If you are looking for meaningful work, high ownership and the freedom to push boundaries, you will feel right at home here.

 

About the role:

We are looking for a hands-on AI engineer to lead AI solution delivery across our client engagements. This role blends technical leadership with solution architecture and a strong product mindset. You will be at the frontline of AI solution delivery, driving the full product lifecycle: understanding business objectives, designing technical approaches, building PoCs, and delivering production-grade AI systems.


This is not a backseat, “wait for instructions” role. You will work directly with founders, clients, and our growing AI team to shape solutions that make an impact. It is ideal for someone with an entrepreneurial mindset, a constant desire to learn and grow, and genuine enjoyment of the work. You will handle multiple responsibilities simultaneously and be challenged every day. If you are looking for a 9-to-5 role, this may not be the right fit.

 

Key responsibilities:

·      Understand business problems, translate them into solution architectures and lead end-to-end AI solution delivery

·      Design and deliver production-grade ML/GenAI systems tailored to real-world use cases

·      Collaborate with clients to identify needs, present solutions and guide implementation

·      Act as a thought partner to the founder and contribute to strategic decisions

·      Lead and mentor a growing AI/Tech team

·      Collaborate with product and design teams to ship AI-driven features that solve real user problems

·      Continuously explore and experiment with cutting-edge GenAI tools, technologies and frameworks

 

Must have skills:

·      2+ years of hands-on experience building AI/ML solutions across domains

·      Proven ability to understand business workflows and design relevant AI solutions

·      Strong knowledge of GenAI and experience building scalable applications using LLMs, prompt engineering and embedding models

·      Proficient in Python and familiar with libraries/frameworks such as LangChain, Hugging Face Transformers, OpenAI APIs, Pinecone/FAISS

·      Solid understanding of data pipelines, data analytics and ability to take solutions from prototype to production

·      Self-starter mindset: ability to independently manage projects, make decisions and deliver outcomes from day one

·      Excellent communication and problem-solving skills
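
As a rough illustration of the embedding-and-retrieval work this stack involves, here is a minimal sketch of similarity-based retrieval in plain Python. The vectors and document names are made up; a production system would obtain embeddings from a real embedding model and store them in FAISS or Pinecone, as listed above.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=2):
    # Rank documents by similarity to the query embedding
    scored = [(doc_id, cosine_similarity(query_vec, vec))
              for doc_id, vec in doc_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" for hypothetical documents.
docs = {
    "invoice_policy": [0.9, 0.1, 0.0],
    "leave_policy":   [0.1, 0.9, 0.2],
    "expense_faq":    [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]
print(retrieve(query, docs))  # most relevant doc ids first
```

The retrieved documents would then be passed to an LLM as context, which is the core loop of the RAG systems the role mentions.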

 

Good to have:

·      Open-source contributions or personal GenAI projects

·      Experience working in startups or fast-paced, tech-first organizations

·      Experience with MLOps tools

·      Entrepreneurial experience


Read more
Inteliment Technologies

Ariba Khan
Posted by Ariba Khan
Pune
6 - 10 yrs
Up to ₹20L / yr (varies)
Python
ETL
Snowflake
Spark
Power BI

About the company:

At Inteliment, we help organizations turn data into powerful decisions. With two decades of proven expertise, we work with global customers to solve complex business problems using advanced data and analytics solutions. Our ACE – Analytical Centre of Excellence brings together some of the best minds in data engineering, analytics, and AI to build next-generation decision intelligence platforms. If you are passionate about data engineering, modern data platforms, and solving real business problems, this role will give you the opportunity to work on global enterprise data ecosystems. 


About the role

We are seeking a highly skilled Data Architect with strong hands-on expertise in Data Engineering and/or Data Visualization tools, having 6+ years of experience in the pure Data Analytics domain. The ideal candidate will be responsible for architecting scalable data solutions, guiding technical teams, and ensuring robust data pipelines, analytics frameworks, and visualization ecosystems aligned with business objectives. 


Requirements:

  • Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
  • 6+ years of hands-on experience in Data Analytics domain.
  • Strong experience in designing enterprise data solutions.
  • Proven experience in handling large-scale data systems.
  • Experience in client-facing roles is preferred. 
  • Certifications in a related field will be an added advantage.

Technical Skills

✔ Data Engineering Stack

  • Python / PySpark / SQL
  • ETL Tools (e.g., Informatica, Talend, SSIS, or equivalent)
  • Cloud Platforms (AWS / Azure / GCP)
  • Data Warehousing (Snowflake, Redshift, BigQuery, etc.)
  • Big Data Technologies (Spark, Hadoop – preferred)

✔ Visualization & BI Tools (At least one advanced tool mandatory)

  • Power BI
  • Tableau
  • Qlik
  • Looker or equivalent

✔ Database Technologies

  • SQL (MySQL, PostgreSQL, SQL Server, Oracle)
  • NoSQL (MongoDB, Cassandra – preferred)

✔ Additional Preferred Skills

  • Data Modeling (Star/Snowflake schema)
  • API integrations
  • CI/CD for data pipelines
  • Version control (Git)
  • Agile methodology exposure
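
To make the Star-schema data modeling requirement concrete, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for a warehouse. The table and column names are hypothetical; the pattern (a central fact table joined to dimension tables, aggregated for BI) is what matters.

```python
import sqlite3

# Hypothetical star schema: one fact table joined to two dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_region  (region_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    region_id  INTEGER REFERENCES dim_region(region_id),
    amount     REAL
);
INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO dim_region  VALUES (1, 'North'),  (2, 'South');
INSERT INTO fact_sales  VALUES (1, 1, 1, 100.0), (2, 1, 2, 250.0), (3, 2, 1, 75.0);
""")

# Typical BI query: aggregate the fact table, slice by a dimension attribute.
rows = cur.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(rows)  # [('Gadget', 75.0), ('Widget', 350.0)]
```

In Snowflake, Redshift, or BigQuery the same model would be defined with their DDL dialects, but the star-schema join pattern is identical.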

Soft Skills

  • Leadership: Strong leadership and mentoring capabilities to guide technical teams.
  • Communication: Excellent communication skills for collaborating with cross-functional teams and stakeholders.
  • Problem-Solving: Analytical mindset with a keen attention to detail.
  • Adaptability: Ability to manage shifting priorities and requirements effectively.
  • Team Collaboration: Strong interpersonal skills for fostering a collaborative work environment.


Responsibilities:

✔ Solution Architecture & Design

  • Design end-to-end data architecture solutions including data ingestion, transformation, storage, and visualization.
  • Architect scalable and high-performance data pipelines.
  • Define best practices, standards, and governance frameworks for data analytics projects.

✔ Data Engineering

  • Build and optimize ETL/ELT pipelines.
  • Work with structured and unstructured datasets.
  • Design and implement data lakes, data warehouses, and modern data platforms.
  • Ensure data quality, integrity, and performance tuning.

✔ Data Visualization & Analytics

  • Architect and implement enterprise-level dashboards and reporting solutions.
  • Define data models optimized for BI tools.
  • Guide teams in building intuitive, performance-driven visualizations.
  • Translate business requirements into scalable analytics solutions.

✔ Technical Leadership

  • Provide technical direction to data engineers, BI developers, and analysts.
  • Conduct code reviews and enforce architectural standards.
  • Collaborate with cross-functional teams including business stakeholders and delivery teams.
  • Mentor junior team members and drive capability building.

✔ Stakeholder Engagement

  • Participate in client discussions, solution presentations, and requirement workshops.
  • Provide effort estimations and solution proposals.
  • Act as a technical escalation point. 
Read more
Superclaims
Akshith Daithala
Posted by Akshith Daithala
Hyderabad
1 - 3 yrs
₹5L - ₹7.5L / yr
Python
FastAPI
PostgreSQL
SQLAlchemy
LangGraph

About Superclaims

Superclaims modernizes health insurance claims adjudication with intelligent automation. We help insurers and TPAs replace manual, document-heavy workflows with faster, more accurate decisions at scale.


Role: Python Backend Developer

We are looking for a Python Backend Developer who is excited to build AI-powered automation products in a fast-paced startup environment.


What you'll do

- Build and maintain scalable backend systems and APIs

- Develop intelligent data extraction pipelines using AI/ML

- Design and implement agentic workflows with LangGraph

- Design efficient database schemas and optimize queries in PostgreSQL

- Integrate and work with LLMs (OpenAI, Gemini, or similar)

- Collaborate with product, frontend, and data teams to deliver end-to-end features

- Write clean, tested, and well-documented code
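
The agentic workflows mentioned above follow a pattern that LangGraph formalizes: nodes that transform shared state, connected by conditional edges. Here is a framework-free sketch of that pattern; the node names, threshold, and claim fields are hypothetical, not Superclaims' actual logic.

```python
# Minimal state-machine sketch of an agentic claims workflow.
# In LangGraph the same shape is expressed as a StateGraph with
# nodes and conditional edges; this keeps it dependency-free.

def extract(state):
    # In a real pipeline an LLM would extract fields from the claim document.
    state["fields"] = {"amount": state["raw"]["amount"]}
    return "validate"

def validate(state):
    # Route on a business rule: auto-approve vs. escalate to a human.
    if state["fields"]["amount"] <= 10_000:
        return "approve"
    return "escalate"

def approve(state):
    state["decision"] = "auto-approved"
    return None  # terminal node

def escalate(state):
    state["decision"] = "needs human review"
    return None

NODES = {"extract": extract, "validate": validate,
         "approve": approve, "escalate": escalate}

def run(state, start="extract"):
    node = start
    while node is not None:
        node = NODES[node](state)
    return state

small = run({"raw": {"amount": 4_000}})
large = run({"raw": {"amount": 50_000}})
print(small["decision"], "|", large["decision"])
```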


Must-have skills

- Strong proficiency in Python and a modern web framework (FastAPI or similar)

- Experience with PostgreSQL and an ORM (SQLAlchemy preferred)

- Solid understanding of RESTful API design and best practices

- Hands-on experience or strong familiarity with LangGraph

- Experience working with LLMs (OpenAI, Gemini, or similar providers)

- Comfort with Git/version control and collaborative development workflows


Nice-to-have skills

- Experience with Docker and containerized deployments

- Knowledge of Redis for caching or background tasks

- Exposure to cloud platforms (GCP, AWS, or Azure)

- Experience with vector databases and retrieval-augmented generation

- Basic prompt engineering skills

- Experience with object storage (S3/MinIO)


What we're looking for

- 1+ years of Python backend development experience (open to exceptional freshers)

- Fast learner with genuine curiosity about AI/ML and automation

- Prior startup experience preferred

- Ownership mindset, bias for action, and comfort with ambiguity

- Ready to relocate to Hyderabad (work location)


How to apply

Please share:

- Your resume

- GitHub/Portfolio link

- A brief note on why you're interested in AI-powered automation and Superclaims

Read more
Verse
Ravi K
Posted by Ravi K
Remote only
2 - 5 yrs
₹15L - ₹20L / yr
Python
FastAPI
PostgreSQL
Neo4j
LangGraph

Founding Engineer (Bangalore)


The problem:

Business enterprises overpay vendors - on every batch of invoices, every month - because the data that would catch it lives in different systems. We are building an AI agent that processes invoices end-to-end, reasons across all the relevant sources, flags genuine discrepancies, and acts - without a human having to investigate each one.


What you will own

Everything engineering. Schema design to deployment to the 2am fix when something breaks in production. There is no tech lead above you. There is no platform team. There is the architecture, you, and the founders. Concretely, this means building:

  • A multi-stage agentic pipeline that takes a vendor invoice and produces a structured decision - fully autonomous for clear cases, escalating to human review for genuinely ambiguous ones. We use LangGraph, but if you've built equivalent systems with Temporal, Prefect, or custom state machines with LLM orchestration, that works
  • An LLM-powered extraction layer that handles real invoices - scanned PDFs, stamped documents, inconsistent layouts - and returns structured output
  • A graph data model that connects invoices to various sources and can traverse those relationships to detect discrepancies
  • ERP connectors, GST validation logic, and a write-back layer that closes the loop
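
The graph-model idea above can be sketched with a plain adjacency structure: traverse from an invoice to its purchase order and contract, comparing amounts along the way. In production this would live in Neo4j and be queried with Cypher; the node names, fields, and amounts here are illustrative only.

```python
# Toy graph linking an invoice to its purchase order and contract.
nodes = {
    "inv-001": {"type": "invoice",  "amount": 1200, "po": "po-77"},
    "po-77":   {"type": "po",       "amount": 1000, "contract": "c-9"},
    "c-9":     {"type": "contract", "max_rate": 1000},
}

def find_discrepancies(invoice_id):
    flags = []
    inv = nodes[invoice_id]
    po = nodes[inv["po"]]                 # traverse invoice -> PO
    if inv["amount"] > po["amount"]:
        flags.append("invoice exceeds PO amount")
    contract = nodes[po["contract"]]      # traverse PO -> contract
    if inv["amount"] > contract["max_rate"]:
        flags.append("invoice exceeds contracted rate")
    return flags

print(find_discrepancies("inv-001"))
# ['invoice exceeds PO amount', 'invoice exceeds contracted rate']
```

The value of the graph structure is that each new relationship (vendor history, GST records, ERP entries) becomes one more hop rather than another ad-hoc join.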


What we need

  • Strong Python. Async FastAPI, clean service boundaries, tests that actually catch bugs. You have shipped Python backends that handled real production load
  • Solid Postgres. Complex queries, schema design, migrations without downtime, row-level security for multi-tenant data. pgvector is a plus - if not, you pick it up fast
  • LLM API experience in production. You have called an LLM API for something that real users depended on. You know about structured output, retry logic, cost management, prompt versioning. A side project counts if it was genuinely deployed
  • Comfort with graph data models. You understand when a graph is the right structure and when it is not. You do not need deep Neo4j production experience - you need to understand graph relationships conceptually and be willing to learn Cypher. It is a 2-day ramp for the right person
  • Working knowledge of deployment. Deployed and operated production workloads on GCP. Cloud Run, Cloud SQL, Cloud Storage, Redis — you're comfortable across the stack. If you've done it on AWS, the translation isn't hard, but GCP is where we are
  • You own things. Not "I contributed to" - you designed it, shipped it, and fixed it when it broke. That pattern needs to be visible in your history


Good to have, not mandatory

  • Built an agentic pipeline with multiple stages
  • Any fintech, P2P domain experience - even tangential
  • Worked at a startup with under 20 people
  • Have a GitHub, blog, or writeup that shows how you think about a hard technical problem


What you get

  • The hardest engineering problem you will have worked on. This is not CRUD with an LLM bolted on
  • Real ownership. First engineering hire. Your architectural decisions will be in this product five years from now
  • Equity that matters. ESOP - Open to discussion. We are pre-seed - this is a bet, not a guarantee. We will not pretend otherwise
  • No meetings tax. You work directly with the founders. The product is specified clearly. You know what you are building and why


Honest about stage: We do not have production-ready infra yet. We have a complete architecture specification and a working prototype. If you need the stability of an established engineering org, this is not the right moment. If you want to build something real from zero and own a meaningful piece of it, it is.


The founders

One of us has spent 20 years building revenue and operational engines at companies where there was no playbook - part of the pilot team that established the world's largest search company's direct sales operations in India, managed global operations for a global mobile advertising platform, scaled a B2C platform to become one of India’s leading edtech platforms and most recently worked on building an enterprise Agentic Voice AI platform. The other has spent 15 years taking AI from demo to production in domains where failure is expensive - voice, lending, and conversational systems across a Series D conversational AI company, a major telco, a Big 4, and a leading NBFC.


Two IIT/IIM alumni who have both watched AI work in enterprise, and know exactly what it takes to get it there. We are not building this product because it sounds interesting. We are building it because we have both sat across the table from CFOs who know they are losing margin and have no tool capable of doing anything about it.

Read more
Agency job
via Qubit Labs by Solomiia Kuzbyt
Remote only
4 - 15 yrs
$48K - $67.2K / yr
Blockchain
Go (Golang)
Python
JavaScript
TypeScript

About the Role

Join the Blockchain Backend Infrastructure team and help build and maintain a leading blockchain management platform. You'll be responsible for building cutting-edge blockchain infrastructure and implementing high-throughput, real-time, scalable software solutions.

As a Blockchain Engineer, you will be instrumental in the research and integration of blockchain technologies into the platform. Your responsibilities will include collaborating closely with foundations and developers to gain a deep understanding of blockchain protocols and on-chain projects, then applying that knowledge to implement new features within the platform.

You will focus equally on external protocol integration patterns and internal wallet infrastructure. This role serves as a technical bridge between raw on-chain capabilities and the wallet features delivered to customers.

What You'll Do

  • Implement modern backend applications and infrastructure in a microservices architecture, using the latest technologies and development practices.
  • Deep dive into the latest blockchain technology and become an expert in the fundamentals, protocols, and features of the chains we support.
  • Collaborate effectively with developers, engineers, and other roles while demonstrating strong independent problem-solving abilities.
  • Contribute to production reliability through on-call participation, incident response, and post-incident follow-ups.

What You'll Bring

  • 5+ years of backend development experience in modern languages (Go, Python, JavaScript/TypeScript).
  • 3+ years of hands-on blockchain development experience.
  • Experience working on high-scale distributed systems.
  • Understanding of microservices architecture and API design.
  • Knowledge of consensus mechanisms, cryptographic primitives, and distributed systems.
  • Strong problem-solving skills, attention to detail, and a collaborative mindset.

Preferred

  • Experience building blockchain solutions for enterprise or institutional use cases.
  • Understanding of security best practices for smart contracts and blockchain systems.
  • Demonstrated ability to apply AI tools in day-to-day development.
  • Understanding of MPC, multi-signature wallets, or other advanced cryptographic techniques.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Experience with Docker, Kubernetes, and Helm.
  • Location: EU preferred, or availability to travel to one of the dev hubs in Europe once per quarter.


Read more
Improving
Rohini Jadhav
Posted by Rohini Jadhav
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹35L / yr
Python
Kubernetes
Jenkins
CI/CD
Docker

What are we looking for?

  1. You have a good understanding and work experience in AKS, Kubernetes, and EKS.
  2. You are able to manage multi region clusters for disaster recovery.
  3. You have a good understanding of AWS stack.
  4. You have production-level experience with Kubernetes.
  5. You are comfortable coding/programming and can do so whenever required. 
  6. You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
  7. You love automating things, sometimes even what seems impossible to automate - one of our engineers used Ansible to set up their Ubuntu workstation and runs a playbook every time something has to be installed.
  8. You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics. Because you know that words are easy to talk about but there is a fair amount of work to build such a system in practice.
  9. You love coaching people - about 12-factor apps, or the latest tool that cut the time a task takes by X, and so on. You lead by example when it comes to technical work and community.
  10. You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
  11. You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.

What will you be learning and doing?

  1. You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
  2. The problems and solutions in this space are continuously evolving, but fundamentally you will be solving problems with the simplest, scalable automation.
  3. You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they? 
  4. You will be hacking around open-source projects, understanding their capabilities and limitations, and applying the right tool for the right job.
  5. You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.


Read more
Searce Inc

Karthika Senthilkumar
Posted by Karthika Senthilkumar
Coimbatore
7 - 10 yrs
Best in industry
Data engineering
Python
SQL
Google Cloud Platform (GCP)

Who are we ?


Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.


The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.


Tech Superpowers


End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.


The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.


Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank". You prove your designs with cost-performance benchmarks, not just slideware.


AI-Ready Orchestrator: You engineer the bridge between structured data and unstructured/vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.


The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.


Experience & Relevance


Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.


Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.


AI-Native Workflow: You don't just build for AI - you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.


Architectural Portfolio: Evidence of leading 2-3 large-scale transformations - including platform migrations, data lakehouse builds, or real-time analytics architectures.


Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling, and ensure the squad adopts modern engineering standards like CI/CD for data.


Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.


The "Solver" Mindset: A track record of solving 'impossible' data problems - whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9%-available data services.



Read more
Searce Inc

Vaivashhya VN
Posted by Vaivashhya VN
Coimbatore
7 - 10 yrs
Best in industry
Data engineering
Data migration
Data warehousing
ETL
SQL
+6 more

Who are we ?


Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.


The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.


Tech Superpowers


End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.


The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable "Data Product" with a clear focus on ROI and time-to-insight.


Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without "breaking the bank". You prove your designs with cost-performance benchmarks, not just slideware.


AI-Ready Orchestrator: You engineer the bridge between structured data and unstructured/vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.


The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.


Experience & Relevance


Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.


Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.


AI-Native Workflow: You don't just build for AI - you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.


Architectural Portfolio: Evidence of leading 2-3 large-scale transformations - including platform migrations, data lakehouse builds, or real-time analytics architectures.


Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.


The "Solver" Mindset: A track record of solving 'impossible' data problems - whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9%-available data services.

Read more
Bengaluru (Bangalore)
2 - 4 yrs
₹21L - ₹28L / yr
Artificial Intelligence (AI)
Python

Strong Junior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 1.5+ years of full-time experience in software development using LLMs (OpenAI / Gemini / similar) in projects (internship / full-time / strong personal projects)

Mandatory (Experience 2) – Must have strong coding skills in Python and hands-on backend development experience (FastAPI / Django preferred)

Mandatory (Experience 3) – Must have built or contributed to AI/LLM-based applications, such as chatbots, copilots, document processing tools, etc.

Mandatory (Experience 4) – Must have basic understanding of RAG concepts (embeddings, vector DBs, retrieval)

Mandatory (Experience 5) – Must have experience building APIs or backend services and integrating with external systems

Mandatory (Experience 6) – Must have AI/LLM projects clearly mentioned in CV (with what was built, not just tools used)

Mandatory (Experience 7) – Must have worked with modern development tools (Git, APIs, basic cloud exposure)

Mandatory (Tech Stack) – Strong in Python + basic AI/LLM ecosystem

Mandatory (Company) – Product companies / Funded startups (Series A / B / high-growth environments)

Mandatory (Education) - Tier-1 institutes (IITs, BITS, IIITs); can be skipped if from top-notch product companies

Mandatory (Exclusion) - Avoid candidates who are only prompt engineers, who come from a pure Data Science / ML theory background without backend coding, or who are frontend-heavy engineers.

Read more
Talent Pro
Bengaluru (Bangalore)
4 - 7 yrs
₹37L - ₹48L / yr
Artificial Intelligence (AI)
Python

Strong Senior GenAI / AI Backend Engineer Profiles

Mandatory (Experience 1) – Must have 4+ years of total software development experience, with at least 2+ years working on AI/LLM-based features in production

Mandatory (Experience 2) – Must have strong backend engineering experience using Python (FastAPI / Django preferred) and building production-grade systems

Mandatory (Experience 3) – Must have hands-on experience building LLM-based applications, including OpenAI / Gemini / similar models in real projects

Mandatory (Experience 4) – Must have experience with RAG (Retrieval Augmented Generation) including chunking, embeddings, and retrieval pipelines

Mandatory (Experience 5) – Must have experience designing end-to-end AI pipelines, including chaining, tool usage, structured outputs, and handling failure cases

Mandatory (Experience 6) – Must have experience building agentic AI systems (multi-step workflows, tool orchestration like LangGraph / CrewAI or custom agents)

Mandatory (Experience 7) – Must have strong coding and system design skills, not just prompt engineering or experimentation

Mandatory (Experience 8) – Must have experience shipping AI features in production, not just POCs or research projects

Mandatory (Experience 9) – Must have experience working with APIs, backend services, and integrations

Mandatory (Experience 10) – Must have understanding of AI system reliability, including latency, cost optimization, fallback handling, and basic eval thinking

Mandatory (Company) – Product companies / startups, preferably Series A to Series D

Mandatory (Note) - Candidate's overall experience should not be more than 7 years

Mandatory (Tech Stack) – Strong in Python + AI/LLM ecosystem, experience with modern AI tooling and frameworks

Mandatory (Exclusion) – Reject profiles that are only Prompt Engineers, Data Scientists, or Frontend Engineers without strong backend + system building experience
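
The structured-output and fallback-handling requirements above (Experience 5 and 10) boil down to never letting a malformed model reply crash the pipeline. Here is a minimal sketch; the replies, schema check, and default are hypothetical, and a real system would also handle retries against the provider API itself.

```python
import json

# Try to parse each model reply as JSON (each retry yields another
# reply); fall back to a safe default instead of raising.
def parse_structured(raw_replies, default=None):
    for reply in raw_replies:
        try:
            data = json.loads(reply)
            if "label" in data:        # minimal schema check
                return data
        except json.JSONDecodeError:
            continue
    return default if default is not None else {"label": "unknown"}

# First reply is malformed (a common LLM failure mode); the retry succeeds.
replies = ["Sure! Here's the JSON: {label: positive}",
           '{"label": "positive", "confidence": 0.93}']
print(parse_structured(replies))  # {'label': 'positive', 'confidence': 0.93}
```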

Read more
Newmi Care
Parnika Sangwar
Posted by Parnika Sangwar
Gurugram
1 - 4 yrs
₹4L - ₹6.5L / yr
Python
Django

Company Description

Newmi Care is India's leading outpatient healthcare platform specializing in women's and child health. The services are delivered through an integrated digital platform, physical clinics, and outpatient department (OPD) solutions for corporate and insurance partners. Newmi Care is dedicated to empowering women with seamless and specialized healthcare solutions.


Role Description

This is a full-time on-site role for a Python Developer based in Gurugram. The Python Developer will design, develop, test, and maintain efficient back-end components, APIs, and systems that support the company's platform. This role requires a candidate with 2–4 years of hands-on project experience.


Qualifications

  • Proficiency in Back-End Web Development and comprehensive knowledge of Python programming.
  • Hands-on project experience of at least 2 years is mandatory.
  • Experience in Software Development with a strong understanding of Object-Oriented Programming (OOP) concepts and principles.
  • Experience with the Django framework is mandatory.
  • Familiarity with working on Databases, including designing, querying, and optimizing database performance.
  • Strong problem-solving abilities and a keen eye for detail in coding and debugging processes.
  • Ability to work independently and collaboratively in an agile development environment.
  • Understanding of front-end technologies and their integration with back-end services is beneficial.
  • Bachelor's degree in Computer Science, Software Engineering, or a related technical field is preferred.
  • Immediate joiners or candidates with a notice period of 15–20 days will be preferred.


AI-powered content creation and automation platform


Agency job
via Uplers by Shrishti Singh
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹28L / yr
Python
NodeJS (Node.js)
TypeScript
Artificial Intelligence (AI)
Generative AI

Software Engineer

Onsite - HSR Bangalore

6 Days work from Office (Flexible working hours)


The product is a PowerPoint AI assistant used by consulting companies and Fortune 500 teams. A typical professional spends 1 to 3 hours creating one slide; with the product, they create a v1 of their entire deck in 10 minutes and make changes like “turn this table into a chart” in seconds, directly within PowerPoint.

In the next 2 years, our goal is to forever change the way business presentations are made.


Who are we?

  • small, strong team of 5
  • founders are CS graduates from IIT Kharagpur with a specialisation in AI
  • work 6 days a week from our office in HSR Layout in Bangalore
  • funded by Y Combinator and other amazing investors
  • used by consulting companies and Fortune 500 teams


Your responsibilities (in order)

  • Design, implement, test, and deploy full features
  • Design and implement a robust infrastructure to enable rapid development and automated testing
  • Look at usage data to iterate on features


What we’re looking for

  • Undergraduate or master's in Computer Science or equivalent degree
  • 2+ years of backend or DevOps software engineering experience
  • Experience with TypeScript (JavaScript) or Python


You’ll be a good fit if

  • You want to work on a product that can change the way a very large number of people work
  • The chaos of high growth and things breaking is exciting to you
  • You are a workaholic, looking to upskill faster than most people think is possible. This role is not a good fit for you if you’re looking to prioritise work-life balance.
  • You prefer working in-person with other smart people who are excited and passionate about what they’re building
  • You love solving very hard problems at a rapid pace. We discuss timelines in days or weeks, so you’ll constantly be expected to ship really high-quality work.



Perks

  • Comprehensive health insurance for you and dependents
  • Workstation enhancements
  • Subscriptions to AI tools such as Cursor, ChatGPT, etc.

(If there's anything else we can do to make your work more enjoyable, just ask)


If you are interested in proceeding, we would be happy to move your profile to the next stage of the evaluation process.

Kindly share the following details to help us take this forward :


  • Current CTC (Fixed + Variable):
  • Expected CTC:
  • Notice Period (if currently serving, please mention your Last Working Day):
  • Details of any active offers in hand (if applicable):
  • Expected/Available Date of Joining (if applicable):
  • Attach Updated CV:
  • Attach GitHub link / LeetCode link or other:
  • Current Location:
  • Preferred Location:
  • Reason for job change:
  • Reason for relocation (if applicable):
  • Are you comfortable with 6 days WFO (flexible working hours)? (Yes / No):

Blitzy

Posted by Bisman Gill
Pune
5+ yrs
Up to ₹50L / yr (varies)
Python
Kubernetes
Terraform
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

About the role

We are looking for talented Senior Backend Engineers (5+ years of experience) to join our team and take ownership of different parts of our stack. You will be working alongside a team of engineers locally and directly with the U.S. engineering team on all aspects of product/application development. You will leverage your experience and abilities to inform decisions across product development and technology. You will help us build the foundation of our second headquarters in Pune: its culture, its processes, and its practices. There are a ton of interesting problems to solve, so come hungry. If your colleagues describe you as curious, driven, kind, and creative, you are a culture fit.

What Success Looks Like

  • You write, review and ship code in production. Your employer or client's success depends on the software you build
  • You use Generative AI tools on a daily basis to enhance the quality and efficacy of your software and non-software deliverables
  • You are a self-starter and enjoy working with minimal supervision
  • You evaluate and make technical architecture decisions with a long-term view, optimizing for speed, quality, and safety
  • You take pride in the product you create and the code that you write
  • Your team can rely on you to get them out of a sticky situation in production
  • You can work well on a team of sales executives, designers and engineers in an in-person environment
  • You are passionate about the enterprise software development lifecycle and feel strongly about improving it
  • You are a first principles engineer who exercises curiosity about the technologies you work with
  • You can learn quickly about technologies, software and code that you are not familiar with, often from rudimentary documentation
  • You take ownership of the code that you write, and you help the team operate with everything that you build, throughout its lifecycle
  • You communicate openly and solicit feedback on important decisions, keeping the team aligned on your rationale
  • You exercise an optimistic mindset and are willing to go the extra mile to make things work

Areas of Ownership

Our hiring process is designed for you to demonstrate a generalist set of capabilities, with a specialization in Backend Technologies.

Required Technical Experience (MUST HAVE):

  • Expertise in Python
  • Deep hands-on experience with Terraform
  • Proficiency in Kubernetes
  • Experience with cloud platforms (GCP strongly preferred, AWS/Azure acceptable)

Additional experience with some of the following:

  • Backend Frameworks and Technologies (Node.js, NuxtJS, Express.js)
  • Programming languages (JavaScript, TypeScript, Java, C++, Go)
  • RPCs (REST, gRPC or GraphQL)
  • Databases (SQL, NoSQL, Postgres, MongoDB, or Firebase)
  • CI/CD (Jenkins, CircleCI, GitLab or similar)
  • Source code versioning tools such as Git or Perforce
  • Microservices architecture

Ways to stand out

  • Familiarity with AI Platforms
  • Extensive experience with building enterprise-scale applications with >99% SLAs
  • Deep expertise across the full required stack: Python, Terraform, Kubernetes, and GCP

You'll Get...

  • Competitive Salary
  • Medical Insurance Benefits
  • Employer Provident Fund contributions with Gratuity after 5 years of service
  • Company-sponsored US onsite trips for high performers, based on business requirements
  • Potential international transfer support for top performers, based on business requirements
  • Technology (hardware, software, trainings, etc.) equipment and/or allowance
  • The opportunity to re-shape an entire industry
  • Beautiful office environment
  • Meal allowance and/or food provision on site

Culture

Who we are: Our Co-Founder and CTO is a serial Gen AI inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years, where he was promoted 5 times before being transferred to NVIDIA headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he attended Harvard for a dual Master's in Engineering and an MBA from HBS. Our other Co-Founder/CEO is a successful serial entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mindset, and believe in a low-ego, high-output approach.


Oil and Gas Industry (petroleum refinery)

Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹25L / yr
Python
MLOps
Machine Learning (ML)
API
CI/CD

🔹 Role: Python Engineer – Python & MLOps

📍 Location: Bellandur, Bangalore

🕐 Work Timings: 01:30 PM – 10:30 PM

🏢 Work Mode: Monday (WFH), Tuesday–Friday (WFO)

📅 Experience: 8-12 Years (Ideal: 8-10 Years)

🔹 Role Overview

This role focuses on building and maintaining a production-grade AI/ML platform. You will work on scalable Python systems, MLOps pipelines, APIs, and CI/CD workflows in an enterprise environment.

🔹 Key Responsibilities

✔ Develop production-grade Python applications using OOP principles

✔ Build and enhance MLOps pipelines (training, validation, deployment)

✔ Design and optimize REST APIs with OpenAPI/Swagger

✔ Implement async programming for high-performance systems

✔ Work on CI/CD pipelines (Azure Pipelines / GitHub Actions)

✔ Ensure clean, testable, and maintainable code (PyTest, TDD)

🔹 Required Skills

✔ Strong Python (OOP, modular design)

✔ MLOps & CI/CD pipeline experience

✔ REST API development

✔ Async programming (async/await, concurrency)

✔ Pandas / Polars & Scikit-learn

✔ JSON Schema–driven development

✔ Testing using PyTest
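The async/await requirement above can be illustrated with a short asyncio sketch; `fetch` here is a placeholder for a real I/O-bound call (HTTP request, DB query, model API), not an actual client:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Placeholder for an I/O-bound call; sleep stands in for network wait
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list[str]:
    # gather() runs the coroutines concurrently and preserves input order,
    # so total wall time is roughly the slowest call, not the sum
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01), fetch("c", 0.02))

results = asyncio.run(main())
print(results)  # ['a:done', 'b:done', 'c:done']
```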

🔹 Nice to Have

➕ Azure ML SDK

➕ Pydantic

➕ Azure Cosmos DB

➕ Experience with large enterprise platforms

Vikgol
Posted by Madhuri D R
Remote only
3 - 6 yrs
₹8L - ₹15L / yr
Linux/Unix
TCP/IP
DNS
Voice Over IP (VoIP)
Amazon Web Services (AWS)

Job role: Systems Engineer (L2)

Location: Remote/Bengaluru

Experience: 3-6 years


About the Role:

We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.


Key Responsibilities:

 — Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.

 — Manage and optimize networking components — routers, switches, firewalls, load balancers.

 — Handle incident response — monitor systems, identify issues, resolve production problems.

 — Implement DevOps best practices — CI/CD pipelines, automation, containerization.

 — Collaborate with backend and product teams on system architecture.

 — Performance tuning — ensure high availability and reliability of the platform.

 — Security management — implement security protocols and compliance standards.


Required Skills:

Technical:

  • Linux/Unix administration — strong fundamentals
  • Networking — TCP/IP, DNS, BGP, VoIP protocols
  • Cloud platforms — AWS/GCP/Azure — minimum 2 years
  • DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
  • Monitoring tools — Grafana, Prometheus, Kibana, Datadog
  • Scripting — Python, Bash, Shell
  • Databases — MySQL, PostgreSQL, Redis


Soft skills:

  • Strong problem-solving under pressure
  • Good communication — English written and verbal
  • Team player — collaborative mindset


Good to Have:

  • Experience in telecom/CPaaS/cloud communications industry
  • Knowledge of VoIP, SIP, RTP protocols
  • AI/ML operations experience
  • CCNA/AWS certifications


Hashone Career
Posted by Madhavan I
Coimbatore
10 - 15 yrs
₹20L - ₹38L / yr
Python
Artificial Intelligence (AI)
Machine Learning (ML)
Large Language Models (LLM) tuning

Job Description – AI Tech Lead

Location: Bengaluru

Experience: 10+ Years

Function: AI Center of Excellence (CoE)

Reporting To: Senior Vice President – CX / Head of AI CoE

 

 

We are seeking two highly experienced AI Tech Leads (AVP/DGM level) to drive the architecture, development, and delivery of large‑scale AI solutions spanning Predictive AI, GenAI, and Agentic AI across BPM, IT Services, Digital, Data Engineering, and Enterprise Transformation programs.

The role demands strong technical leadership, solution design capabilities, hands‑on execution ownership, and the ability to lead multi‑disciplinary teams to deliver scalable, production‑grade AI systems.

 

2. Key Responsibilities

A. Solution Architecture & Strategy

  • Lead end‑to‑end solution architecture across Predictive AI, GenAI, Agentic AI, and enterprise data ecosystems.
  • Partner with business and technology teams to define AI strategy, technical roadmaps, and implementation frameworks.
  • Translate business goals into scalable AI architectures leveraging microservices, distributed systems, and modern AI toolchains.
  • Own architectural decisions on model design, data pipelines, deployment frameworks, MLOps stack, and scaling strategies.

B. Project Delivery & Execution Leadership

  • Drive the complete AI project lifecycle: Requirement Analysis → Architecture → Model Development → Engineering → Deployment → Monitoring.
  • Lead AI engineering teams in developing production‑grade ML/GenAI/Agentic solutions with high reliability and performance.
  • Establish and enforce engineering best practices, coding standards, DevOps/MLOps processes, and quality controls.
  • Manage multiple concurrent AI initiatives with strong governance, risk mitigation, and stakeholder communication.

C. Technical Hands-on Expertise

  • Architect and build complex AI systems involving:
        • Large Language Models (LLMs) & GenAI apps
        • Agentic workflows and autonomous task orchestration
        • Predictive modeling, forecasting, optimization, and statistical modeling
        • Knowledge graphs, vector databases, embeddings
        • Data engineering pipelines (ETL/ELT) and cloud-native architectures
  • Drive model evaluation, experimentation, benchmarking, A/B testing, and continuous improvements.

D. Team Leadership & Mentoring

  • Lead and mentor a team of AI engineers, data scientists, MLOps engineers and developers.
  • Build internal capabilities by establishing training, code reviews, reusable accelerators, and technical playbooks.
  • Actively collaborate with product managers, data engineering teams, CX strategy teams, and domain SMEs.

E. Stakeholder & Client Management

  • Act as a technology partner during client discussions, proposals, RFP responses, and solution demonstrations.
  • Communicate complex AI concepts to CXOs, business leaders, and non-technical stakeholders seamlessly.
  • Support pre-sales with solutioning, effort estimation, and technical presentations.

 

3. Required Skills

A. Technical Skills

  • Strong proficiency in Python, cloud platforms (Azure/AWS/GCP), and AI frameworks (TensorFlow, PyTorch, LangChain, LlamaIndex).
  • Hands-on experience building applications using:
        • LLMs, RAG, fine‑tuning, prompt engineering
        • Autonomous AI agents & multi-agent systems
        • Predictive ML models (Regression, Classification, Clustering, NLP, CV)
  • Expertise in microservices architecture, API design, scalable deployments.
  • Strong command over SDLC, Agile methodologies, CI/CD, DevOps & MLOps.
  • Experience with data engineering tools: Spark, Databricks, Airflow, Kafka, SQL/NoSQL, and modern data lakehouse platforms.

B. Functional & Domain Skills

  • Experience working in BPM, Customer Experience, Digital Transformation, IT Services.
  • Ability to map AI use cases to business value: workflow optimization, automation, customer experience, operations, and analytics.

C. Leadership & Soft Skills

  • Strong team leadership and mentoring experience.
  • Excellent communication, client-facing abilities, and stakeholder management skills.
  • Strong decision-making, problem-solving, and delivery ownership.

4. Qualifications

  • Bachelor’s / Master’s in Computer Science, Engineering, Data Science, or related fields.
  • 10–15 years of total experience, with at least 5 years leading AI/ML projects.
  • Demonstrated success delivering large-scale AI programs in enterprise environments.
  • Certifications in AI/ML, cloud, or architecture (preferred).

 

 

Nevis Software Solutions Pvt Ltd
Pune
3 - 5 yrs
₹7L - ₹12L / yr
Django
Python
RESTful APIs
Web API

About the Role

We are looking for an experienced Django Developer to join our on-site engineering team in Pune. This role involves building and scaling high-performance backend systems for our SaaS products. You will work closely with product, frontend, and DevOps teams to design robust APIs, optimize databases, and deliver production-grade solutions.

This is a hands-on role with ownership, technical depth, and real impact.


Key Responsibilities

  • Design, develop, test, and maintain scalable backend services using Django & Python
  • Architect and implement secure, high-performance RESTful APIs
  • Work extensively with PostgreSQL for schema design, query optimization, indexing, and performance tuning
  • Build and manage asynchronous workflows using Celery
  • Implement real-time features using Daphne, Redis, and WebSockets (ASGI stack)
  • Containerize applications using Docker; manage Docker Compose and environment setups
  • Collaborate with frontend developers, product managers, and designers for seamless delivery
  • Perform code reviews, mentor junior developers, and enforce best practices
  • Ensure application security, scalability, and reliability
  • Monitor system performance and handle debugging, logging, and error management
  • Maintain clear documentation for APIs, services, and deployment workflows

Required Skills & Qualifications

  • 3-4 years of hands-on experience with Django & Python
  • Strong expertise in REST API design and backend architecture
  • Advanced knowledge of PostgreSQL (queries, indexing, optimization)
  • Solid experience with Celery for background tasks
  • Hands-on experience with Daphne, Redis, and WebSockets
  • Strong command over Docker & containerized deployments
  • Proficiency with Git/GitHub workflows, PR reviews, and basic CI
  • Excellent understanding of ORM concepts and database modeling
  • Strong problem-solving, debugging, and communication skills
  • Experience using AI/LLM tools to improve productivity is a plus
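The indexing and query-optimization skills above can be demonstrated without a running Postgres instance; this sketch uses the stdlib `sqlite3` module as a stand-in (the workflow of adding an index and re-checking the plan carries over to PostgreSQL's `CREATE INDEX` and `EXPLAIN`, though the planner output differs):

```python
import sqlite3

# In-memory sqlite3 database as a stand-in for PostgreSQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"
before = plan(query)  # full table scan ("SCAN ...")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index lookup ("SEARCH ... USING INDEX idx_orders_customer")

print(before)
print(after)
```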

Nice-to-Have

  • Experience with cloud platforms (AWS / GCP / Azure)
  • Exposure to CI/CD pipelines and deployment automation
  • Familiarity with monitoring tools (Sentry, Prometheus, Grafana, etc.)
  • Basic frontend understanding (HTML, CSS, JavaScript)
  • Experience handling high-traffic systems and performance optimization
  • Exposure to Agile / Scrum environments

What We Offer

  • Competitive salary package
  • Opportunity to work on scalable SaaS and AI-driven platforms
  • Strong engineering culture with ownership and autonomy
  • On-site collaborative environment with fast decision-making
  • Learning, growth, and leadership opportunities
  • Challenging projects with end-to-end responsibility

Expectations & Deliverables

  • Production-ready, well-tested, and maintainable code
  • Proactive communication and ownership of deliverables
  • High-quality documentation and clean architecture practices
  • Adherence to security, compliance, and IP standards


CLOUDSUFI

Posted by Ayushi Dwivedi
Noida
6 - 12 yrs
₹35L - ₹45L / yr
Agentic AI
Large Language Models (LLM)
Natural Language Processing (NLP)
Python
Retrieval Augmented Generation (RAG)

About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.

 

Our Values 

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

  

Equal Opportunity Statement 

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


Location: Noida, India (Hybrid) - 2 days from office

Position: Full-time

As a Senior Data Scientist, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.

 

Key Responsibilities

  • Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
  • Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
  • Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
  • Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
  • End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
  • Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
  • Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.

 

Required Skills and Qualifications

  • Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
  • 7+ years of professional experience in a Data Scientist, AI Engineer, or related role.
  • Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
  • Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
  • Proven experience in developing and deploying scalable systems on cloud platforms, particularly in GCP.
  • Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
  • Experience with containerization technologies, specifically Docker.
  • Solid understanding of software engineering principles and experience building APIs and microservices.

 

Preferred Qualifications

  • A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
  • Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
  • Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
  • Proven ability to lead technical teams and mentor other engineers.
  • Experience developing custom tools or packages for data science workflows.

 

Pune
7 - 10 yrs
₹20L - ₹30L / yr
Python
API
Databases
RESTful APIs
PostgreSQL

About the Role

We are looking for a strong software engineer who actively uses AI to build better software faster. This is a backend-heavy engineering role where LLMs and AI systems are integrated into real production applications.

You will design, build, deploy, and maintain AI-enabled systems, while maintaining strong engineering discipline and code quality.


Key Responsibilities

  • Architect and build scalable backend systems (Python / FastAPI preferred)
  • Integrate LLM APIs, RAG pipelines, and AI workflows into production applications
  • Deploy and maintain containerized applications on AWS/Azure
  • Use AI coding assistants and agents to accelerate development, without compromising code quality
  • Convert ambiguous requirements into production-ready systems



Must-Have Skills

  • 7+ years of professional software engineering experience
  • Strong Python backend experience
  • Experience building REST APIs and production systems
  • Solid understanding of system design and clean architecture
  • Hands-on experience with Docker and Linux
  • Experience deploying to AWS or Azure
  • Experience integrating LLM APIs into applications
  • Experience with embeddings / vector databases / RAG pipelines
  • Strong Git and collaborative development workflows
  • Ability to operate independently in ambiguous environments
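The retrieval step of the RAG pipelines mentioned above reduces to nearest-neighbour search over embeddings. A toy, stdlib-only sketch; the hand-written 3-d vectors stand in for real embedding-model output, and the dict stands in for a vector database:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "vector store": document -> pre-computed embedding
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding and keep top-k;
    # the retrieved text would then be placed into the LLM prompt
    ranked = sorted(store, key=lambda doc: cosine(query_vec, store[doc]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['refund policy']
```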

Most importantly:

  • Strong engineering fundamentals
  • High ownership mindset
  • Comfort using AI tools to move fast
  • Discipline to maintain structure and quality

Good-to-Have

  • Experience with real-time streaming or WebSockets
  • Kubernetes deployment experience
  • Experience in a fast-paced startup or consulting environment
  • Familiarity with agent frameworks or voice/multimodal systems

Screening Requirement

Please include:

  • A portfolio of systems you’ve built (especially AI-enabled systems), and
  • A short note explaining how you use AI tools in your development workflow.


We are specifically looking for engineers who lean into AI agents and coding assistants, but still understand architecture, performance, and clean code.

enParadigm

Posted by Nikita Sinha
Bengaluru (Bangalore)
0 - 1 yrs
Up to ₹10L / yr (varies)
Java
Python
PHP
JavaScript
Angular (2+)

We are looking for a Junior Full Stack Developer to join our growing engineering team and contribute to building high-quality software solutions. In this role, you will support the entire development lifecycle—from design to deployment—while working closely with product managers and senior engineers.


If you have a passion for technology, enjoy learning new tools, and thrive in a collaborative environment, we’d love to hear from you.


Current Technology Stack

  • Backend: FastAPI (active), PHP (legacy), Java (legacy)
  • Frontend: Svelte, TypeScript, JavaScript

Key Responsibilities

  • Collaborate with development teams and product managers to ideate and deliver software solutions
  • Assist in designing client-side and server-side architecture
  • Contribute to building intuitive and visually appealing user interfaces
  • Support database design and application development
  • Help develop and maintain APIs
  • Participate in testing to ensure performance, scalability, and responsiveness
  • Assist in troubleshooting, debugging, and enhancing existing systems
  • Support security and data-protection initiatives
  • Contribute to mobile-responsive feature development
  • Help maintain technical documentation

Candidate Requirements

Education

  • B.Tech / BE in Computer Science, Statistics, or a related field

Location

  • Bangalore

Role-Based Skills

  • Exposure to web application development
  • Familiarity with common technology stacks
  • Basic knowledge of front-end technologies such as HTML, CSS, JavaScript, XML, and jQuery
  • Working understanding of back-end languages such as Java, Python, or PHP
  • Familiarity with JavaScript frameworks/libraries like Angular, React, Svelte, or Node.js
  • Awareness of databases such as PostgreSQL, MySQL, or MongoDB
  • Basic understanding of web servers (e.g., Apache) and UI/UX principles

Behavioral Skills

  • Strong communication and teamwork abilities
  • High attention to detail
  • Good organizational skills
  • Analytical and problem-solving mindset


enParadigm

Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Up to ₹16L / yr (varies)
Java
Python
NodeJS (Node.js)
Go Programming (Golang)
PHP

We are looking for a Full Stack Developer to build scalable software solutions and contribute across the entire software development lifecycle—from conception to deployment.

You will work closely with cross-functional teams and should be comfortable with both front-end and back-end technologies, modern frameworks, and third-party libraries. If you enjoy building visually appealing, functional applications and thrive in Agile environments, we’d love to connect.


Current Technologies Used

  • Backend: FastAPI (active), PHP (legacy), Java (legacy)
  • Frontend: Svelte, TypeScript, JavaScript

Experience with Python and PHP is a plus, but not mandatory.


Role Responsibilities

  • Collaborate with development teams and product managers to ideate software solutions
  • Design client-side and server-side architecture
  • Build visually appealing front-end applications
  • Develop and manage efficient databases and applications
  • Write effective and scalable APIs
  • Test software for responsiveness and performance
  • Troubleshoot, debug, and upgrade systems
  • Implement security and data-protection measures
  • Build mobile-responsive features and applications
  • Create and maintain technical documentation

Candidate Requirements:


Education

  • B.Tech / BE in Computer Science, Statistics, or a relevant field

Experience

  • 2–4 years as a Full Stack Developer or in a similar role

Location

  • Bangalore (Hybrid)

Skill Set – Role Based

  • Experience building web applications
  • Familiarity with common technology stacks
  • Knowledge of front-end languages and libraries:
        • HTML, CSS, JavaScript, XML, jQuery
  • Knowledge of back-end languages and frameworks:
        • Java, Python, PHP
        • Angular, React, Svelte, Node.js
  • Familiarity with:
        • Databases: PostgreSQL, MySQL, MongoDB
        • Web servers: Apache
        • UI/UX principles

Skill Set – Behavioural

  • Excellent communication and teamwork skills
  • Strong attention to detail
  • Good organizational skills
  • Analytical mindset


enParadigm

Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Up to ₹8L / yr (varies)
Java
Python
Selenium WebDriver
Cypress
Playwright

Job Description:


Test Design & Execution

Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.


Automation Development

Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.
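An automated test script of the kind described above typically pairs a page object with plain test functions. A minimal, browser-free sketch; `FakeLoginPage` is a stub standing in for Selenium WebDriver, Cypress, or Playwright calls against a real browser:

```python
class FakeLoginPage:
    """Stub page object; a real suite would back these methods with
    browser automation calls (find element, type, click)."""

    def __init__(self):
        self.logged_in = False

    def login(self, user: str, password: str) -> bool:
        # Pretend credential check in place of real form interaction
        self.logged_in = (user == "qa" and password == "secret")
        return self.logged_in

def test_valid_credentials():
    assert FakeLoginPage().login("qa", "secret")

def test_invalid_credentials():
    assert not FakeLoginPage().login("qa", "wrong")

# pytest would discover the test_* functions; run them directly here
test_valid_credentials()
test_invalid_credentials()
print("all login tests passed")
```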


Defect Management

Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.


API & Backend Testing

Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.
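
As a hedged sketch of the backend-validation idea, the comparison step can be reduced to a pure-Python reconciliation helper (field names are hypothetical; in a real suite the two sides would come from a Postman response and a SQL query):

```python
def validate_api_against_db(api_records, db_rows, key="id"):
    """Compare API payload records with database rows keyed by `key`.

    Returns ids missing on either side plus ids whose field values
    differ, so a test can assert the whole report is empty.
    """
    api_by_key = {r[key]: r for r in api_records}
    db_by_key = {r[key]: r for r in db_rows}
    mismatched = sorted(
        k for k in set(api_by_key) & set(db_by_key)
        if api_by_key[k] != db_by_key[k]
    )
    return {
        "missing_in_db": sorted(set(api_by_key) - set(db_by_key)),
        "missing_in_api": sorted(set(db_by_key) - set(api_by_key)),
        "mismatched": mismatched,
    }
```

A regression suite would call this after every release candidate and fail the build on any non-empty bucket.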


Collaboration

Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.


CI/CD Integration

Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.


Required Skills & Experience

  • Minimum 2+ years of experience in Software Quality Assurance or Automation Testing.
  • Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
  • Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
  • Strong experience in functional, regression, integration, and UI testing.
  • Solid understanding of SQL for data validation and backend testing.
  • Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.


Desirable Skills

  • Experience in mobile application testing (Android/iOS).
  • Exposure to performance testing tools such as JMeter.
  • Experience working with cloud platforms like AWS or Azure.


BigThinkCode Technologies
Divya Mohandass
Posted by Divya Mohandass
Chennai
2 - 5 yrs
₹7L - ₹16L / yr
Python
Django
Flask

Responsibilities

  • Build and enhance backend features as part of the tech team.
  • Take ownership of features end-to-end in a fast-paced product/startup environment.
  • Collaborate with managers, designers, and engineers to deliver user-facing functionality.
  • Design and implement scalable REST APIs and supporting backend systems.
  • Write clean, reusable, well-tested code and contribute to internal libraries.
  • Participate in requirement discussions and translate business needs into technical tasks.
  • Support the technical roadmap through architectural input and continuous improvement.



Requirements


  • Experience: 2 - 5 years.
  • Strong understanding of Algorithms, Data Structures, and OOP principles.
  • Experience integrating with third-party systems (payment/SMS APIs, mapping services, etc.).
  • Proficiency in Python and experience with at least one framework (Flask / Django / FastAPI).
  • Hands-on experience with design patterns, debugging, and unit testing (pytest/unittest).
  • Working knowledge of relational or NoSQL databases and ORMs (SQLAlchemy / Django ORM).
  • Familiarity with asynchronous programming (async/await, FastAPI async).
  • Experience with caching mechanisms (Redis).
  • Ability to perform code reviews and maintain code quality.
  • Exposure to cloud platforms (AWS/Azure/GCP) and containerization (Docker).
  • Experience with CI/CD pipelines.
  • Basic understanding of message brokers (RabbitMQ / Kafka / Redis streams).
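
To illustrate two of the items above in one place (async/await and Redis-style caching), here is a minimal cache-aside sketch; the TTLCache class is a hypothetical in-process stand-in for Redis:

```python
import asyncio
import time

class TTLCache:
    """Tiny in-process stand-in for a Redis-style cache with TTLs.

    Illustrative only: production code would call redis-py
    (e.g. client.set(key, value, ex=ttl)) behind the same interface.
    """
    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires_at = self._store.get(key, (None, 0.0))
        if time.monotonic() < expires_at:
            return value
        self._store.pop(key, None)  # expired or absent
        return None

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.monotonic() + ttl)

cache = TTLCache()

async def fetch_user(user_id):
    """Simulated slow backend call (a database or third-party API)."""
    await asyncio.sleep(0.01)
    return {"id": user_id, "name": f"user-{user_id}"}

async def get_user(user_id):
    """Cache-aside pattern: try the cache first, fall back to the source."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    user = await fetch_user(user_id)
    cache.set(key, user, ttl=60)
    return user

async def main():
    first = await get_user(1)   # miss: hits the backend
    second = await get_user(1)  # hit: served from the cache
    return first, second

first, second = asyncio.run(main())
```

The same get/set interface maps directly onto Redis, which is why the cache-aside pattern survives the swap from an in-process dict to a shared cache.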


Mid Size Product Engineering Services Company

Mid Size Product Engineering Services Company

Agency job
via Vidpro Consultancy Services by Vidyadhar Reddy
Remote, Bengaluru (Bangalore), Chennai, Hyderabad
20 - 26 yrs
₹65L - ₹120L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript

This role will report to the Chief Technology Officer


You Will Be Responsible For


* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.

* Leading a team in building a high-performing and scalable SaaS product.

* Conducting code reviews to maintain code quality and follow best practices

* Developing DevOps practices that promote automation, including asset creation, enterprise strategy definition, and team training

* Developing and building microservices leveraging cloud services

* Working on application security aspects

* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.

* Creating a culture of innovation that enables the continued growth of individuals and the company

* Working closely with Product and Business teams to build winning solutions

* Leading talent management, including hiring, developing, and retaining a world-class team


Ideal Profile


* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS / fintech platform.

* Proficiency in MERN / Java / Full Stack.

* You have led a team in optimizing the performance and scalability of a product

* You have extensive experience with DevOps environments and CI/CD practices and can train teams.

* You're a hands-on leader, visionary, and problem solver with a passion for excellence.

* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.


What's on Offer?


* Exciting opportunity to drive the Engineering efforts of a reputed organisation

* Work alongside & learn from best in class talent

* Competitive compensation + ESOPs

ARDEM Incorporated
Remote only
8 - 12 yrs
₹9L - ₹11L / yr
Python
.NET
JavaScript
Node.js
SQL

Senior Project Owner / Project Manager Technology


Department - Technology / Software Development

Work Mode - Work From Home (WFH), Full Time

Experience - Minimum 10 Years (Development Background)

Location - Tier-1 Cities Only (Mumbai, Delhi, Bengaluru, Hyderabad, Chennai, Pune, Kolkata)

Time Zone - Candidate should be comfortable working in US time zone overlap and attending client calls accordingly.


ABOUT ARDEM

ARDEM Incorporated is a leading Business Process Outsourcing (BPO) and Automation company serving US-based clients across diverse industries. Our Technology Team builds and maintains in-house applications that power data processing pipelines, automation workflows, internal platforms, and domain-specific training modules, all engineered to deliver operational excellence at scale. To our clients, we provide cloud-based platforms to assist in their day-to-day business analytics. Our cloud services focus on finance, logistics, and utility management.


ROLE SUMMARY

We are looking for a seasoned Senior Project Owner / Project Manager with a strong development foundation to lead our technology initiatives. This role bridges client management and technical execution: you will own end-to-end delivery of multiple concurrent projects while supporting a high-performing remote team.


KEY RESPONSIBILITIES

Project & Delivery Management

  • Own and manage multiple concurrent technology projects from initiation to production release
  • Define project scope, timelines, milestones, and resource allocation plans
  • Distribute tasks effectively across a team of developers, QA, and support engineers
  • Track assigned work daily, follow up on progress, and proactively remove blockers
  • Ensure all projects meet deadlines and quality benchmarks without compromise
  • Participate actively in production activities and take full accountability for live deployments


US Client Management

  • Serve as the Technology single point of contact for all assigned US clients
  • Attend and lead client calls focused on an ARDEM Technical Solution, which may include discussions related to future or existing clients (US time zone overlap required)
  • Resolve client queries, manage escalations, and ensure high client satisfaction
  • Showcase company-developed applications and software demos confidently to clients
  • Translate complex client requirements into clear technical deliverables for the team


Team Leadership

  • Lead, mentor, and performance-manage a distributed remote team of technical members
  • Foster accountability, ownership, and a high-delivery culture within the team
  • Conduct sprint planning, stand-ups, retrospectives, and performance reviews
  • Identify skill gaps and work with HR/training teams to bridge them


Process & Operations

  • Deeply understand ARDEM's internal processes and align project execution accordingly
  • Ensure development standards and best practices are followed across all projects
  • Manage crisis situations with composure, identify root causes and drive swift resolution
  • Coordinate with cross-functional teams including HR, Operations, Training, and QA
  • Maintain project documentation, status reports, and risk registers


REQUIRED EXPERIENCE

  • 10+ years of total experience in software development and project management
  • 5–7 years of hands-on coding experience in one or more technologies listed below
  • 2–3 years in a team management or tech lead role overseeing 5+ members
  • Proven experience managing multiple simultaneous projects in a remote/WFH environment
  • Prior experience working with US-based clients, with a strong understanding of US work culture and expectations


TECHNICAL SKILLS

  • Python: scripting, automation, data processing, backend services
  • JavaScript / Node.js: server-side development, REST APIs, async workflows
  • .NET Core: enterprise application development and service integration
  • SQL Databases: query optimization, schema design, stored procedures
  • Familiarity with CI/CD pipelines, Git workflows, and deployment processes
  • Ability to review code, understand architectural decisions, and guide the team technically
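
As a small, hedged example of the Python scripting and data-processing work listed above (the columns and vendor names are hypothetical; real pipelines would read from files or queues rather than an inline string):

```python
import csv
import io

def summarize_invoices(csv_text):
    """Toy data-processing step of the kind an automation pipeline runs:
    parse CSV input and total invoice amounts per vendor."""
    totals = {}
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        vendor = row["vendor"].strip()
        totals[vendor] = totals.get(vendor, 0.0) + float(row["amount"])
    return totals

sample = """vendor,amount
Acme,120.50
Acme,79.50
Globex,300.00
"""
totals = summarize_invoices(sample)
```

The same shape (parse, normalise, aggregate) recurs across most scripted data-processing tasks, which is why reviewing such code quickly is part of the role.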


SKILLS & COMPETENCIES

  • Exceptional verbal and written communication skills in English; client-facing confidence is a must
  • Strong crisis management and conflict resolution ability under tight deadlines
  • Highly organized with a structured approach to planning, prioritization, and execution
  • Self-driven and accountable, capable of operating independently in a remote environment
  • Strong presentation skills; able to demo software to non-technical stakeholders
  • Empathetic leadership style with the ability to motivate and align diverse team members


QUALIFICATIONS

  • Bachelor's or Master's degree in Computer Science
  • PMP Certification: Preferred (candidates without PMP must demonstrate equivalent project management rigor)
  • Agile / Scrum certifications (CSM, PMI-ACP) are an added advantage


LOCATION PREFERENCE

  • Candidates must be based in a Tier-1 city: Mumbai, Delhi NCR, Bengaluru, Hyderabad, Chennai, Pune, or Kolkata
  • This is a full-time Work From Home role: reliable internet, a dedicated workspace, and availability during US business hours are mandatory
Mercari, Inc

Ashwin S
Posted by Ashwin S
Bengaluru (Bangalore)
6 - 9 yrs
Best in industry
Machine Learning (ML)
PyTorch
TensorFlow
NumPy
Python

Introduction

About Us:


Mercari is a Japan-based C2C marketplace company founded in 2013 with the mission to “Create value in a global marketplace where anyone can buy & sell.” From being the first tech unicorn from Japan before its IPO in 2018, we have come a long way toward becoming a global player, and we continuously and diligently work on our transformation journey with a strong focus on our mission.

Since its inception, Mercari Group has worked to grow its services, investing in both our people and technology. Over time Mercari has expanded from being the top player in the C2C marketplace in Japan to new geographies like the U.S. We have also successfully launched new businesses such as Merpay, which is a mobile payment service platform with a vision to create a society where anyone can realize their dreams through a new ecosystem centered not only on payment service but also on credit. Today, Mercari Group is made up of multiple subsidiary businesses including logistics, B2C platform, blockchain, and sports team management.


For our services to be utilized by people worldwide; however, there is still a mountain of work ahead of us. This endeavor naturally requires the capability of the best talent and minds, and that is exactly the reason for us to launch the India Center of Excellence. With your help, we will continue to take on the world stage and strive to grow into a successful global tech company.


Our Culture:

To achieve our mission at Mercari, our organization and each of our employees share the same values and perspectives. Our individual guidelines for action are defined by our four values: Go Bold, All for One, Be a Pro and Move Fast. Our organization is also shaped by our four foundations: Sustainability, Diversity & Inclusion, Trust & Openness, and Well-being for Performance. Regardless of how big Mercari gets, the culture will remain essential to achieving our mission and something we want to preserve throughout our organization. We invite you to read the Mercari Culture Doc which summarizes the behaviors and mindset shared by Mercari and its employees. We continue to build an environment where all of our members of diverse backgrounds are accepted and recognized, and where they can thrive while holding dear to Mercari’s culture.


Work Responsibilities

  • Machine learning engineers working in the Recommendation domain develop the functions and services of the marketplace app Mercari through the development and maintenance of machine learning systems like Recommender systems while leveraging necessary infrastructure and companywide platform tools. 
  • Mercari is actively applying advanced machine learning technology to provide a more convenient, safer, and more enjoyable marketplace. Machine learning engineers use the cloud and Kubernetes to operate and improve machine learning systems.


Bold Challenges

  • We are looking for people who are interested in our services, mission, and values, and want to work where engineers can go bold, use the latest technology, make autonomous decisions, and take on challenges at a rapid pace.
  • Develop and optimize machine learning algorithms and models to enhance recommendation system to improve discovery experience of users
  • Collaborate with cross-functional teams and product stakeholders to gather requirements, design solutions, and implement features that improve user engagement
  • Conduct data analysis and experimentation with large-scale data sets to identify patterns, trends, and insights that drive the refinement of recommendation algorithms
  • Utilize machine learning frameworks and libraries to deploy scalable and efficient recommendation solutions.
  • Monitor system performance and conduct A/B testing to evaluate the effectiveness of features.
  • Continuously research and stay updated on advancements in AI/machine learning techniques and recommend innovative approaches to enhance recommendation capabilities.
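
For flavour, the core idea behind item-to-item recommendation can be sketched in a few lines of plain Python. This is a toy with made-up items and ratings, not Mercari's production approach, which relies on learned models and large-scale serving infrastructure:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts of user -> rating)."""
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(interactions, user, k=2):
    """Item-to-item collaborative filtering in miniature.

    `interactions` maps item -> {user: rating}. Unseen items are scored
    by their similarity to items the user already interacted with.
    """
    seen = [item for item, users in interactions.items() if user in users]
    scores = {
        item: sum(cosine(users, interactions[s]) for s in seen)
        for item, users in interactions.items() if user not in users
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

interactions = {
    "camera": {"alice": 5, "bob": 4},
    "lens":   {"alice": 4, "bob": 5, "carol": 2},
    "tripod": {"bob": 4, "carol": 5},
    "novel":  {"carol": 5},
}
top = recommend(interactions, "alice", k=1)
```

At marketplace scale the same intuition is realised with learned embeddings and approximate nearest-neighbour search rather than pairwise cosine over raw co-occurrence.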


Minimum Requirements:

  • 5–9 years of professional experience in end-to-end development of large-scale ML systems in production
  • Strong experience demonstrating development and delivery of end-to-end machine learning solutions starting from experimentation to deploying models, including backend engineering and MLOps, in large scale production systems.
  • Experience using common machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, NumPy, pandas)
  • Deep understanding of machine learning and software engineering fundamentals
  • Basic knowledge and skills related to monitoring system, logging, and common operations in production environment
  • Communication skills to carry out projects in collaboration with multiple teams and stakeholders


Preferred skills:

  • Experience developing Recommender systems utilizing large-scale data sets
  • Basic knowledge of enterprise search systems and related stacks (e.g. ELK)
  • Functional development and bug fixing skills necessary to improve system performance and reliability
  • Experience with technology such as Docker and Kubernetes
  • Experience with cloud platforms (AWS, GCP, Microsoft Azure, etc.)
  • Microservice development and operation experience with Docker and Kubernetes
  • Utilizing deep learning models/LLMs in production
  • Experience in publications at top-tier peer-reviewed conferences or journals


Employment Status

Full-time

Office

Bangalore

Hybrid workstyle

  • We believe in high performance and professionalism. We work from office for 2 days/week and work from home 3 days/week
  • To build a strong & highly-engaged organization in India, we highly encourage everyone to work from our Bangalore office, especially during the initial office setup phase
  • We will continue to review and update the policy to address future organizational needs

Work Hours

  • Full flextime (no core time)

*Flexible to choose working hours other than team common meetings

Media


Owned Media

  • Mercari Engineering Portal
  • AI at Mercari portal
  • Mercan - Introduces the people that make Mercari
  • Mercari US Blog

Related Articles

  • Development Platforms and Platformers: On Rising to the Global Standard Ken Wakasa, Mercari CTO | mercan
  • “I'm Not a Talented Engineer” Insists the Member-Turned-Manager Revamping Our Internal CS Tool | mercan
  • Personalize to globalize: How Mercari is reshaping their app, their company, and the world | mercan
  • The Providers of the Safe and Secure Mercari Experience: The TnS Team, Introduced by Its Members! | mercan
Searce Inc

Srishti Dani
Posted by Srishti Dani
Mumbai, Pune, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data Migration
Data Warehousing
ETL
SQL
Google Cloud Platform (GCP)

Lead Data Engineer


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

What you will wake up to solve.

  • Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
  • Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
  • Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
  • Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
  • Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
  • Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
  • Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
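
A batch ETL pipeline of the kind described above can be sketched, in miniature and with hypothetical fields, as three plain functions; a Spark or Dataflow job expresses the same stages over distributed partitions:

```python
def extract():
    """Extract: in production this reads from SaaS APIs, RDBMS dumps,
    or object storage; here, an in-memory stand-in."""
    return [
        {"order_id": "A1", "amount": "120.00", "country": "in"},
        {"order_id": "A2", "amount": "80.00", "country": "IN"},
        {"order_id": "A3", "amount": "bad", "country": "us"},
    ]

def transform(rows):
    """Transform: normalise fields and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would route this to a dead-letter table
        clean.append({"order_id": row["order_id"],
                      "amount": amount,
                      "country": row["country"].upper()})
    return clean

def load(rows, warehouse):
    """Load: append to the target table (BigQuery/Snowflake/Redshift in
    practice; a dict-backed stand-in here)."""
    warehouse.setdefault("orders", []).extend(rows)
    return len(rows)

warehouse = {}
loaded = load(transform(extract()), warehouse)
```

The ELT variant simply swaps the last two stages: land the raw rows first, then run the transformation inside the warehouse.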


Welcome to Searce


The AI-Native tech consultancy that's rewriting the rules.

Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads. 


Functional Skills 

the solver personas.

  • The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
  • The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
  • The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
  • The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
  • The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.


Experience & Relevance 

  • Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
  • Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
  • AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
  • Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
  • Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.


Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.



Risosu Consulting LLP
Remote only
2 - 4 yrs
₹6L - ₹9L / yr
Data Analytics
Artificial Intelligence (AI)
Machine Learning (ML)
Python
API

Job Title: Data Analyst (AI/ML Exposure)

Experience: 1–3 Years

Location: Mumbai

Job Description:

We are looking for a Data Analyst with strong experience in data handling, analysis, and visualization, along with exposure to AI/ML concepts. The role involves working with structured and unstructured data (SQL, CSV, JSON), building data pipelines, performing EDA, and deriving actionable insights. Candidates should have hands-on experience with Python (Pandas, NumPy), data visualization tools, and basic knowledge of NLP/LLMs. Exposure to APIs, data-driven applications, and client interaction will be an added advantage.
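
As a hedged illustration of the EDA step, a first-pass numeric summary can be computed with the standard library alone (in practice this is a single pandas describe() call; the field name below is hypothetical):

```python
import statistics

def eda_summary(records, field):
    """First-pass EDA of a numeric field: count, missing values,
    mean, median, and sample standard deviation."""
    values = [r[field] for r in records if r.get(field) is not None]
    return {
        "count": len(values),
        "missing": len(records) - len(values),
        "mean": statistics.fmean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

records = [
    {"order_value": 120.0}, {"order_value": 80.0},
    {"order_value": 100.0}, {"order_value": None},
]
summary = eda_summary(records, "order_value")
```

Counting missing values first is the habit that matters: most downstream modelling decisions hinge on how much of each column is actually populated.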

Skills Required: Python, SQL, Data Analysis, EDA, Visualization, APIs

Apply: Share your resume or connect with us.


PGAGI
Javeriya Shaik
Posted by Javeriya Shaik
Remote only
0 - 1 yrs
₹1 - ₹3 / mo
Python
Java
TensorFlow
Keras
PyTorch

Job Title: AI Architecture Intern

Company: PGAGI Consultancy Pvt. Ltd.

Location: Remote

Employment Type: Internship


Position Overview

We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.


Duration: 6 months


Key Responsibilities:

  • AI System Architecture Design: Collaborate with the technical team to design robust, scalable, and high-performance AI system architectures aligned with client requirements.
  • Client-Focused Solutions: Analyze and interpret client needs to ensure architectural solutions meet expectations while introducing innovation and efficiency.
  • Methodology Development: Assist in the formulation and implementation of best practices, methodologies, and frameworks for sustainable AI system development.
  • Technology Stack Selection: Support the evaluation and selection of appropriate tools, technologies, and frameworks tailored to project objectives and future scalability.
  • Team Collaboration & Learning: Work alongside experienced AI professionals, contributing to projects while enhancing your knowledge through hands-on involvement.


Requirements:

  • Strong understanding of AI concepts, machine learning algorithms, and data structures.
  • Familiarity with AI development frameworks (e.g., TensorFlow, PyTorch, Keras).
  • Proficiency in programming languages such as Python, Java, or C++.
  • Demonstrated interest in system architecture, design thinking, and scalable solutions.
  • Up-to-date knowledge of AI trends, tools, and technologies.
  • Ability to work independently and collaboratively in a remote team environment


Perks:

- Hands-on experience with real AI projects.

- Mentoring from industry experts.

- A collaborative, innovative and flexible work environment

Compensation:

- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.

- Stipend: Base is INR 8,000/- and can increase up to INR 20,000/- depending on performance metrics.


After completion of the internship period, there is a chance to get a full-time opportunity as an AI/ML engineer (Up to 12 LPA).


Preferred Experience:

  • Prior experience in roles such as AI Solution Architect, ML Architect, Data Science Architect, or AI/ML intern.
  • Exposure to AI-driven startups or fast-paced technology environments.
  • Proven ability to operate in dynamic roles requiring agility, adaptability, and initiative.
Appiness Interactive
Chennai
6 - 12 yrs
₹10L - ₹24L / yr
Python
Power BI
SQL
Databricks
Data Warehouse (DWH)

Overview


We are looking for a highly skilled Lead Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python and Power BI.


Skills: Python, Databricks, SQL


Key Responsibilities:


  • Design, develop, and maintain scalable data warehouse solutions.
  • Write and optimize complex SQL queries for data extraction, transformation, and reporting.
  • Develop and automate data pipelines using Python.
  • Work with AWS cloud services for data storage, processing, and analytics.
  • Collaborate with cross-functional teams to provide data-driven insights and solutions.
  • Ensure data integrity, security, and performance optimization.
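
The SQL side of the role can be illustrated with the stdlib sqlite3 module standing in for a production warehouse (table and column names are hypothetical):

```python
import sqlite3

# In-memory database as a self-contained stand-in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    CREATE INDEX idx_sales_region ON sales(region);
    INSERT INTO sales (region, amount) VALUES
        ('APAC', 120.0), ('APAC', 80.0), ('EMEA', 300.0), ('EMEA', 50.0);
""")

# An aggregation of the kind a reporting layer runs; on large tables the
# index on `region` can let the engine satisfy the GROUP BY without an
# extra sort.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
```

The same GROUP BY / HAVING / ORDER BY shape carries over to Databricks SQL; only the scale and the optimizer change.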

 


Required Skills & Experience:


  • Must have a minimum of 6-10 years of experience in Data Warehousing & Analytics.
  • Must have strong experience in Databricks
  • Strong proficiency in writing complex SQL queries with deep understanding of query optimization, stored procedures, and indexing.
  • Hands-on experience with Python for data processing and automation.
  • Experience working with AWS cloud services.
  • Hands-on experience with reporting tools like Power BI or Tableau.
  • Ability to work independently and collaborate with teams across different time zones.


CAW.Tech

Archita Srivastava
Posted by Archita Srivastava
Hyderabad
2 - 4 yrs
Best in industry
LangGraph
LangChain
Python
CrewAI
Retrieval Augmented Generation (RAG)

About the Role

At CAW Studios, we are building the future with agentic AI systems, RAG pipelines, and intelligent automation. From autonomous AI agents at KnackLabs to developer productivity tools at CodeKnack, we ship production-ready AI products that solve real problems for enterprises and startups alike.

This is your chance to work on cutting-edge GenAI, LLM fine-tuning, and agent frameworks—and see your code power products used in the real world. If you’re excited about experimenting, shipping fast, and solving complex AI challenges hands-on, you’ll love it here.


Who should apply

Engineer with 2 to 4 years of full-time experience building high-scale software systems, with a proven track record of deploying complex Generative AI products to production.

Role Overview

We are hiring an AI/ML Engineer II (SE2) to own the architectural implementation and deployment of production-grade agentic AI systems. This role requires a hybrid of traditional engineering rigour (OOPS, SOLID, high-concurrency) and advanced AI specialization to build the next generation of intelligent tools.


Responsibilities

● Independently design modular and maintainable multi-agent AI systems aligned with SOLID principles

● Build high-concurrency, async FastAPI backends for complex AI workloads with enterprise stability

● Architect sophisticated agentic workflows using LangGraph with a focus on state persistence and error-recovery

● Design and optimize RAG pipelines involving advanced chunking, hybrid search, and re-ranking

● Take ownership of containerization and cloud deployment for observable, cost-efficient AI services

● Collaborate on reusable AI components and internal frameworks to enhance team engineering velocity
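
As a deliberately naive sketch of the RAG retrieval stage above (chunking plus scoring), with keyword overlap standing in for hybrid BM25-plus-vector search and no re-ranking:

```python
def chunk(text, size=80, overlap=20):
    """Split text into overlapping character chunks: the simplest chunking
    strategy; production pipelines chunk on tokens or document structure."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def score(query, passage):
    """Keyword-overlap score standing in for hybrid (BM25 + vector) search."""
    q = set(query.lower().split())
    return len(q & set(passage.lower().split())) / len(q) if q else 0.0

def retrieve(query, docs, k=1):
    """Rank all chunks against the query; a production pipeline would
    re-rank the top hits with a cross-encoder before prompting the LLM."""
    chunks = [c for d in docs for c in chunk(d)]
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

docs = [
    "LangGraph agents orchestrate retrieval augmented generation pipelines",
    "the cafeteria menu lists coffee and tea",
]
top = retrieve("retrieval augmented generation", docs, k=1)
```

The production concerns the role lists (advanced chunking, hybrid search, re-ranking) are all refinements of exactly these three functions.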


Expectations

● Deep obsession with automation, DevOps, OOPS, and SOLID principles

● Advanced experience deploying RAG or agent-based systems with LangGraph orchestration

● Expert-level mastery of async Python, system thinking, and building scalable backends

● High ownership and a "production-first" mindset for end-to-end system reliability

● Hands-on experience across multiple AI modalities (Vision, Audio, Text) and their architectures

OIP Insurtech

Katarina Vasic
Posted by Katarina Vasic
Remote only
3 - 8 yrs
₹11L - ₹20L / yr
Python
ETL
API
SQL Server

Join our team as a Data Engineer (ETL & Migration) and be a key contributor in our dynamic, technology-driven environment. Your expertise in building complex data pipelines, migrating and transforming data between systems, and collaborating with stakeholders, project managers, and data and business analysts will drive impactful solutions and help tackle exciting data challenges.


What We’re Looking For:

  • Minimum 3 years of experience as a Data Engineer, ETL Engineer, or Data Migration Engineer
  • Expertise in database design and management, including SQL databases such as SQL Server
  • Strong command in Python or similar programming languages for building custom pipelines, data cleaning, transformations, and automation
  • Hands-on experience designing and implementing ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes for data integration
  • Proficiency in Data validation and Reconciliation
  • Hands-on experience working with APIs (Python, Postman)
  • Strong knowledge of Data modeling principles and techniques
  • Experience with Version control systems (Git)
  • Experience with Cloud platforms (AWS, Azure, or Google Cloud)
  • Familiarity with PowerShell or Bash scripting is a plus


What You’ll Be Doing:

  • Build robust ETL data pipelines using SQL, APIs, and custom import logic
  • Create data mappings between source and target systems within the transformation layer
  • Develop testing processes to prevent data loss or data corruption
  • Work with both relational and non-relational data to achieve full data mapping
  • Collaborate with Business and Data analysts to ensure data quality and proper business logic
  • Work closely with stakeholders to ensure on-time delivery
  • Collaborate with an agile delivery team by working on backlog items and priorities
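The validation and reconciliation work above usually starts with a cheap post-load check: do row counts and a simple checksum agree between source and target? A sketch using stdlib sqlite3 to stand in for the real databases (the table and `amount` column are illustrative):

```python
import sqlite3

def reconcile(source: sqlite3.Connection, target: sqlite3.Connection, table: str) -> dict:
    """Compare row counts and a column checksum between source and target.

    Equal counts and sums suggest (but do not prove) the migration lost
    or corrupted nothing; mismatches point at where to dig deeper.
    """
    checks = {}
    for name, conn in (("source", source), ("target", target)):
        count, total = conn.execute(
            f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
        ).fetchone()
        checks[name] = {"rows": count, "amount_sum": total}
    checks["match"] = checks["source"] == checks["target"]
    return checks
```

Real migrations layer finer checks on top (per-partition counts, hash comparisons, sampled row diffs), but this shape is the first gate.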



Pipaltree AI

at Pipaltree AI

Mudit Tanwani
Posted by Mudit Tanwani
Hyderabad
3 - 7 yrs
₹10L - ₹20L / yr
Python
Large Language Models (LLM)
Databases
Microservices

Key Responsibilities:

  • Work with distributed systems and implement asynchronous programming patterns
  • Design and develop scalable backend applications using Python
  • Build and integrate applications leveraging LLMs or traditional Machine Learning techniques
  • Develop and maintain microservices-based architectures
  • Work with databases and caching systems to optimize application performance
  • Participate in code reviews and maintain high code quality standards
  • Write clean, maintainable, and well-documented code following best practices
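The asynchronous patterns mentioned above mostly reduce to fanning out independent I/O (LLM calls, cache lookups, DB queries) concurrently, so total latency tracks the slowest call rather than the sum. A minimal asyncio sketch, with a sleep standing in for a real service call:

```python
import asyncio

async def call_service(name: str, delay: float) -> str:
    """Stand-in for an I/O-bound call (LLM request, cache lookup, DB query)."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def fan_out(requests):
    """Run independent calls concurrently; gather preserves input order."""
    return await asyncio.gather(*(call_service(n, d) for n, d in requests))

results = asyncio.run(fan_out([("llm", 0.02), ("cache", 0.01), ("db", 0.015)]))
```

A production service would add timeouts, bounded concurrency (semaphores), and retries around each call.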

Required Skills:

  • 3+ years of relevant experience
  • Strong understanding of distributed systems and asynchronous programming in Python
  • Experience building scalable applications using LLMs or traditional ML techniques
  • Hands-on experience with databases, caching mechanisms, and microservices architecture
  • Good problem-solving and debugging skills


Searce Inc

at Searce Inc

Jatin Gereja
Posted by Jatin Gereja
Bengaluru (Bangalore), Mumbai, Pune
10 - 18 yrs
Best in industry
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Enterprise Data Warehouse (EDW)
Data modeling
Big Data
+9 more

Director - Data engineering


What are we looking for

real solver?

Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.


Your Responsibilities

what you will wake up to solve.

1. Delivery & Tactical Rigor

  • Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
  • Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
  • Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
  • Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.

2. Strategic Growth & Practice Scaling

  • Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
  • Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
  • Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.

3. Leadership & Unit Management

  • Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
  • Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
  • Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.


Welcome to Searce

The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.

We don’t do traditional.

As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.


Functional Skills

1. Delivery Management & Operational Excellence

  • Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
  • Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
  • SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
  • Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.

2. Architectural Implementation & Technical Oversight

  • Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
  • Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
  • Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
  • DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
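The observability items above (completeness, freshness) ultimately become assertions evaluated inside the pipeline itself. A framework-agnostic sketch of the kind of check an Airflow or Dagster task might run before promoting a batch — thresholds, field names, and the report shape here are illustrative:

```python
from datetime import datetime, timedelta, timezone

def quality_report(rows, required_fields, max_age: timedelta):
    """Evaluate completeness and freshness over a batch of records.

    `rows` are dicts carrying an `ingested_at` timestamp; a record is
    incomplete if any required field is missing/empty, stale if it is
    older than `max_age`.
    """
    now = datetime.now(timezone.utc)
    incomplete = sum(
        1 for r in rows if any(r.get(f) in (None, "") for f in required_fields)
    )
    stale = sum(1 for r in rows if now - r["ingested_at"] > max_age)
    return {
        "rows": len(rows),
        "incomplete": incomplete,
        "stale": stale,
        "passed": incomplete == 0 and stale == 0,
    }
```

Wiring the report into CI/CD (fail the run, page the owner, quarantine the partition) is what turns a check into governance.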

3. Unit Management & Commercial Execution

  • Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
  • Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
  • Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
  • Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.

Tech Superpowers

  • Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
  • End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
  • Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
  • Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
  • AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
  • Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
  • Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
  • Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. Business-first, data-second, outcome focused technology leader.

Experience & Relevance

  • Executive Experience: Minimum 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director -level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
  • Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
  • Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
  • Operational Leadership: Proven expertise in managing and scaling large professional services organizations, demonstrated ability to optimize utilization, resource allocation, and operational expense.
  • Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
  • Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.

Join the ‘real solvers’

ready to futurify?

If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.

Background is in Oil&Gas


Agency job
via First Tek, Inc. by David Ingale
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹30L / yr
Apache Spark
Databricks
Delta Lake
CI/CD
Python
+5 more

Role: Sr. Azure Data Engineer

Experience: 8–10 Years

Work Timings: 1:30 PM – 10:30 PM IST

Location: Bellandur, Bengaluru (Work from Office)

Company: Chevron

Employment Type: 6–12 months contract

 

Role Overview

We are seeking an experienced Senior Data Engineer to design and deliver scalable cloud data solutions on Azure. The ideal candidate will have strong expertise in Databricks, PySpark, and modern data architectures, with exposure to energy domain standards like OSDU.

Key Responsibilities

  • Architect and design robust Azure-based data solutions using Databricks, ADLS, and PaaS services
  • Define and implement scalable data Lakehouse architectures aligned with OSDU standards
  • Build and manage end-to-end data pipelines for batch and real-time processing using PySpark
  • Establish data governance frameworks including metadata, lineage, security, and access control
  • Implement DevOps best practices (CI/CD, Azure Pipelines, GitHub, automated deployments)
  • Collaborate with stakeholders to translate business needs into technical solutions
  • Develop and maintain architecture documentation, solution patterns, and standards
  • Provide technical leadership and mentorship to engineering teams
  • Optimize solutions for performance, cost, reliability, and security
  • Ensure alignment with enterprise architecture and compliance standards
  • Drive adoption of modular and reusable cloud data components
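Many of the batch pipelines above follow one core pattern: the watermark-based incremental load. A framework-agnostic sketch of the idea a PySpark/Delta pipeline would implement at scale — in practice `state` would live in a Delta table or pipeline variable, and the names here are illustrative:

```python
def incremental_load(source_rows, state, key="updated_at"):
    """Pull only rows newer than the last high-water mark, then advance it.

    `state` persists between runs; replaying the same source twice
    yields an empty second batch, which is what makes reruns safe.
    """
    watermark = state.get("watermark", 0)
    batch = [r for r in source_rows if r[key] > watermark]
    if batch:
        state["watermark"] = max(r[key] for r in batch)
    return batch

state = {}
rows = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}]
first = incremental_load(rows, state)   # picks up both rows
second = incremental_load(rows, state)  # nothing new
```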

Required Skills & Qualifications

Core Technical Skills

  • Azure Databricks, Apache Spark (PySpark), Delta Lake, Unity Catalog
  • Azure Data Lake Storage (ADLS), Azure Data Factory, Synapse Analytics
  • Strong experience in Python-based data engineering
  • Data pipeline development (batch + real-time)

Architecture & Advanced Skills

  • Data Lakehouse architecture and distributed systems
  • Microservices, APIs, and integration frameworks
  • OSDU (Open Subsurface Data Universe) or similar energy data models

DevOps & Tools

  • CI/CD tools: Azure Pipelines, GitHub Actions
  • Infrastructure as Code: Terraform or similar

Other Skills

  • Data governance, security, compliance, and cost optimization
  • Strong analytical and problem-solving skills
  • Excellent communication and stakeholder management


Automate Accounts

at Automate Accounts

Nilesh Rajpal
Posted by Nilesh Rajpal
Remote only
1 - 6 yrs
₹4.5L - ₹15L / yr
Python
NodeJS (Node.js)
Debugging
RESTful APIs
zoho
+3 more

What You'll Do

  • Build and maintain web & backend systems using Python & Node.js
  • Create custom workflows and automations
  • Do code reviews, fix bugs, manage databases
  • Work with teams to understand and deliver solutions
  • Write clean, well-documented code
  • Mentor junior developers


What We Need

  • 2–6 years of software development experience
  • Strong in Python, Node.js & REST APIs
  • Experience with workflow/automation tools
  • Self-driven, good communicator, team player


Perks of This Role

  • Lead your own projects
  • Mentor junior devs
  • Direct access to stakeholders & leadership



Outpilot AI
Remote only
1 - 6 yrs
₹7.2L - ₹12L / yr
n8n
Airtable
Claude Code
Codex
Clay
+9 more

You will own the end-to-end implementation and operation of AI-powered outbound campaigns for our clients. That means taking a client brief, understanding their target market, building the systems that research and engage prospects, and making sure those systems run reliably without hand-holding.


This is not a "connect two Zapier steps and call it automation" kind of role. You will be designing multi-step workflows where AI agents research companies, enrich data through APIs, personalize messaging intelligently, and deliver outputs into client tools. Each campaign is a custom system with moving parts that need to work together cleanly.


What your weeks will look like:


You will onboard new clients, understand their ICP and outreach goals, then build and deploy the technical infrastructure to execute those campaigns. You will monitor live campaigns, troubleshoot when something breaks, and optimize for better results over time. You will hop on video calls with clients when needed, but the bulk of your time is building and maintaining systems that work.


Specifically, you will:


Build and manage complex n8n workflows that pull data from multiple sources, enrich it through APIs and AI, and deliver personalized outputs. Design Airtable bases that structure client data, automate processing, and integrate with external tools. Set up and manage email infrastructure: domains, deliverability, sending sequences. Use AI tools (Claude, GPT) to build research and personalization layers into client workflows. Handle client onboarding, ongoing communication, and technical troubleshooting. Own your campaigns. When something breaks at 2 PM on a Tuesday, you fix it. When a client asks why response rates dropped, you investigate and have an answer.
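Stripped of n8n specifics, the multi-step workflows described above are a pipeline of enrichment steps with per-step error handling. A pure-Python sketch of that shape — the step names, fields, and hard-coded enrichment result are all illustrative stand-ins for real API and AI nodes:

```python
def enrich_company(lead):
    """Stand-in for an API enrichment node (an Apollo/Clay-style lookup)."""
    return {**lead, "industry": "fintech"}

def personalize(lead):
    """Stand-in for an AI personalization node."""
    return {**lead, "opener": f"Saw {lead['company']} is growing in {lead['industry']}"}

def run_workflow(lead, steps):
    """Run steps in order; record failures instead of crashing the campaign."""
    errors = []
    for step in steps:
        try:
            lead = step(lead)
        except Exception as exc:  # a real workflow would also retry and alert
            errors.append((step.__name__, str(exc)))
    return lead, errors

lead, errors = run_workflow({"company": "Acme"}, [enrich_company, personalize])
```

The troubleshooting half of the job is mostly about making that `errors` list observable before the client notices.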


Who This Is For


You are someone who figures things out. You read documentation, test until it works, and do not give up when the first approach fails. You have strong technical intuition even if you are not a traditional developer. You understand how APIs work, how data flows between systems, and how to debug when something is not behaving as expected.


You follow AI developments closely. Not casually. You know the practical performance difference between Claude Opus 4.6 & GPT 5.4 🙃, you have opinions on which tools are overhyped, and you have probably built something with AI that you are proud of, even if it was just for yourself.


You are hungry. Not in a cliché motivational poster way. You genuinely want to get better at what you do, you take ownership of your work, and you do not need someone checking in on you every few hours to make sure you are making progress.


Core skills (non negotiable):


  • n8n: You have built workflows and understand how nodes, data flow, and error handling work.
  • AI tools: Regular, meaningful use of Claude or ChatGPT. You know how to prompt effectively and understand the limitations.
  • Technical aptitude: You pick up new tools fast and figure things out from documentation, not tutorials.
  • English proficiency: Written and spoken. You will be communicating with international clients.


Great to have (you can learn these on the job):


Experience with cold email and outbound systems (Smartlead, Instantly, or similar). Understanding of email deliverability (SPF, DKIM, domain setup). API integration and webhook experience. Data enrichment workflows using tools like Apollo, web scraping, or similar.


Work Setup


Fully remote. Work from anywhere. 5 day week. You manage your own schedule as long as the work gets done and you are available for client calls when needed.


Why This Role Is Different


You are not joining a company to do a traditional tech job that AI is making obsolete; you will be working with AI all day long and deploying its outbound systems for clients. You are building and running AI infrastructure for organizations that most people only see on TV. The problems you solve are genuinely novel. There is no tutorial for most of what we do. You will learn faster here in 3 months than you would in years at most places, because we operate at the intersection of AI, automation, and high-stakes client work.


If you are the kind of person who gets excited about building systems that actually work in the real world, not just demos, this is your role.


How to Apply


If you have a portfolio of n8n workflows, Airtable bases, or any AI projects you have built, include links. We value what you have actually built over what is listed on your resume. Practical proof of work is valued 100x more than just writing cool things in your application.


AVOID WRITING USING AI ANYWHERE IN YOUR APPLICATION. WE WORK WITH AI ALL DAY. ANY APPLICATIONS WRITTEN USING AI WILL NOT BE READ AND WILL BE REMOVED BY OUR AI QUALIFIER AGENT ITSELF 🙂

Srijan Technologies

at Srijan Technologies

Devendra Singh
Posted by Devendra Singh
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹15L - ₹26L / yr
Python
React.js
Generative AI (GenAI)

About US:-

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.


We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.


Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.

 

Experience Range: 4-8 Years

Role: Full Stack Developer


Duties: 

As Full Stack Engineer, you will work in small teams in a highly collaborative way, use the latest technologies and enjoy seeing the direct impact from your work. Our highly skilled system architects and development managers configure software packages and build custom applications, creating the foundation for rapid and cost-effective implementation of systems that maximize value from day one. Our development teams are small, flexible and employ agile methodologies to quickly provide our consultants with the solutions they need. We combine the latest open source technologies together with traditional Enterprise software products. 

 

The Role: 

 

We create both rapid prototypes, usually in 2 to 3 weeks, as well as full-scale applications typically within 2 to 3 months, by working collaboratively and iteratively through design and development to deliver fully functioning web-based and mobile applications that meet business goals. Our Front-End Developers contribute to the architecture across the technology stack, from database to native apps. 


Skills: 

Minimum of 5–9 years of experience, with a proven record of hands-on software development in at least one of the following languages: Java, C#, C/C++, Python, JavaScript, Ruby, plus modern frontend proficiency in React and TypeScript. Demonstrated ownership of delivering end-to-end solutions (from design through production support), with strong proactivity in identifying opportunities, anticipating risks, and driving improvements without waiting for direction. 

Significant experience designing, implementing, and operating Web Services and APIs (REST, SOAP, RPC, RMI) including API monitoring/observability and performance tuning. Solid understanding of network communication protocols (HTTP, TCP/IP, UDP, SMTP, DNS) and distributed system behaviors. 
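As a concrete (if deliberately tiny) illustration of the web-service mechanics above: a dependency-free JSON health endpoint using only the Python standard library — the kind of probe that API monitoring polls. Real services here would sit behind a framework, but the HTTP contract is the same; all names below are illustrative:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal JSON endpoint for liveness checks."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Start the server on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Port 0 asks the OS for any free port; `server.server_address[1]` reports which one was bound.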

Capable of applying best coding practices, design patterns, and evaluating tradeoffs in complex, microservices-based architectures. Well versed in cloud computing (AWS), automated testing, CI/CD, and DevOps tooling; comfortable owning reliability, scalability, and operational excellence. Bonus: hands-on knowledge of Terraform (infrastructure as code). 

Experience with relational data stores (MySQL, SQL Server, Oracle) and non-relational technologies, with strong proficiency in MongoDB (schema design, indexing, performance optimization), plus exposure to Elasticsearch, Cassandra, and related ecosystems. Strong professional experience with frameworks such as Node.js, AngularJS, Spring, Guice, and expertise building mobile, responsive/adaptive applications. 

First-hand understanding of Agile development methodologies, with a commitment to engineering excellence (e.g., DRY, TDD, CI) and pragmatic delivery. 


Non-Technical: First and foremost, passionate about technology, especially AI and emerging/disruptive technologies, and excited about translating innovation into real product impact. Strong command of English (verbal and written), excellent interpersonal skills, and a highly collaborative mindset, able to partner effectively across engineering, product, design, and stakeholders. Sound problem-solving ability to quickly process complex information and communicate it clearly and simply. Demonstrated leadership/mentorship, accountability, and a self-starter attitude suited to environments that foster entrepreneurial thinking. 


 What We Offer 

  •  Professional Development and Mentorship.
  •  Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified six times in a row).
  •  Health and Family Insurance.
  •  40+ Leaves per year along with maternity & paternity leaves.
  •  Wellness, meditation and Counselling sessions.


NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
4 - 10 yrs
₹10L - ₹30L / yr
Python
SQL
Spark
Amazon Web Services (AWS)
Amazon S3
+13 more

Job Title : AWS Data Engineer

Experience : 4+ Years

Location : Bengaluru (HSR – Hybrid, 3 Days WFO)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.


🔥 Mandatory Skills :

Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security


🚀 Key Responsibilities :

  • Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
  • Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
  • Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
  • Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
  • Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
  • Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
  • Collaborate with data analysts and data scientists to deliver actionable insights
  • Work in an Agile environment to deliver high-quality data solutions

✅ Mandatory Skills :

  • Strong Python (including AWS SDKs), SQL, Spark
  • Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
  • Experience with DBT and ETL/ELT pipeline development
  • Workflow orchestration using Airflow / Step Functions
  • Knowledge of data lake formats (Parquet, ORC, Iceberg)
  • Exposure to DevOps practices (Terraform, CI/CD)
  • Strong understanding of data governance and security best practices
  • Minimum 4–7 years in Data Engineering (3+ years on AWS)

➕ Good to Have :

  • Understanding of Data Mesh architecture
  • Experience with platforms like Data.World
  • Exposure to Hadoop / HDFS ecosystems

🤝 What We’re Looking For :

  • Strong problem-solving and analytical skills
  • Ability to work in a collaborative, cross-functional environment
  • Good communication and stakeholder management skills
  • Self-driven and adaptable to fast-paced environments

📝 Interview Process :

  1. Online Assessment
  2. Technical Interview
  3. Fitment Round
  4. Client Round
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
5 - 12 yrs
₹10L - ₹32L / yr
Python
Azure OpenAI
Databricks
Artificial Intelligence (AI)
Machine Learning (ML)
+6 more

Job Title : Azure Data Scientist (AI/ML)

Experience : 5 to 10 Years

Location : Bengaluru

Work Mode : Hybrid (4 Days WFO, Tue to Fri – Non-Negotiable)

Notice Period : Immediate Joiner


💡 Role Overview :

We are looking for a highly skilled Azure Data Scientist with strong expertise in AI/ML, Python, and cloud-based data platforms. The role involves building scalable ML solutions, working on GenAI & RAG use cases, and delivering business impact through data-driven insights.


🔥 Mandatory Skills :

Python, Azure Machine Learning, Databricks, AI/ML model development (5+ yrs), Statistics & Probability, EDA & Data Modeling, Machine Learning algorithms, GenAI/RAG experience


✅ Key Responsibilities :

  • Design, develop, and deploy AI/ML models to solve complex business problems
  • Perform Exploratory Data Analysis (EDA) for data cleaning, discovery, and insights
  • Build and optimize ML pipelines using Azure Machine Learning & Databricks
  • Work on GenAI applications, RAG implementations, and advanced analytics solutions
  • Collaborate with data engineers, business stakeholders, and domain experts
  • Translate complex data into actionable business insights
  • Manage model lifecycle (development, validation, deployment, monitoring)
  • Communicate model outputs and insights to technical & non-technical stakeholders
  • Drive innovation and contribute to AI/ML best practices and strategy
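The EDA step above starts with the same few numbers on every project: missingness, center, spread, range per column. A minimal stdlib sketch of that first pass, before any plotting or modeling (the column values are illustrative):

```python
import statistics

def eda_summary(values):
    """Missing-value count plus basic center/spread for one numeric column."""
    present = [v for v in values if v is not None]
    return {
        "n": len(values),
        "missing": len(values) - len(present),
        "mean": statistics.fmean(present),
        "stdev": statistics.stdev(present) if len(present) > 1 else 0.0,
        "min": min(present),
        "max": max(present),
    }
```

On real Databricks workloads the same summary comes from Spark aggregations, but the quantities being checked are identical.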

🧠 Required Skills (Must Have) :

  • Strong experience in Python (ML/AI development)
  • Hands-on with Azure Machine Learning & Databricks
  • Deep understanding of Mathematics, Probability, and Statistics
  • Expertise in Machine Learning & Data Science methodologies
  • Experience in EDA, data visualization, and model development
  • Exposure to GenAI, RAG, and ML application development
  • Minimum 5+ years of experience in AI/ML model development
  • Strong problem-solving and analytical skills

➕ Good to Have :

  • Experience with MLOps practices
  • Domain knowledge in Energy / Oil & Gas value chain
  • Experience in data visualization tools
  • Team collaboration or mentoring experience

🤝 What We’re Looking For :

  • Strong communication & stakeholder management skills
  • Ability to work in a cross-functional, global team environment
  • Self-driven, adaptable, and innovation-focused mindset

📝 Interview Process :

  1. Geektrust Assessment (Assemble)
  2. Technical Interview
  3. Fitment Round
  4. Client Round
Remote only
3 - 10 yrs
₹6L - ₹12L / yr
Odoo (OpenERP)
Python
RESTful APIs
API
JavaScript
+4 more

Summary

We are looking for a motivated Odoo Developer to design, develop, and maintain ERP solutions on both Odoo Community and Enterprise editions. The ideal candidate will have strong Python skills, practical experience with the Odoo framework, and the ability to deliver scalable, customized modules that align with business requirements. Compensation will be offered as a 25% to 50% hike on the candidate’s last drawn salary, based on experience and skill set.


Key Responsibilities

  • Develop, customize, and maintain Odoo ERP modules for both Community and Enterprise editions.
  • Create new custom modules and enhance existing ones to extend system functionality.
  • Write clean, efficient, and well-documented Python code following Odoo development standards.
  • Troubleshoot, debug, and resolve technical issues to ensure optimal system performance.
  • Collaborate with functional consultants and business stakeholders to deliver scalable ERP solutions.
  • Design and implement integrations between Odoo and third-party systems such as APIs, payment gateways, CRM tools, and other business applications.
  • Optimize database queries and improve system performance.
  • Participate in code reviews, testing, and deployment processes.

Required Skills & Experience

  • Minimum 3 years of experience in Odoo development (Community and/or Enterprise editions).
  • Strong proficiency in Python and understanding of the Odoo framework.
  • Experience with PostgreSQL and database design concepts.
  • Knowledge of Odoo ORM, QWeb, XML, and JavaScript.
  • Hands-on experience developing and customizing Odoo modules.
  • Familiarity with REST APIs and third-party integrations.
  • Good debugging and problem-solving skills.
  • Understanding of Git or other version control systems.
  • Ability to work independently and in a team environment.

Preferred Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Experience working with both Odoo Community and Enterprise editions.
  • Exposure to Odoo.sh or cloud deployment environments.
  • Basic understanding of business processes such as Accounting, Sales, Inventory, or HR in ERP systems.
  • Experience in Agile development methodologies is a plus.

Note

This is an immediate full-time remote requirement. Candidates who are passionate about ERP development and can work with both Odoo Community and Enterprise editions are encouraged to apply.

Read more
A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage


Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
8 - 10 yrs
₹14L - ₹20L / yr
skill iconPython
skill iconReact.js
skill iconAmazon Web Services (AWS)
Architecture
skill iconLeadership
+1 more

Responsibilities:

  • Lead architecture, technical decisions, and ensure code quality, scalability, and performance
  • Develop backend systems using Python & SQL; build APIs and optimize databases
  • Work with frontend (React/Angular) and API-driven architectures
  • Integrate AI/ML models and support analytics/LLM-based solutions
  • Manage cloud deployments (Azure/AWS) and implement CI/CD practices
  • Ensure system reliability, monitoring, and production readiness
  • Mentor team members, conduct reviews, and collaborate with cross-functional teams
Read more
Remote only
3 - 5 yrs
₹15L - ₹18L / yr
SQL
skill iconPython
Linux/Unix
Large Language Models (LLM) tuning
skill iconMachine Learning (ML)
+1 more

Python Developer (Performance Optimization Focus)

Experience: 3–5 Years

Location: Remote (India-based candidates only)

Employment Type: Full-time


Role Overview

We are seeking a Python Developer with a strong focus on performance optimization and system efficiency. In this role, you will identify bottlenecks, enhance system performance, and contribute to building scalable, high-performance applications in a Linux-based environment.


Key Responsibilities

  • Analyze and troubleshoot performance bottlenecks in applications and systems
  • Optimize code, database queries, and architecture for scalability and speed
  • Design, develop, test, and maintain robust Python applications
  • Work with large datasets and improve data processing efficiency
  • Collaborate with cross-functional teams to improve system reliability and performance
  • Monitor system performance and implement proactive improvements
  • Write clean, maintainable, and efficient code following best practices
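As a small, hypothetical illustration of the measure-first approach this role calls for, the snippet below compares two lookup strategies with timeit (the data sizes and function names are invented, not from any project):

```python
import timeit

def find_in_list(items, targets):
    """O(n) scan of the list for every lookup."""
    return sum(1 for t in targets if t in items)

def find_in_set(items, targets):
    """One-time set build, then O(1) average-time lookups."""
    lookup = set(items)
    return sum(1 for t in targets if t in lookup)

if __name__ == "__main__":
    items = list(range(5_000))
    targets = list(range(0, 10_000, 13))
    for fn in (find_in_list, find_in_set):
        elapsed = timeit.timeit(lambda: fn(items, targets), number=3)
        print(f"{fn.__name__}: {elapsed:.3f}s")
```

On large inputs the set version wins by orders of magnitude; the same measure-before-optimizing habit applies equally to SQL queries and I/O paths.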


Required Skills & Qualifications

  • 3–5 years of hands-on experience in Python development
  • Strong expertise in performance tuning and optimization techniques
  • Experience with debugging and profiling tools
  • Solid understanding of data structures and algorithms
  • Experience with REST APIs and backend development
  • Strong analytical and problem-solving skills


Linux & System Knowledge (Must-Have)

  • Comfortable working in Linux/Unix environments
  • Command-line proficiency, including file editing (vi, nano), file permissions (chmod, chown), file downloads (wget, curl), and basic file and directory operations
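These shell skills map directly onto Python's standard library, which the role's scripting work is likely to lean on. A tiny self-contained sketch of the `chmod 640` equivalent (the temp file is throwaway, created only for the demo):

```python
import os
import stat
import tempfile

# Create a throwaway file, set rw-r----- (the equivalent of `chmod 640 file`),
# then read the permission bits back.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640 on POSIX systems

os.remove(path)  # basic file operation: clean up
```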


Basic Python Knowledge (Interview Scope)

  • Writing simple scripts and reusable functions
  • String manipulation and data handling
  • Example task: Count words in a file/string efficiently
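The example task above can be solved with the standard library alone; one straightforward approach (an illustration, not an official answer key):

```python
from collections import Counter

def count_words(text):
    """Return a Counter mapping each lowercased word to its frequency."""
    return Counter(text.lower().split())

def count_words_in_file(path):
    """Stream the file line by line so large files never load fully into memory."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.lower().split())
    return counts
```

For example, `count_words("The cat and the hat")["the"]` returns 2; the file variant keeps memory flat regardless of file size.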


Good to Have

  • Familiarity with AI/ML concepts or tools
  • Experience optimizing data-intensive or distributed systems
  • Exposure to cloud platforms (AWS, GCP, Azure)


Why Join Us

  • Work on performance-critical systems with real-world impact
  • Fully remote work environment
  • Opportunity to work with modern, scalable technologies
  • Collaborative, growth-focused team culture


Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
6 - 8 yrs
₹12L - ₹22L / yr
skill iconJava
skill iconPython
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Agile/Scrum
+4 more

Key Responsibilities:

  • Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
  • Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
  • Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
  • Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
  • Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
  • Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
  • Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
  • Contribute to the development of technical documentation and training materials.

Required Skillset:

  • Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
  • Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
  • Experience in designing and developing scalable, high-performance, and secure software solutions.
  • Strong understanding of software development methodologies, including Agile and Waterfall.
  • Excellent communication, interpersonal, and problem-solving skills.
  • Ability to work effectively in a fast-paced, dynamic environment.
  • Bachelor's or Master's degree in Computer Science or a related field.
  • Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Read more
Remote, Chennai
5 - 8 yrs
₹1L - ₹2L / yr
skill iconReact.js
skill iconNextJs (Next.js)
skill iconReact Native
skill iconExpress
skill iconAmazon Web Services (AWS)
+5 more

Brikito — Lead Full-Stack Developer

Job Description

About Brikito

Brikito is an early-stage PropTech startup building a construction management platform for SME developers and contractors. The founder has 7+ years of hands-on construction experience and an MBA from Warwick Business School. We have initial funding, a domain (brikito.com), wireframes ready, and active customer validation underway. We need our first technical leader to take this from wireframes to a live product.

This is a ground-floor opportunity. You will be the first technical hire — the person who makes every architecture decision and writes the first line of production code.


The Role

Title: CTO / Lead Full-Stack Developer (title depends on experience and equity arrangement)

Location: India (remote OK; occasional visits to the Chennai office, with an overseas office planned in Singapore or Dubai)

Type: Full-time

Compensation: ₹1,00,000–₹2,50,000/month + meaningful equity (0.5%–5% depending on role level, vesting over 4 years with a cliff)

Start Date: May 2026

Reports to: Founder/CEO


What You Will Do

Months 1–3: Build the MVP

  • Own all technical decisions — architecture, tech stack, database design, hosting
  • Build and ship a working MVP with 3 core features: project dashboard, billing/invoicing, and indent/procurement management
  • Set up CI/CD pipeline, staging, and production environments
  • Integrate payment gateway (Razorpay for India)
  • Build both web and mobile-responsive interfaces
  • Ship the MVP within 12 weeks

Months 3–6: Iterate and Scale

  • Onboard beta users and fix bugs based on real usage
  • Build features based on customer feedback (not assumptions)
  • Integrate AI capabilities where they add clear user value (e.g., auto-generated progress reports)
  • Hire and manage 1–2 junior developers as the team grows
  • Set up monitoring, error tracking, and basic analytics

Months 6–12: Lead the Technical Team

  • Grow the engineering team to 4–6 people
  • Establish code review processes, documentation standards, and sprint rhythms
  • Own the technical roadmap alongside the founder
  • Participate in investor conversations as the technical co-founder (if CTO-level)
  • Make build-vs-buy decisions for new features


Required Skills

Must Have

  • 7+ years of professional software development experience
  • Strong proficiency in React or Next.js (frontend)
  • Strong proficiency in Node.js (backend) — Express, Nest.js, or similar
  • PostgreSQL or MySQL — database design, query optimisation, migrations
  • REST API design — clean, well-documented APIs
  • Cloud deployment — AWS (EC2, RDS, S3) or GCP equivalent
  • Expertise in AI tools and integrations - Anthropic, OpenAI, Perplexity, etc.
  • Git — clean branching, PR-based workflow
  • Has shipped at least one product that real users used — not just academic or internal tools
  • Comfortable working independently — no one will tell you what to do step by step

Strongly Preferred

  • Previous experience at a startup (Series A or earlier)
  • Experience building SaaS or B2B products
  • Experience with mobile development (React Native or Flutter)
  • Experience integrating payment gateways (Razorpay, Stripe)
  • Experience with third-party API integrations (OpenAI, Twilio, etc.)
  • Understanding of CI/CD pipelines (GitHub Actions, Docker)
  • Basic understanding of construction, real estate, or field operations (not required, but a plus)

Nice to Have

  • Experience with TypeScript
  • Experience with real-time features (WebSockets, push notifications)
  • Familiarity with Figma (to translate wireframes into UI)
  • Experience hiring and mentoring junior developers
  • Open source contributions or a personal project portfolio


What We Are NOT Looking For

  • Someone who needs detailed specifications for every task — we move fast and figure things out together
  • Someone who only wants to code and not think about the product — you will be in customer calls and strategy discussions
  • Someone who optimises for perfect code over shipping — we ship first, refactor later
  • Someone looking for a stable corporate job — this is a startup with all the chaos and excitement that comes with it


What You Get

  • Equity ownership in an early-stage company with a large addressable market ($14.9B global construction SaaS)
  • Founding team credit — you will be recognised as a technical co-founder if you take the CTO role
  • Direct impact — every line of code you write will be used by real customers within weeks
  • Technical freedom — you choose the stack, the tools, the architecture
  • A founder who understands the domain — you will never have to guess what contractors need because the CEO has built construction projects himself
  • Growth path — as we raise funding and scale, you grow into VP Engineering or CTO of a funded company

How to Apply

Send the following:

  • A short note (5–10 lines) on why this role interests you and what you'd bring
  • Your LinkedIn profile or resume
  • One link to something you've built — a live product, a GitHub repo, an app, anything that shows your work
  • Your availability — when can you start?

We will respond within 48 hours. The process is:

  • 30-minute video call with the founder
  • Small paid technical task (8 hours of work, ₹5,000 paid regardless of outcome)
  • Final conversation about role, equity, and start date
  • Offer within 1 week of first call

Questions?

DM the founder on LinkedIn: https://www.linkedin.com/in/aashiqahamed/

This is not a job posting from HR. This is a founder looking for his first technical partner. If this excites you, reach out.

Read more
Remote only
6 - 9 yrs
₹30L - ₹51L / yr
skill iconPython
TypeScript

Strong Software Engineer fullstack profile using NodeJS / Python and React

Mandatory (Experience) - Must have 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)

Mandatory (Core Skills 1): Must have strong experience in working on Typescript

Mandatory (Core Skills 2): Must have experience in message based systems like Kafka, RabbitMq, Redis

Mandatory (Core Skills 3): Databases - PostgreSQL & NoSQL databases like MongoDB

Mandatory (Company) - Product Companies Only

Mandatory (Education) - B.Tech or Dual degree (Btech and Mtech or Integrated Msc/MS) from Tier 1 Engineering Institutes (Top 7 IITs, Top 5 NITs, IIIT Bangalore, IIIT Hyderabad, IIIT Allahabad, MNNIT, IIT Dhanbad, BITS Pilani). Candidates from other institutions will not be considered unless they come from top-tier product companies

Mandatory (Note) : This role is a hybrid role (2 days WFO)

Read more
Quantiphi


Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 8 yrs
Upto ₹30L / yr (Varies)
skill iconNodeJS (Node.js)
skill iconPython
Dialog Flow
rasa
yellow.ai
+1 more

Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.

You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.


Key Responsibilities

  • Design, develop, test, debug, and maintain chatbot and virtual agent applications
  • Collaborate with business stakeholders to define and translate requirements into technical solutions
  • Analyze large volumes of conversational data to improve chatbot accuracy and performance
  • Develop automation workflows for data handling and refinement
  • Train and optimize chatbots using historical chat logs and user-generated content
  • Ensure solutions align with enterprise architecture and best practices
  • Document solutions, workflows, and technical designs clearly

Required Skills

  • Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
  • Experience with one or more AI/NLP platforms such as Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI, Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, or Converse.ai
  • Strong programming knowledge in Python, JavaScript, or Node.js
  • Experience training chatbots using historical conversations or large-scale text datasets
  • Practical knowledge of formal syntax and semantics, corpus analysis, and dialogue management
  • Strong written communication skills
  • Strong problem-solving ability and willingness to learn emerging technologies

Nice-to-Have Skills

  • Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
  • Experience building voice apps for Amazon Alexa or Google Home
  • Experience with Test-Driven Development (TDD) and Agile methodologies
  • Ability to design and implement end-to-end pipelines for AI-based conversational applications
  • Experience in text mining, hypothesis generation, and historical data analysis
  • Strong knowledge of regular expressions for data cleaning and preprocessing
  • Understanding of API integrations, SSO, and token-based authentication
  • Experience writing unit test cases as per project standards
  • Knowledge of HTTP, REST APIs, sockets, and web services
  • Ability to perform keyword and topic extraction from chat logs
  • Experience training and tuning topic modeling algorithms such as LDA and NMF
  • Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
  • Experience with NLP frameworks such as NLTK and spaCy
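A few of the items above (regular expressions for data cleaning, keyword extraction from chat logs) can be sketched with the standard library alone. The stopword list and patterns below are invented for illustration, not project code:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "i", "my", "and", "it", "of"}

def clean_utterance(text):
    """Lowercase, drop URLs and non-alphanumeric characters, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # strip punctuation/emoji
    return re.sub(r"\s+", " ", text).strip()

def top_keywords(chat_logs, k=3):
    """Return the k most frequent non-stopword tokens across all utterances."""
    counts = Counter()
    for utterance in chat_logs:
        tokens = clean_utterance(utterance).split()
        counts.update(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]
```

Run over logs like "I want to reset my password!", this surfaces "reset" and "password" as the dominant intents, which is the raw material for training and tuning a bot.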


Read more
Remote only
6 - 15 yrs
₹15L - ₹30L / yr
skill iconRuby on Rails (ROR)
skill iconPython
skill iconGo Programming (Golang)
skill iconReact.js
skill iconJavascript
+1 more

About the Role

We are looking for a high-calibre Senior Full Stack Engineer to join a product-focused team, building and iterating on modern applications in a fast-paced environment.


This role goes beyond traditional full-stack development. It is suited for engineers who combine strong technical fundamentals with product thinking, high ownership, and the ability to move quickly while maintaining quality. You will work across the stack, prototype rapidly, and leverage AI tools as a core part of your daily workflow.

The ideal candidate is an independent thinker who can operate with minimal direction, challenge assumptions (including AI-generated outputs), and deliver end-to-end solutions. This is a highly visible role requiring strong communication skills and the ability to engage confidently with senior stakeholders.


Responsibilities

  • Design, build, and ship scalable full-stack applications across backend and frontend systems
  • Take ownership of features end-to-end, from ideation to production deployment
  • Prototype quickly and iterate based on product and user feedback
  • Use AI tools (e.g., Copilot, ChatGPT, Cursor, Claude) to accelerate development while applying sound engineering judgment
  • Evaluate and improve AI-generated code, ensuring quality, performance, and maintainability
  • Contribute to system design, architecture, and technical decision-making
  • Work across backend, frontend, and infrastructure layers as needed
  • Collaborate with product stakeholders to define requirements and make informed trade-offs
  • Identify gaps, inefficiencies, or product issues and proactively suggest improvements.
  • Maintain high standards of code quality, testing, and performance

Requirements

  • Strong academic background from top-tier engineering institutions (e.g., IITs, IISc, IIITs, BITS, top NITs, or equivalent)
  • 6–10+ years of experience in software engineering, with strong full-stack exposure.
  • Strong backend engineering experience (Ruby on Rails preferred, or Python, Go, Rust with equivalent depth)
  • Solid frontend development experience with modern frameworks (e.g., React or similar).
  • Strong understanding of system design, APIs, and scalable architecture
  • Proven ability to build and ship production-grade applications end-to-end
  • Demonstrated product mindset with the ability to think beyond implementation
  • Experience working in product-driven environments with high ownership
  • Hands-on experience using AI tools (e.g., Copilot, ChatGPT, Cursor, Claude) in day-to-day development
  • Ability to critically evaluate AI-generated output and apply sound engineering judgment
  • Strong communication skills with the ability to articulate technical decisions clearly
  • High level of autonomy, ownership, and problem-solving capability

Nice to Have

  • Experience working in high-growth startups or product-led companies
  • Experience contributing across DevOps or infrastructure
  • Strong track record of ownership and impact in previous roles
  • Exposure to fast-paced, high-performance engineering cultures


Read more
Bengaluru (Bangalore)
5 - 6 yrs
₹13L - ₹15L / yr
skill iconPython
skill iconDjango
skill iconFlask

We are looking for an experienced Python Developer with 5–6 years of hands-on experience in designing, developing, and maintaining scalable backend applications and APIs. The ideal candidate should have strong expertise in Python, backend frameworks, databases, and cloud/deployment practices. The candidate should be capable of working in a fast-paced environment and collaborating with cross-functional teams to deliver high-quality software solutions.

Key Responsibilities

  • Design, develop, test, and maintain robust and scalable Python-based applications.
  • Build and integrate RESTful APIs and backend services.
  • Work on server-side logic, database integration, and performance optimization.
  • Collaborate with frontend developers, QA teams, DevOps, and product teams for end-to-end delivery.
  • Write reusable, testable, and efficient code following best practices.
  • Debug, troubleshoot, and resolve production issues.
  • Participate in code reviews, technical design discussions, and architecture planning.
  • Optimize applications for maximum speed, scalability, and reliability.
  • Implement security and data protection measures.
  • Work with CI/CD pipelines and deployment processes.

Required Skills

  • Strong experience in Python development with 5–6 years of relevant experience.
  • Hands-on experience with Python frameworks such as Django, Flask, or FastAPI.
  • Strong understanding of OOPs, Data Structures, and Algorithms.
  • Experience in building and consuming REST APIs.
  • Good knowledge of SQL and relational databases such as MySQL and PostgreSQL.
  • Experience with NoSQL databases such as MongoDB and Redis (preferred).
  • Knowledge of ORM frameworks such as SQLAlchemy or Django ORM.
  • Familiarity with Git/GitHub/GitLab version control.
  • Understanding of unit testing, debugging, and code quality practices.
  • Experience in working with Linux/Unix environments.
  • Knowledge of Docker, containerization, and deployment concepts.
  • Exposure to cloud platforms like AWS / Azure / GCP is preferred.

Preferred / Good to Have Skills

  • Experience in microservices architecture.
  • Knowledge of Celery, asynchronous processing, and message queues such as RabbitMQ or Kafka.
  • Familiarity with CI/CD pipelines.
  • Experience in writing clean architecture and scalable backend systems.
  • Exposure to DevOps practices is a plus.
  • Experience in Agile/Scrum methodology. 


Read more
Qiro Finance

Posted by Bisman Gill
Bengaluru (Bangalore)
5yrs+
Upto ₹45L / yr (Varies)
skill iconPython
TypeScript
skill iconAmazon Web Services (AWS)
Artificial Intelligence (AI)
Team Management

About the Role

Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.

We are looking for a Tech Lead — Credit & Blockchain Infrastructure to lead the architecture and execution of our core systems — spanning underwriting engines, credit lifecycle workflows, and blockchain-integrated capital markets infrastructure.

This is not a feature delivery role. This is a system ownership role.

You will be hands-on while leading a growing engineering team in a fast-moving, in-office environment.

What You’ll Own

  • Define and evolve the long-term technical vision for Qiro’s programmable credit infrastructure — architecting cohesive systems that unify underwriting engines, credit lifecycle workflows, and tokenized capital markets.
  • Own the end-to-end architecture of scalable backend platforms (Python and/or TypeScript), establishing clear boundaries between risk logic, platform APIs, and smart contract integrations while ensuring scalability, auditability, and extensibility.
  • Build and standardize configurable underwriting and credit lifecycle systems — from onboarding and drawdown orchestration to repayment waterfalls and early closures — ensuring deterministic, traceable financial state transitions at institutional scale.
  • Set integration and infrastructure standards across API contracts, data models, validation layers, and event-driven architectures, enabling reliable synchronization between off-chain services and on-chain contracts.
  • Architect secure and resilient blockchain integrations, including wallet interactions, capital flow coordination, and observable on-chain/off-chain state reconciliation.
  • Lead high-impact, cross-product initiatives from RFC and system design through production launch — validating architectural decisions, aligning stakeholders, and delivering measurable improvements in reliability, performance, and developer velocity.
  • Elevate reliability and operational excellence by defining SLOs, strengthening CI/CD and observability practices, reducing latency, and minimizing systemic risk in financial workflows.
  • Build and scale the engineering organization — mentoring senior engineers, shaping hiring standards, driving architecture reviews, and fostering a culture of ownership, craftsmanship, and first-principles thinking.
  • Partner closely with Product, Design, Security, and Operations to translate complex lending and capital market mechanics into simple, robust platform primitives.
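As a flavour of the "deterministic, traceable financial state transitions" mentioned above, a credit lifecycle can be modeled as an explicit state machine. The states and transitions below are invented for illustration and are not Qiro's actual model:

```python
# Allowed transitions of a hypothetical loan lifecycle; anything else is rejected.
TRANSITIONS = {
    "onboarded": {"drawdown_requested"},
    "drawdown_requested": {"active", "onboarded"},
    "active": {"repaying", "early_closure"},
    "repaying": {"repaying", "closed", "early_closure"},
    "early_closure": {"closed"},
    "closed": set(),
}

class CreditLifecycle:
    """Tracks a loan's state and keeps an append-only audit trail of transitions."""

    def __init__(self):
        self.state = "onboarded"
        self.history = [self.state]

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)
        return self.state
```

Every change is either a whitelisted edge or a raised error, and the append-only history is what makes the workflow auditable end to end.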

Who You Are

  • 6-8+ years of engineering experience, with 3+ years in technical leadership roles.
  • Strong backend architecture experience in Python and/or TypeScript.
  • Comfortable designing distributed systems and financial workflows.
  • Experience building fintech, lending, underwriting, trading, or blockchain-integrated systems.
  • Strong understanding of API design, state management, and data modeling.
  • Able to navigate ambiguity and build 0→1 infrastructure.
  • Hands-on builder who leads by writing production-grade code.

We Value

  • Experience with underwriting engines or policy-driven decision systems.
  • Exposure to smart contracts and blockchain integrations.
  • Familiarity with PostgreSQL and event-driven architectures.
  • Experience in early-stage or high-growth startups.
  • Strong product thinking and ability to translate complex financial logic into scalable systems.

Why Join Qiro

  • Lead the architecture of a programmable credit infrastructure platform.
  • Join the founding technical leadership team.
  • High autonomy and ownership — your decisions shape the company.
  • In-office collaboration in Bangalore for speed and iteration.
  • Competitive compensation and meaningful equity.

Our Culture

We operate with:

  • First-principles thinking
  • Technical craftsmanship
  • High ownership
  • Fast execution with long-term architectural discipline


Read more