
50+ Remote Python Jobs in India

Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

Remote only
4 - 5 yrs
₹7L - ₹15L / yr
SQL
PL/SQL, T-SQL, PostgreSQL, or MySQL
Python
Pandas, NumPy, SQLAlchemy, Psycopg2.
Database Design

Database Programmer (SQL & Python)

Experience: 4 – 5 Years

Location: Remote

Employment Type: Full-Time

About the Opportunity

We are a mission-driven HealthTech organization dedicated to bridging the gap in global healthcare equity. By harnessing the power of AI-driven clinical insights and real-world evidence, we help healthcare providers and pharmaceutical companies deliver precision medicine to underrepresented populations.

We are looking for a skilled Database Programmer with a strong blend of SQL expertise and Python automation skills to help us manage, transform, and unlock the value of complex clinical data. This is a fully remote role where your work will directly contribute to improving patient outcomes and making life-saving treatments more affordable and accessible.


Key Responsibilities

  • Data Architecture & Management: Design, develop, and maintain robust relational databases to store large-scale, longitudinal patient records and clinical data.
  • Complex Querying: Write and optimize sophisticated SQL queries, stored procedures, and triggers to handle deep clinical datasets, ensuring high performance and data integrity.
  • Python Automation: Develop Python scripts and ETL pipelines to automate data ingestion, cleaning, and transformation from diverse sources (EHRs, lab reports, and unstructured clinical notes).
  • AI Support: Collaborate with Data Scientists to prepare datasets for AI-based analytics, Knowledge Graphs, and predictive modeling.
  • Data Standardization: Map and transform clinical data into standardized models (such as HL7, FHIR, or proprietary formats) to ensure interoperability across healthcare ecosystems.
  • Security & Compliance: Implement and maintain rigorous data security protocols, ensuring all database activities comply with global healthcare regulations (e.g., HIPAA, GDPR).


Required Skills & Qualifications

  • Education: Bachelor’s degree in Computer Science, Information Technology, Statistics, or a related field.
  • SQL Mastery: 4+ years of experience with relational databases (PostgreSQL, MySQL, or MS SQL Server). You should be comfortable with performance tuning and complex data modeling.
  • Python Proficiency: Strong programming skills in Python, particularly for data manipulation (Pandas, NumPy) and database interaction (SQLAlchemy, Psycopg2); a short sketch follows this list.
  • Healthcare Experience: Familiarity with healthcare data standards (HL7, FHIR) or experience working with Electronic Health Records (EHR) is highly preferred.
  • ETL Expertise: Proven track record of building and managing end-to-end data pipelines for structured and unstructured data.
  • Analytical Mindset: Ability to troubleshoot complex data issues and translate business requirements into efficient technical solutions.
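
A quick illustration of the Pandas + SQLAlchemy + Psycopg2 combination this role calls for; the connection string, table, and column names below are invented for the sketch, not taken from the posting:

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical Postgres connection; credentials would come from config/secrets.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/clinical")

# Pull longitudinal lab records with a parameterised query.
labs = pd.read_sql_query(
    "SELECT patient_id, lab_code, value, observed_at "
    "FROM lab_results WHERE observed_at >= %(since)s",
    engine,
    params={"since": "2024-01-01"},
)

# Typical cleaning/transformation: de-duplicate, then keep the latest reading
# per patient and lab code.
labs = labs.drop_duplicates(subset=["patient_id", "lab_code", "observed_at"])
latest = labs.sort_values("observed_at").groupby(["patient_id", "lab_code"]).tail(1)

# Write the standardised extract back for downstream analytics.
latest.to_sql("lab_results_latest", engine, if_exists="replace", index=False)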


To process your details, please fill out the Google Form.

https://forms.gle/4psh2vaUi115TKnm6

Remote only
3 - 8 yrs
₹20L - ₹30L / yr
ETL
Google Cloud Platform (GCP)
skill iconPython
Pipeline management
BigQuery

About Us:


CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary:


We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities:


  • ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python; a minimal pipeline sketch follows this list.
  • Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
  • Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
  • Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 
  • API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
  • Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
  • Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
  • Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
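
As a rough sketch of the Dataflow-style pipelines described above, here is a minimal Apache Beam job in Python; the bucket paths and column layout are illustrative assumptions, and on GCP the same pipeline would target the Dataflow runner:

import csv
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_csv_line(line):
    # Hypothetical three-column layout; real sources would carry a schema.
    place, variable, value = next(csv.reader([line]))
    return {"place": place, "variable": variable, "value": float(value)}

def run():
    options = PipelineOptions()  # pass --runner=DataflowRunner for GCP
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/input.csv",
                                             skip_header_lines=1)
            | "Parse" >> beam.Map(parse_csv_line)
            | "DropInvalid" >> beam.Filter(lambda row: row["value"] >= 0)
            | "Format" >> beam.Map(lambda r: f'{r["place"]},{r["variable"]},{r["value"]}')
            | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/output")
        )

if __name__ == "__main__":
    run()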

Qualifications and Skills:


  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
  • Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
  • Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
  • Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:


  • Must Have - SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)
  • Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
  • Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
  • Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
  • Experience with data validation techniques and tools.
  • Familiarity with CI/CD practices and the ability to work in an Agile framework.
  • Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:


  • Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
  • Familiarity with similar large-scale public dataset integration initiatives.
  • Experience with multilingual data integration.
Deqode

Posted by Apoorva Jain
Remote only
9 - 18 yrs
₹5L - ₹29L / yr
Python
SQL
NOSQL Databases
DBA

Job Summary


We are looking for an experienced Python DBA with strong expertise in Python scripting and SQL/NoSQL databases. The candidate will be responsible for database administration, automation, performance optimization, and ensuring availability and reliability of database systems.


Key Responsibilities

  • Administer and maintain SQL and NoSQL databases
  • Develop Python scripts for database automation and monitoring (see the sketch after this list)
  • Perform database performance tuning and query optimization
  • Manage backups, recovery, replication, and high availability
  • Ensure data security, integrity, and compliance
  • Troubleshoot and resolve database-related issues
  • Collaborate with development and infrastructure teams
  • Monitor database health and performance
  • Maintain documentation and best practices
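
A minimal sketch of the automation-and-monitoring scripting mentioned above, assuming PostgreSQL and the psycopg2 driver; the DSN and threshold are placeholders:

import psycopg2

DSN = "dbname=appdb user=dba host=localhost"  # placeholder connection string
LONG_RUNNING_SECONDS = 300

def find_long_running_queries():
    # pg_stat_activity is PostgreSQL's built-in view of live sessions.
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT pid, state, now() - query_start AS runtime, query
                FROM pg_stat_activity
                WHERE state <> 'idle'
                  AND now() - query_start > interval '%d seconds'
                ORDER BY runtime DESC
                """ % LONG_RUNNING_SECONDS
            )
            return cur.fetchall()

if __name__ == "__main__":
    for pid, state, runtime, query in find_long_running_queries():
        print(f"pid={pid} state={state} runtime={runtime} query={query[:80]}")

A script like this would typically feed an alerting channel rather than stdout.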


Required Skills

  • 10+ years of experience in Database Administration
  • Strong proficiency in Python
  • Experience with SQL databases (PostgreSQL, MySQL, Oracle, SQL Server)
  • Experience with NoSQL databases (MongoDB, Cassandra, etc.)
  • Strong understanding of indexing, schema design, and performance tuning
  • Good analytical and problem-solving skills


Forbes Advisor

Posted by Bisman Gill
Remote only
4yrs+
Up to ₹27L / yr (varies)
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Python
SQL
Amazon Web Services (AWS)

Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.

We do this by combining data-driven content, rigorous product comparisons, and user-first design, all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.

The Role

We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.

This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.

You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.


What you’ll do

● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python (a minimal Airflow sketch follows this list)

● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems

● Model data using dbt, with an emphasis on quality, reuse, and documentation

● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics

● Write high-performance SQL and support analytics and reporting teams in self-serve data access

● Monitor and improve data quality, lineage, and governance across critical workflows

● Collaborate with engineers, analysts, and business partners across the US, UK, and India
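
To give a flavour of the Airflow orchestration in the first bullet, here is a minimal DAG sketch; the task names, the dbt selector, and the ingestion stub are assumptions for illustration, not Forbes Advisor's actual pipeline:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def ingest_ad_spend(**context):
    # Stub for pulling third-party data (e.g. an ads API) into the warehouse.
    print("ingesting ad spend for", context["ds"])

with DAG(
    dag_id="marketing_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_ad_spend", python_callable=ingest_ad_spend)
    # dbt rebuilds the downstream models once the raw data has landed.
    model = BashOperator(task_id="dbt_run", bash_command="dbt run --select marketing")
    ingest >> model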


What You Bring

● 4+ years of data engineering experience, ideally in a global, distributed team

● Strong Python development skills and experience

● Expert in SQL for data transformation, analysis, and debugging

● Deep knowledge of Airflow and orchestration best practices

● Proficient in dbt (data modeling, testing, release workflows)

● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus

● Strong grasp of data governance, observability, and privacy standards

● Excellent written and verbal communication skills


Nice to have

● Experience working with digital marketing and performance data, including:

Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)

● Familiarity with BI tools like Tableau or Looker

● Exposure to attribution models, media mix modeling, or A/B testing infrastructure

● Collaboration experience with data scientists or machine learning workflows


Why Join Us

● Monthly long weekends — every third Friday off

● Wellness reimbursement to support your health and balance

● Paid parental leave

● Remote-first with flexibility and trust

● Work with a world-class data and marketing team inside a globally recognized brand

ByteFoundry AI

Posted by Bisman Gill
Remote only
3 - 8 yrs
Up to ₹40L / yr (varies)
React.js
Node.js
Python
SQL
Amazon Web Services (AWS)

About the Role

We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.

Responsibilities

- Develop and maintain full-stack applications.

- Implement clean, maintainable, and efficient code.

- Collaborate with designers, product managers, and backend engineers.

- Participate in code reviews and debugging.

- Work with REST APIs/GraphQL.

- Contribute to CI/CD pipelines.

- Ability to work independently as well as within a collaborative team environment.


Required Technical Skills

- Strong knowledge of JavaScript/TypeScript.

- Experience with React.js, Next.js.

- Backend experience with Node.js, Express, NestJS.

- Understanding of SQL/NoSQL databases.

- Experience with Git, APIs, and debugging tools.

- Cloud familiarity (AWS/GCP/Azure).

AI and System Mindset

Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.

Soft Skills

- Strong problem-solving ability.

- Good communication and teamwork.

- Fast learner and adaptable.

Education

Bachelor's degree in Computer Science / Engineering or equivalent.

Palcode.ai

Posted by Team Palcode
Remote only
1 - 2 yrs
₹2L - ₹5L / yr
Python
React.js
Amazon Web Services (AWS)

Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.


Role: Full Stack Developer

Industry Type: Software Product

Department: Engineering - Software & QA

Employment Type: Full Time, Permanent

Role Category: Software Development

Education

UG: Any Graduate

Alpheva AI
Posted by Ramakant gupta
Remote only
1 - 3 yrs
₹10L - ₹25L / yr
React Native
React.js
Next.js
Python
PostgreSQL

About the Role

We’re hiring a Full Stack Engineer who can own features end to end, from UI to APIs to data models.

This is not a “ticket executor” role. You’ll work directly with product, AI, and founders to shape how users interact with intelligent financial systems.

If you enjoy shipping real features, fixing real problems, and seeing users actually use what you built, this role is for you.


What You Will Do

  • Build and ship frontend features using React, Next.js, and React Native
  • Develop backend services and APIs using Python and/or Golang
  • Own end-to-end product flows like onboarding, dashboards, insights, and AI conversations
  • Integrate frontend with backend and AI services (LLMs, tools, data pipelines)
  • Design and maintain PostgreSQL schemas, queries, and migrations
  • Ensure performance, reliability, and clean architecture across the stack
  • Collaborate closely with product, AI, and design to ship fast and iterate
  • Debug production issues and continuously improve UX and system quality


What We’re Looking For

  • 2 to 3+ years of professional full stack engineering experience
  • Strong hands-on experience with React, Next.js, and React Native
  • Backend experience with Python and/or Golang in production
  • Solid understanding of PostgreSQL, APIs, and system design
  • Strong fundamentals in HTML, CSS, TypeScript, and modern frontend patterns
  • Ability to work independently and take ownership in a startup environment
  • Product-minded engineer who thinks in terms of user outcomes, not just code
  • B.Tech in Computer Science or related field


Nice to Have

  • Experience with fintech, dashboards, or data-heavy products
  • Exposure to AI-powered interfaces, chat systems, or real-time data
  • Familiarity with cloud platforms like AWS or GCP
  • Experience handling sensitive or regulated data


Why Join Alpheva AI

  • Build real product used by real users from day one
  • Work directly with founders and influence core product decisions
  • Learn how AI-native fintech products are built end to end
  • High ownership, fast execution, zero corporate nonsense
  • Competitive compensation with meaningful growth upside


Unilog

Posted by Bisman Gill
Remote, Bangalore, Mysore
8yrs+
Up to ₹52L / yr (varies)
Machine Learning (ML)
Artificial Intelligence (AI)
Google Vertex AI
Agentic AI
PyTorch

About Unilog

Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.

With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.

Unilog’s Mission Statement

At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.


Designation:- AI Architect

Location: Bangalore/Mysore/Remote  

Job Type: Full-time  

Department: Software R&D  


About the Role  

We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.  

As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.


Key Responsibilities  

Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation. 


Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.


AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques.


Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.  


Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.  


Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.


Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.  


Required Qualifications  


  1. 8-14 years of experience in AI/ML, software engineering, or a related field.  
  2. Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
  3. Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.  
  4. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.  
  5. Strong problem-solving skills and a passion for innovation.  
  6. Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.  


Preferred Qualifications  

  • Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.  
  • Knowledge of data pipelines, MLOps, and AI governance.  
  • Contributions to open-source AI/ML projects or published research papers.  


Why Join Us?  

  • Work on cutting-edge AI/ML innovations with the CTO Office.  
  • Influence the company’s future AI strategy and shape emerging technologies.  
  • Competitive compensation, growth opportunities, and a culture of continuous learning.    


About our Benefits:

Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.


Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class. 

Data Axle

Posted by Eman Khan
Remote, Pune
5 - 10 yrs
Best in industry
C++
Docker
Kubernetes
ECS
Amazon Web Services (AWS)

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.


Responsibilities:

  • Design, build, and maintain high-performance systems using modern C++
  • Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
  • Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
  • Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
  • Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
  • Participate in system design, peer code reviews, and performance tuning


Qualifications:

  • 5+ years of software development experience, with strong command over modern C++
  • Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
  • Experience with Apache Airflow for orchestrating complex data workflows.
  • Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
  • Proven expertise in designing and managing robust data pipelines & Microservices.
  • Proficient in building and scaling data processing workflows and working with structured/unstructured data
  • Strong hands-on experience with Docker, container orchestration, and microservices architecture
  • Working knowledge of CI/CD practices, Git, and build/release tools
  • Strong problem-solving, debugging, and cross-functional collaboration skills


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Deqode

Posted by purvisha Bhavsar
Remote only
4 - 7 yrs
₹4.5L - ₹10.5L / yr
Python
FastAPI
API
SQLAlchemy
Pydantic

🚀 Hiring: Python Developer at Deqode

⭐ Experience: 4+ Years

⭐ Work Mode: Remote

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


Role Overview:

We are looking for a skilled Software Development Engineer (Python) to design, develop, and maintain scalable backend applications and high-performance RESTful APIs. The ideal candidate will work on modern microservices architecture, ensure clean and efficient code, and collaborate with cross-functional teams to deliver robust solutions.

Key Responsibilities:

  • Develop and maintain RESTful APIs and backend services using Python
  • Build scalable microservices and integrate third-party APIs
  • Design and optimize database schemas and queries
  • Ensure application security, performance, and reliability
  • Write clean, testable, and maintainable code
  • Participate in code reviews and follow best engineering practices

Mandatory Skills (3):

  1. Python – Strong hands-on experience in backend development
  2. FastAPI / REST API Development – Building and maintaining APIs (see the sketch after this list)
  3. SQLAlchemy / Relational Databases – Database modeling and optimization
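
A minimal sketch of the FastAPI + SQLAlchemy + Pydantic stack named above; the model and field names are invented for illustration:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()
engine = create_engine("sqlite:///./demo.db")  # stand-in for a real database URL
SessionLocal = sessionmaker(bind=engine)

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

Base.metadata.create_all(engine)

class ItemOut(BaseModel):  # Pydantic schema for the API response
    id: int
    name: str

app = FastAPI()

@app.get("/items/{item_id}", response_model=ItemOut)
def read_item(item_id: int):
    with SessionLocal() as session:
        item = session.get(Item, item_id)
        if item is None:
            raise HTTPException(status_code=404, detail="Item not found")
        return ItemOut(id=item.id, name=item.name)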


Remote only
2 - 4 yrs
₹25L - ₹30L / yr
Python
Amazon Web Services (AWS)

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)

Technology Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
2 - 4 yrs
₹23L - ₹30L / yr
Python
Microservices
Vue.js
MySQL
RESTful APIs

Job Details

- Job Title: Software Developer (Python, React/Vue)

- Industry: Technology

- Experience Required: 2-4 years

- Working Days: 5 days/week

- Job Location: Remote working

- CTC Range: Best in Industry


Review Criteria

  • Strong Full stack/Backend engineer profile
  • 2+ years of hands-on experience as a full stack developer (backend-heavy)
  • (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
  • (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
  • (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
  • (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
  • (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
  • (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
  • Product companies (B2B SaaS preferred)


Preferred

  • Preferred (Location) - Mumbai
  • Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
  • Preferred (Education): B.Tech from Tier 1, Tier 2 institutes


Role & Responsibilities

This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.

 

You will:

  • Build and own features end-to-end — from design → deployment → scale.
  • Architect scalable, loosely coupled systems powering AI-native workflows.
  • Create robust integrations with 3rd-party systems.
  • Push boundaries on reliability, performance, and automation.
  • Write clean, tested, secure code → and continuously improve it.
  • Collaborate directly with Founders & senior engineers in a high-trust environment.

 

Our Tech Arsenal:

  • We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it open to discussion which tools will solve the problem in the most robust and quickest way.
  • That being said, our bright team of engineers has already assembled a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the Tech Stack we already use.
Remote only
2 - 4 yrs
₹25L - ₹32L / yr
Python
Amazon Web Services (AWS)

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)

ConvertLens

Posted by Reshika Mendiratta
Remote, Noida
2yrs+
Best in industry
Python
FastAPI
AI Agents
Artificial Intelligence (AI)
Large Language Models (LLM)

🚀 About Us

At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.


We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.

We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.


🛠️ What You’ll Do

  • Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
  • Develop Agentic AI applications and workflows to drive automation and insights.
  • Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
  • Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.


⚙️ What You Bring

  • 2+ years of hands-on experience in Python back-end development.
  • Strong understanding of REST API design and integration.
  • Proficiency with relational databases (MySQL/PostgreSQL).
  • Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
  • Experience maintaining production systems with a focus on reliability and scalability.
  • Bonus: Exposure to Node.js and modern front-end frameworks like React.js.
  • Strong problem-solving skills and comfort working in a startup/product environment.
  • A builder mindset — scrappy, curious, and ready to ship.


💼 Perks & Culture

  • Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
  • A high-growth, high-impact environment where your code goes live fast.
  • Opportunities to work with Agentic AI and cutting-edge tech.
  • Small team, big vision — your work truly matters here.
Talent Pro
Remote only
2 - 4 yrs
₹25L - ₹32L / yr
Python
Microservices

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)

Preferred

Preferred (Location) - Mumbai

Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong

Preferred (Education): B.Tech from Tier 1, Tier 2 institutes

Avoma

Posted by Eman Khan
Remote, Pune
4 - 10 yrs
₹20L - ₹40L / yr
Python
Django
Flask
FastAPI
SaaS

Summary

As a Senior Software Engineer, you will be directly responsible for the experience our clients have on the platform by designing, building, and maintaining the server side of Avoma. This is an exciting opportunity for those who are curious, diligent, and want to learn and develop their professional skills within a fast-growing start-up. As an early member of the team, we will work together to create the building blocks for our development strategy and set up the foundations and processes for future Software Engineers at Avoma.


The ideal candidate will be experienced in building the structure of a B2B SaaS application. We expect you to be a reliable professional, able to balance the needs of the product roadmap and the needs of the customers. Your overarching goal is to ensure an enjoyable experience for everyone using Avoma.

We strongly believe in the overall growth and continued development of each new hire. As a rapidly expanding business, there is a high degree of opportunity for progression, creativity, and ownership. In the last 12 months, we have seen growth across all metrics, and we are looking for strong software engineers to scale up our platform.


What you will be doing in the role (Responsibilities)

  • Develop features and improvements to the Avoma product in a secure, well-tested, and performant way
  • Collaborate with Product Management and other stakeholders within Engineering (Frontend, UX, etc.) to maintain a high bar for quality in a fast-paced, iterative environment
  • Advocate for improvements to product quality, security, and performance
  • Solve technical problems of moderate scope and complexity
  • Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale environment.
  • Conduct Code Review to ensure performance of our backend services
  • Build RESTful APIs and features for UI application clients (including Web and Mobile)
  • Deploy and integrate with machine learning models developed by the AI team
  • Own and work on the core data processing pipeline and video processing pipeline
  • Third-party integrations - Calendar, CRM systems, Video Conferencing
  • Search and analytics infrastructure
  • DevOps and security


We use the Django framework to build our backend platform and React.js for our frontend application.
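
For a flavour of what Django-based REST work looks like, here is a minimal Django REST Framework sketch; the model and route names are invented for illustration and would live inside a configured Django app:

from django.db import models
from rest_framework import routers, serializers, viewsets

class Meeting(models.Model):
    title = models.CharField(max_length=200)
    starts_at = models.DateTimeField()

class MeetingSerializer(serializers.ModelSerializer):
    class Meta:
        model = Meeting
        fields = ["id", "title", "starts_at"]

class MeetingViewSet(viewsets.ModelViewSet):
    queryset = Meeting.objects.order_by("-starts_at")
    serializer_class = MeetingSerializer

router = routers.DefaultRouter()
router.register(r"meetings", MeetingViewSet)
# urlpatterns = router.urls  # wired into the project's urls.py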


Requirements

What you need to be successful in the role (Qualifications)

  • 4+ years of experience as a Software Engineer with a track record of delivering software with high business impact
  • Significant professional experience with Python, Django
  • Experience building RESTful APIs
  • Experience using AI coding agents like Cursor and different prompting techniques.
  • System design knowledge using AWS services and other frameworks from the Python / Django ecosystem.
  • Experience writing scalable, high-performant, and clean code
  • B.S./B.E. in Computer Science or relevant engineering field
  • Proficiency in the English language, both written and verbal, sufficient for success in a largely asynchronous work environment


What sets you apart (Preferred Qualifications)

  • Demonstrated capacity to clearly and concisely communicate about complex technical, architectural, and/or organizational problems and propose thorough iterative solutions
  • Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent these problems
  • Comfort working in a highly agile, intensely iterative software development process
  • Positive and solution-oriented mindset
  • An inclination towards communication, inclusion, and visibility
  • Experience owning a project from concept to production, including proposal, discussion, and execution.
  • Demonstrated ability to work closely with other parts of the organization
  • Knowledge of Artificial Intelligence technologies and tools


Benefits

  • A transparent salary structure
  • Senior salary range: ₹2,800,000 - ₹3,800,000, depending on experience
  • 15 days of PTO annually on top of company holidays
  • Employee Stock Options Program


About Avoma

Avoma is an intelligent meeting assistant for teams looking to automate some of the common tasks required for customer-focused meetings. We leverage NLP and machine learning to summarize meeting notes and extract key topics and action items discussed. All of this data automatically syncs back into their CRM. This helps save end users time and lets them focus on what matters most: their customer interactions.

We are a venture-funded early-stage startup, have 1000+ paid customers, and are growing consistently month over month.
LogIQ Labs Pvt.Ltd.

Posted by HR eShipz
Remote only
4 - 5 yrs
₹6L - ₹13L / yr
Python
Generative AI
AI Agents

We are seeking a Generative AI Developer to design, build, and scale next-generation AI systems. You will go beyond simple API integration to architect RAG (Retrieval-Augmented Generation) pipelines, fine-tune LLMs (Large Language Models), and develop Agentic workflows where AI can autonomously handle multi-step tasks. You will be responsible for the "System" around the model—ensuring reliability, cost-efficiency, and ethical safety.


Responsibilities :


  • Agentic Orchestration: Design and implement AI agents that use tools (APIs, databases) to solve complex, multi-step business problems.
  • RAG Architecture: Build and optimize high-performance RAG pipelines using vector databases (e.g., Pinecone, Weaviate, or Milvus) to provide AI with long-term memory and factual grounding (see the sketch after this list).
  • Model Fine-Tuning: Customize pre-trained models (like Llama 3, GPT-4, or Claude) using techniques like LoRA and QLoRA for domain-specific accuracy.
  • Prompt Engineering: Develop advanced prompt strategies (Chain-of-Thought, Few-Shot) and version-control them as first-class software artifacts.
  • Evaluation & Observability: Build "Eval" frameworks to measure model performance, hallucination rates, and latency to ensure production-grade reliability.
  • LLMOps & Deployment: Collaborate with DevOps to containerize (Docker/Kubernetes) and deploy models on cloud platforms (AWS Bedrock, Azure AI, or Google Vertex AI).


Required Technical Skills:


  • Programming: Mastery of Python (FastAPI, PyTorch, TensorFlow).
  • Frameworks: Proficiency in LangChain, LlamaIndex, or Haystack.
  • Vector Databases: Experience with Pinecone, FAISS, or ChromaDB.
  • Model Expertise: Hands-on experience with LLMs (OpenAI, Anthropic) and Open-Source models (Mistral, Llama).
  • Data Engineering: Ability to build pipelines for data cleaning, chunking, and embedding.
  • Cloud Platforms: Familiarity with AI services on AWS, GCP, or Azure




Alpha

Posted by Yash Makhecha
Remote only
1 - 3 yrs
₹3L - ₹8L / yr
Python
Node.js
JavaScript
PostgreSQL
Redis

Software Development Engineer 1 (SDE1)


Location: Remote (India preferred) | Type: Full-time | Compensation: Competitive salary + early-stage stock options



🧠 About Alpha


Modern revenue teams juggle 10+ point-solutions. Alpha unifies them into an agent-powered platform that plans, executes, and optimises GTM campaigns—so every touch happens on the right channel, at the right time, with the right context.


Alpha is building the world’s most intuitive AI stack for revenue teams — to engage, convert, and scale revenue with an AI-powered GTM team.

Our mission is to make AI not just accessible, but dependable and truly useful.


We’re early, funded, and building with urgency. Join us to help define what work looks like when AI works for you.



🔧 What You’ll Do


You’ll lead the development of our AI GTM platform and underlying AI agents to power seamless multi-channel GTMs.


This is a hybrid UX-engineering role: you’ll translate high-level user journeys into interfaces that feel clear, powerful, and trustworthy.


Your responsibilities:


  • Design & implement end-to-end features across React-TS/Next.js, Node.js, Postgres, Redis, and Python micro-services for LLM agents.
  • Build & document scalable GraphQL / REST APIs that expose our data model (Company, Person, Campaign, Sequence, Asset, ActivityRecord, InferenceSnippet).
  • Integrate third-party APIs (CRM, email, ads, CMS) and maintain data sync reliability >98%.
  • Implement the dynamic agent flow builder with configurable steps, HITL checkpoints, and audit trails.
  • Instrument product analytics, error tracking, and CI pipelines for fast feedback and safe releases.
  • Work directly with the founder on product scoping, technical roadmap, and hiring pipeline.


✅ What We’re Looking For

  • 1–3 years experience building polished web apps (React, Vue, or similar)
  • Strong eye for design fidelity, UX decisions, and motion
  • Experience integrating frontend with backend APIs and managing state
  • Experience with visual builders, workflow editors, or schema UIs is a big plus
  • You love taking complex systems and making them feel simple


💎 What You’ll Get

  • Competitive salary + high-leverage early equity
  • Ownership of user experience at the most critical phase
  • A tight feedback loop with real users from Day 1
  • Freedom to shape UI decisions, patterns, and performance for the long haul
LuvFitz

Posted by Recruitment Team
Remote only
2 - 6 yrs
₹10L - ₹12L / yr
Data engineering
Python
React.js
Web Scraping

Founding Full-Stack Engineer

Consumer AI | Delhi (Hybrid) | Building for the US Market


The Opportunity

We are a pre-seed startup redefining fashion discovery. We believe the future of e-commerce isn't about search bars and endless scrolling; it’s about context-aware curation.

We are looking for a Founding Full-Stack Engineer who sits at the intersection of Systems and Design. You will be the technical anchor of the team, turning chaotic web data into a fluid, consumer-grade experience.


The Role

You won't just be writing code; you will define the product's architecture. We need Vertical Ownership—the ability to take a feature from a raw database row all the way to a pixel-perfect interaction on the screen.

  • The Data Engine (Backend & Scraping): You will build robust data pipelines that scrape and frequently refresh data from top fashion retailers. You will further structure and enrich this data, add additional attributes, and prepare it for downstream consumption (a scraping sketch follows this list). Added bonus: prior experience with feature engineering.
  • The Experience (Frontend): You will craft the user interface. We are building a consumer brand, so "functional" isn't enough; it needs to feel "alive." If you have a refined design sense and prior exposure to building eCommerce websites with complex landing and category pages, then you are the right fit for this role.
  • The Intelligence (AI Integration): You should be able to understand and re-run existing research algorithms on Fill-In-The-Blank (FITB) tasks.
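
A bare-bones sketch of the scraping work in the first bullet, using requests and BeautifulSoup; the retailer URL and CSS selectors are hypothetical, and a production pipeline would add proxies, retries, and scheduling:

import requests
from bs4 import BeautifulSoup

URL = "https://example-retailer.com/dresses"  # hypothetical listing page

def scrape_products(url: str) -> list[dict]:
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    products = []
    for card in soup.select("div.product-card"):  # selector is site-specific
        products.append({
            "title": card.select_one("h2").get_text(strip=True),
            "price": card.select_one("span.price").get_text(strip=True),
            "url": card.select_one("a")["href"],
        })
    return products

if __name__ == "__main__":
    for item in scrape_products(URL):
        print(item)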


The Toolkit

  • Core Stack: Python for the heavy lifting; JavaScript/TypeScript (React) for the web interface.
  • Data Ops: Experience building complex scrapers, handling proxies, and managing data pipelines is non-negotiable.
  • Design Engineering: A strong grasp of building advanced eCommerce components (multi-tile grid layouts, drag-and-drop elements, etc.)
  • Low-code tools: Adept at rapidly prototyping with Replit, Lovable, etc.


The DNA We Are Looking For

  • You are a Builder, not just a Coder. You prefer shipping a live prototype over debating a PR for three days.
  • You have Taste. You understand that building for the US consumer market requires a level of polish and minimalism that standard B2B SaaS ignores.
  • You are "T-Shaped." You have deep expertise in one area (either scraping or frontend), but you are dangerous enough across the entire stack to build solo.
  • You embrace the Chaos. You know how to build structure out of messy unstructured data.


Why Join Now?

  • 0 to 1 Ownership: No legacy code, no technical debt. Just you, the founders, and a blank editor.
  • Global Impact: You are building from India, but for the most competitive consumer market in the world (USA). The standards are higher, and so is the reward.
  • Founding Team Status: Competitive pay with a future path to equity
Hudson Data

Posted by MadanLal Gupta
Remote only
6 - 10 yrs
₹9L - ₹12L / yr
Python
SQL
Google Analytics
Linux/Unix
Google Cloud Platform (GCP)

About Hudson Data


At Hudson Data, we view AI as both an art and a science. Our cross-functional teams — spanning business leaders, data scientists, and engineers — blend AI/ML and Big Data technologies to solve real-world business challenges. We harness predictive analytics to uncover new revenue opportunities, optimize operational efficiency, and enable data-driven transformation for our clients.


Beyond traditional AI/ML consulting, we actively collaborate with academic and industry partners to stay at the forefront of innovation. Alongside delivering projects for Fortune 500 clients, we also develop proprietary AI/ML products addressing diverse industry challenges.


Headquartered in New Delhi, India, with an office in New York, USA, Hudson Data operates globally, driving excellence in data science, analytics, and artificial intelligence.



About the Role


We are seeking a Data Analyst & Modeling Specialist with a passion for leveraging AI, machine learning, and cloud analytics to improve business processes, enhance decision-making, and drive innovation. You’ll play a key role in transforming raw data into insights, building predictive models, and delivering data-driven strategies that have real business impact.



Key Responsibilities


1. Data Collection & Management

• Gather and integrate data from multiple sources including databases, APIs, spreadsheets, and cloud warehouses.

• Design and maintain ETL pipelines ensuring data accuracy, scalability, and availability.

• Utilize any major cloud platform (Google Cloud, AWS, or Azure) for data storage, processing, and analytics workflows.

• Collaborate with engineering teams to define data governance, lineage, and security standards.


2. Data Cleaning & Preprocessing

• Clean, transform, and organize large datasets using Python (pandas, NumPy) and SQL (see the sketch after this section).

• Handle missing data, duplicates, and outliers while ensuring consistency and quality.

• Automate data preparation using Linux scripting, Airflow, or cloud-native schedulers.
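
A small pandas sketch of the cleaning steps above; the file and column names are assumptions for illustration:

import pandas as pd

# Hypothetical raw extract.
df = pd.read_csv("transactions.csv", parse_dates=["order_date"])

# Drop exact duplicates and normalise obvious inconsistencies.
df = df.drop_duplicates()
df["region"] = df["region"].str.strip().str.title()

# Handle missing values: fill numeric gaps, drop rows missing the key.
df["amount"] = df["amount"].fillna(df["amount"].median())
df = df.dropna(subset=["customer_id"])

# Clip extreme outliers to the 1st-99th percentile band.
low, high = df["amount"].quantile([0.01, 0.99])
df["amount"] = df["amount"].clip(low, high)

df.to_csv("transactions_clean.csv", index=False)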


3. Data Analysis & Insights

• Perform exploratory data analysis (EDA) to identify key trends, correlations, and drivers.

• Apply statistical techniques such as regression, time-series analysis, and hypothesis testing.

• Use Excel (including pivot tables) and BI tools (Tableau, Power BI, Looker, or Google Data Studio) to develop insightful reports and dashboards.

• Present findings and recommendations to cross-functional stakeholders in a clear and actionable manner.


4. Predictive Modeling & Machine Learning

• Build and optimize predictive and classification models using scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, and H2O.ai (see the sketch after this section).

• Perform feature engineering, model tuning, and cross-validation for performance optimization.

• Deploy and manage ML models using Vertex AI (GCP), AWS SageMaker, or Azure ML Studio.

• Continuously monitor, evaluate, and retrain models to ensure business relevance.
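
And a compact scikit-learn example of the cross-validated model building and tuning described above, with synthetic data standing in for real features:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for a real feature matrix and target.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# Cross-validated baseline.
model = GradientBoostingClassifier(random_state=42)
print("baseline AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

# A simple hyperparameter tuning pass.
grid = GridSearchCV(
    model,
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
    cv=5,
    scoring="roc_auc",
)
grid.fit(X, y)
print("best params:", grid.best_params_, "best AUC:", grid.best_score_)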


5. Reporting & Visualization

• Develop interactive dashboards and automated reports for performance tracking.

• Use pivot tables, KPIs, and data visualizations to simplify complex analytical findings.

• Communicate insights effectively through clear data storytelling.


6. Collaboration & Communication

• Partner with business, engineering, and product teams to define analytical goals and success metrics.

• Translate complex data and model results into actionable insights for decision-makers.

• Advocate for data-driven culture and support data literacy across teams.


7. Continuous Improvement & Innovation

• Stay current with emerging trends in AI, ML, data visualization, and cloud technologies.

• Identify opportunities for process optimization, automation, and innovation.

• Contribute to internal R&D and AI product development initiatives.



Required Skills & Qualifications


Technical Skills

• Programming: Proficient in Python (pandas, NumPy, scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, H2O.ai).

• Databases & Querying: Advanced SQL skills; experience with BigQuery, Redshift, or Azure Synapse is a plus.

• Cloud Expertise: Hands-on experience with one or more major platforms — Google Cloud, AWS, or Azure.

• Visualization & Reporting: Skilled in Tableau, Power BI, Looker, or Excel (pivot tables, data modeling).

• Data Engineering: Familiarity with ETL tools (Airflow, dbt, or similar).

• Operating Systems: Strong proficiency with Linux/Unix for scripting and automation.


Soft Skills

• Strong analytical, problem-solving, and critical-thinking abilities.

• Excellent communication and presentation skills, including data storytelling.

• Curiosity and creativity in exploring and interpreting data.

• Collaborative mindset, capable of working in cross-functional and fast-paced environments.



Education & Certifications

• Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.

• Master’s degree in Data Analytics, Machine Learning, or Business Intelligence preferred.

• Relevant certifications are highly valued:

• Google Cloud Professional Data Engineer

• AWS Certified Data Analytics – Specialty

• Microsoft Certified: Azure Data Scientist Associate

• TensorFlow Developer Certificate



Why Join Hudson Data


At Hudson Data, you’ll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools — from AI and ML frameworks to cloud-based analytics platforms — to solve meaningful problems. You’ll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.


CLOUDSUFI

Posted by Ayushi Dwivedi
Remote only
3 - 5 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
Python
SQL

If interested, please share your resume at ayushi.dwivedi at cloudsufi.com


Note - This role is remote but requires a quarterly visit to the Noida office (1 week per quarter). If that works for you, please share your resume.


Data Engineer 

Position Type: Full-time


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities

ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.

Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.

Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.

Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 

API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.

Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.

Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.

Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.

Qualifications and Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.

Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.

Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.

Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:

Must Have - SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)

Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)

Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling

Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).

Experience with data validation techniques and tools.

Familiarity with CI/CD practices and the ability to work in an Agile framework.

Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:

Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).

Familiarity with similar large-scale public dataset integration initiatives.

Experience with multilingual data integration.

CLOUDSUFI

Posted by Ayushi Dwivedi
Remote only
6 - 10 yrs
₹25L - ₹38L / yr
Google Cloud Platform (GCP)
SQL
Python
BigQuery

If interested, please send your resume at ayushi.dwivedi at cloudsufi.com


Candidates must currently be located in Bangalore (client office visits are required) and must be open to visiting the Noida office for one week per quarter.


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.

 

Our Values 

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

 

Equal Opportunity Statement 

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation, or national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities

ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.

Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.

Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.

Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 

API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.

Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.

Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.

Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.

Qualifications and Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.

Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.

Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.

Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:

Must Have - SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)

Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)

Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling

Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).

Experience with data validation techniques and tools.

Familiarity with CI/CD practices and the ability to work in an Agile framework.

Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:

Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).

Familiarity with similar large-scale public dataset integration initiatives.

Experience with multilingual data integration.

Fluxon
Posted by Ariba Khan
Remote only
5 - 10 yrs
Upto ₹55L / yr (Varies)
Python
Artificial Intelligence (AI)
Generative AI
LangGraph
LangChain

Who we are

We’re Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world.


About the role

As an AI Engineer at Fluxon, you’ll take the lead in designing, building and deploying AI-powered applications for our clients.


You'll be responsible for:

  • System Architecture: Design and implement end-to-end AI systems and their parts, including data ingestion, preprocessing, model inference, and output structuring
  • Generative AI Development: Build and optimize RAG (Retrieval-Augmented Generation) systems and Agentic workflows using frameworks like LangChain, LangGraph, ADK (Agent Development Kit), Genkit (see the minimal RAG sketch after this list)
  • Production Engineering: Deploy models to production environments (AWS/GCP/Azure) using Docker and Kubernetes, ensuring high availability and scalability
  • Evaluation & Monitoring: Implement feedback loops to evaluate model performance (accuracy, hallucinations, relevance) and set up monitoring for drift in production
  • Collaboration: Work closely with other engineers to integrate AI endpoints into the core product and with product managers to define AI capabilities
  • Model Optimization: Fine-tune open-source models (e.g., Llama, Mistral) for specific domain tasks and optimize them for latency and cost
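For orientation, a minimal, hedged RAG sketch in Python with LangChain follows; import paths move between LangChain releases, and the documents, model name, and prompt are placeholder assumptions rather than Fluxon's actual stack.

```python
# Minimal RAG flow: embed and index a few texts, retrieve the most
# relevant ones for a question, and ground the LLM answer in them.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

texts = [
    "RAG grounds LLM answers in retrieved context.",
    "Agentic workflows chain tool calls around a model.",
]
store = FAISS.from_texts(texts, OpenAIEmbeddings())      # embed + index
retriever = store.as_retriever(search_kwargs={"k": 1})   # top-1 chunk

question = "What does RAG do?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
llm = ChatOpenAI(model="gpt-4o-mini")
print(llm.invoke(f"Answer from this context only:\n{context}\n\nQ: {question}").content)
```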


You'll work with technologies including:

Languages

  • Python (Preferred)
  • Java / C++ / Scala / R / JavaScript

AI / ML

  • LangChain
  • LangGraph
  • Google ADK
  • Genkit
  • OpenAI API
  • LLM - Large Language Model
  • Vertex AI

Cloud & Infrastructure

  • Platforms: Google Cloud Platform (GCP) or Amazon Web Services (AWS)
  • Storage: Google Cloud Storage (GCS) or AWS S3
  • Orchestration: Temporal, Kubernetes
  • Data Stores
  • PostgreSQL
  • Firestore
  • MongoDB

Monitoring & Observability

  • GCP Cloud Monitoring Suite


Qualifications

  • 5+ years of industry experience in software engineering roles
  • Strong proficiency in Python or another preferred AI programming language such as Scala, JavaScript, or Java
  • Strong understanding of Transformer architectures, embeddings, and vector similarity search
  • Experience integrating with LLM provider APIs (OpenAI, Anthropic, Google Vertex AI)
  • Hands-on experience with agent workflows like LangChain, LangGraph
  • Experience with Vector Databases and traditional SQL / NoSQL databases
  • Familiarity with cloud platforms, preferably GCP or AWS
  • Understanding of patterns like RAG (Retrieval-Augmented Generation), few-shot prompting, and Fine-Tuning
  • Solid understanding of software development practices including version control (Git) and CI/CD

Nice to have:

  • Experience with Google Cloud Platform (GCP) services, specifically Vertex AI, Firestore, and Cloud Functions
  • Knowledge of prompt engineering techniques (Chain-of-Thought, ReAct, Tree of Thoughts)
  • Experience building "Agentic" workflows where AI can execute tools or API calls autonomously


What we offer

  • Exposure to high-profile SV startups and enterprise companies
  • Competitive salary
  • Fully remote work with flexible hours
  • Flexible paid time off
  • Profit-sharing program
  • Healthcare
  • Parental leave, including adoption and fostering
  • Gym membership and tuition reimbursement
  • Hands-on career development
Rapid Canvas
Posted by Nikita Sinha
Remote only
6 - 10 yrs
Upto ₹60L / yr (Varies)
Java
Python
Go Programming (Golang)
NodeJS (Node.js)
PHP

We are looking for back-end engineering experts with the passion to take on new challenges in a high-growth startup environment. If you love finding creative solutions to coding challenges using the latest tech stack, such as Java 18+ and Spring Boot 3+, then we would like to speak with you.

Roles & Responsibilities

  • You will be part of a team that focuses on building a world-class data science platform
  • Work closely with both product owners and architects to fully understand business requirements and the design philosophy
  • Optimize web and data applications for performance and scalability
  • Collaborate with automation engineering team to deliver high-quality deliverables within a challenging time frame
  • Produce quality code, raising the bar for team performance and speed
  • Recommend systems solutions by comparing advantages and disadvantages of custom development and purchased alternatives
  • Follow emerging technologies

Key Skills Required

  • Bachelor’s degree (or equivalent) in computer science
  • At least 6 years of experience in software development using Java / Python, Spring Boot, REST APIs, and scalable microservice frameworks.
  • Strong foundation in computer science, algorithms, and web design
  • Experience in writing highly secure web applications
  • Knowledge of container/orchestration tools (Kubernetes, Docker, etc.) and UI frameworks (NodeJS, React)
  • Good development habits, including unit testing, CI, and automated testing
  • High growth mindset that challenges the status quo and focuses on unique ideas and solutions
  • Experience on working with dynamic startups / high intensity environment would be a Plus
  • Experience working with shell scripting, GitHub Actions, Unix, and prominent cloud platforms and services like GCP, Azure, and AWS S3 is a plus

Why Join Us

  • Drive measurable impact for Fortune 500 customers across the globe, helping them turn AI vision into operational value.
  • Be part of a category-defining AI company, pioneering a hybrid model that bridges agents and experts.
  • Own strategic accounts end-to-end and shape what modern AI success looks like.
  • Work with a cross-functional, high-performance team that values execution, clarity, and outcomes.
  • Globally competitive compensation and benefits tailored to your local market.
  • Recognized as a Top 5 Data Science and Machine Learning platform on G2 for customer satisfaction.

Rapid Canvas
Posted by Nikita Sinha
Remote only
4 - 8 yrs
Upto ₹60L / yr (Varies)
Python
Large Language Models (LLM) tuning
Pipeline management
Systems design
Artificial Intelligence (AI)

We are seeking a highly motivated and skilled AI Engineer with strong fundamentals in applied machine learning and a passion for building and deploying production-grade AI solutions for enterprise clients. As a key technical expert and the face of our company, you will interface directly with customers to design, build, and deliver cutting-edge AI applications. This is a customer-facing role that requires a balance of deep technical expertise and excellent communication skills.

Roles & Responsibilities

Design & Deliver AI Solutions

  • Interact directly with customers.
  • Understand their business requirements.
  • Translate them into robust, production-ready AI solutions.
  • Manage AI projects with the customer's vision in mind.
  • Build long-term, trusted relationships with clients.

Build & Integrate Agents

  • Architect, build, and integrate intelligent agent systems.
  • Automate IT functions and solve specific client problems.
  • Use expertise in frameworks like LangChain or LangGraph to build multi-step tasks (see the sketch after this list).
  • Integrate these custom agents directly into the RapidCanvas platform.
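To make the agent idea concrete, here is a deliberately framework-free, hedged sketch of the tool-calling loop that frameworks like LangChain and LangGraph implement for you; the decide function stubs out the LLM call, and every name in it is hypothetical.

```python
# Conceptual agent loop: a (stubbed) model picks a tool, the loop runs it,
# and the observation is fed back until the model declares the task done.
def restart_service(name: str) -> str:
    return f"service {name} restarted"  # placeholder IT automation action

TOOLS = {"restart_service": restart_service}

def decide(history: list[str]) -> dict:
    # Stand-in for an LLM call that returns a structured action.
    if not history:
        return {"tool": "restart_service", "args": {"name": "billing-api"}}
    return {"tool": None, "final": "Done: " + history[-1]}

history: list[str] = []
while True:
    action = decide(history)
    if action["tool"] is None:
        print(action["final"])
        break
    history.append(TOOLS[action["tool"]](**action["args"]))
```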

Implement LLM & RAG Pipelines

  • Develop grounding pipelines with retrieval-augmented generation (RAG).
  • Contextualize LLM behavior with client-specific knowledge.
  • Build and integrate agents with infrastructure signals like logs and APIs.

Collaborate & Enable

  • Work with customer data science teams.
  • Collaborate with other internal Solutions Architects, Engineering, and Product teams.
  • Ensure seamless integration of AI solutions.
  • Serve as an expert on the RapidCanvas platform.
  • Enable and support customers in building their own applications.
  • Act as a Product Champion, providing crucial feedback to the product team to drive innovation.

Data & Model Management

  • Oversee the entire AI project lifecycle.
  • Start from data preprocessing and model development.
  • Finish with deployment, monitoring, and optimization.

Champion Best Practices

  • Write clean, maintainable Python code.
  • Champion engineering best practices.
  • Ensure high performance, accuracy, and scalability.

Key Skills Required

Experience

  • 5+ years of hands-on experience in AI/ML engineering or backend systems.
  • Recent exposure to LLMs or intelligent agents is a must.

Technical Expertise

  • Proficiency in Python.
  • Proven track record of building scalable backend services or APIs.
  • Expertise in machine learning, deep learning, and Generative AI concepts.
  • Hands-on experience with LLM platforms (e.g., GPT, Gemini).
  • Deep understanding of and hands-on experience with agentic frameworks like LangChain, LangGraph, or CrewAI.
  • Experience with vector databases (e.g., Pinecone, Weaviate, FAISS).

Customer & Communication Skills

  • Proven ability to partner with enterprise stakeholders.
  • Excellent presentation skills.
  • Comfortable working independently.
  • Manage multiple projects simultaneously.

Preferred Skills

  • Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
  • Knowledge of MLOps practices.
  • Experience in the AI services industry or startup environments.

Why Join Us

  • High-impact opportunity: Play a pivotal role in building a new business vertical within a rapidly growing AI company.
  • Strong leadership & funding: Backed by top-tier investors, our leadership team has deep experience scaling AI-driven businesses.
  • Recognized as a top 5 Data Science and Machine Learning platform by independent research firm G2 for customer satisfaction.


PGAGI
Posted by Javeriya Shaik
Remote only
0 - 0.6 yrs
₹2L - ₹2L / yr
Python
Large Language Models (LLM)
Natural Language Processing (NLP)
Deep Learning
FastAPI

We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.


Duration: 6 months


Perks:

- Hands-on experience with real AI projects.

- Mentoring from industry experts.

- A collaborative, innovative and flexible work environment

After completion of the internship period, there is a chance to get a full-time opportunity as an AI/ML Engineer (up to 12 LPA).


Compensation:

- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.

- Stipend: Base is INR 8,000/- and can increase up to INR 20,000/- depending on performance metrics.

Key Responsibilities

  • Experience working with Python, LLMs, deep learning, NLP, etc.
  • Utilize GitHub for version control, including pushing and pulling code updates.
  • Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models (see the quick-start sketch after this list).
  • Engage in prompt engineering and the fine-tuning process of AI models.
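As a hedged quick-start for the Hugging Face workflow above (the library picks a small default model here, so this is purely illustrative):

```python
# Run a default sentiment-analysis model from the Hugging Face Hub
# (weights are downloaded on first use).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I love working on real AI projects!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```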

Requirements

  • Proficiency in Python programming.
  • Experience with GitHub and version control workflows.
  • Familiarity with AI platforms such as Hugging Face and OpenAI.
  • Understanding of prompt engineering and model fine-tuning.
  • Excellent problem-solving abilities and a keen interest in AI technology.


To Apply Click below link and submit the Assignment

https://pgagi.in/jobs/28df1e98-f0c3-4d58-9509-d5b1a4ea9754

Verix
Posted by Eman Khan
Remote only
4 - 8 yrs
₹15L - ₹30L / yr
Search Engine Optimization (SEO)
Python
NodeJS (Node.js)
Java
SEMRush

About OptimizeGEO

OptimizeGEO.ai is our flagship product that helps brands stay visible and discoverable in AI-powered answers. Unlike traditional SEO, which optimizes for keywords and rankings, OptimizeGEO operationalizes GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).


Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), the company is backed by Micron Ventures, Better Capital, FalconX, and leading angels including Randi Zuckerberg, Vani Kola and Harsh Jain.


Role Overview

We are hiring a Senior Backend Engineer to build the data and services layer that powers OptimizeGEO’s analytics, scoring, and reporting. This role partners closely with our GEO/AEO domain experts and data teams to translate framework gap analysis, share-of-voice, entity/knowledge-graph coverage, and trust signals into scalable backend systems and APIs.


You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.


Key Responsibilities

  • Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
  • Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms.
  • Data scraping using various tools, and working on volume estimates for accurate consumer insights for brands
  • Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
  • Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling (see the backoff sketch after this list).
  • Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
  • Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
  • Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
  • Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.
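As a small, hedged illustration of the retries/backoff item above, here is a generic Python helper (not OptimizeGEO's code) that wraps a flaky ingestion call in jittered exponential backoff:

```python
# Retry a function with jittered exponential backoff: 0.5s, 1s, 2s, ...
import random
import time
from functools import wraps

def with_backoff(max_tries: int = 5, base: float = 0.5):
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_tries - 1:
                        raise  # out of retries: surface the error
                    time.sleep(base * 2 ** attempt + random.random() * 0.1)
        return wrapper
    return deco

@with_backoff(max_tries=3)
def fetch_page(url: str) -> str:
    raise IOError(f"pretend {url} timed out")  # placeholder flaky call
```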


Required Qualifications (Must Have)

  • Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
  • Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
  • Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
  • Background in digital PR/reputation signals and local/international SEO considerations.
  • Comfort working with analysts to co-define KPIs and build executive-level reporting.


Expected Qualifications

  • 4+ years of experience building backend systems in production (startups or high-growth product teams preferred).
  • Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
  • Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
  • Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
  • Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.


Tooling & Stack (Illustrative)

  • Runtime: Python/TypeScript/Go
  • Data: Postgres/BigQuery + object storage (S3/GCS)
  • Pipelines: Airflow/Prefect, Kafka/PubSub
  • Infra: AWS/GCP, Docker, Kubernetes, Terraform
  • Observability: OpenTelemetry, Prometheus/Grafana, ELK/Cloud Logging
  • Collab: GitHub, Linear/Jira, Notion, Looker/Metabase


Working Model

  • Hybrid-remote within India with limited periodic in-person collaboration
  • Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.


Equal Opportunity

OptimizeGEO is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Grey Chain Technology
Posted by Deebaj Mir
Remote only
7 - 10 yrs
₹18L - ₹24L / yr
Python
FastAPI
Generative AI
AI Agents
Amazon Web Services (AWS)

Company: Grey Chain AI

Location: Remote

Experience: 7+ Years

Employment Type: Full Time


About the Role

We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.


You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.

Key Responsibilities

  • Lead the design and development of Python-based AI systems, APIs, and microservices.
  • Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
  • Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
  • Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
  • Ensure reliability, scalability, and security of AI solutions in production.
  • Mentor junior engineers and provide technical leadership to the team.
  • Work closely with clients to understand business needs and translate them into robust AI solutions.
  • Drive adoption of latest GenAI trends, tools, and best practices across projects.

Must-Have Technical Skills

  • 7+ years of hands-on experience in Python development, building scalable backend systems.
  • Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.; see the minimal FastAPI sketch after this list).
  • Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
  • Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
  • Experience designing multi-agent workflows, tool calling, and prompt pipelines.
  • Strong understanding of REST APIs, microservices, and cloud-native architectures.
  • Experience deploying AI solutions on AWS, Azure, or GCP.
  • Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
  • Proficiency with Git, CI/CD, and production deployment pipelines.
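A minimal, hedged sketch of the FastAPI-plus-Pydantic service style listed above; the route, model, and stubbed response are hypothetical:

```python
# Tiny FastAPI service with a validated request body. Run with:
#   uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/v1/complete")
def complete(prompt: Prompt) -> dict:
    # A real service would call an LLM provider here; we echo instead.
    return {"echo": prompt.text}
```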


Leadership & Client-Facing Experience

  • Proven experience leading engineering teams or acting as a technical lead.
  • Strong experience working directly with foreign or enterprise clients.
  • Ability to gather requirements, propose solutions, and own delivery outcomes.
  • Comfortable presenting technical concepts to non-technical stakeholders.


What We Look For

  • Excellent communication, comprehension, and presentation skills.
  • High level of ownership, accountability, and reliability.
  • Self-driven professional who can operate independently in a remote setup.
  • Strong problem-solving mindset and attention to detail.
  • Passion for GenAI, agentic systems, and emerging AI trends.


Why Grey Chain AI

Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.

Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.


Flycatch infotech PVT LTD
Posted by Flycatch Recruitment
Remote only
3 - 4 yrs
₹8.4L - ₹9.6L / yr
Java
Python

1. Minimum of 3 years of experience in ERPNext development, with a strong understanding of the ERPNext framework and customization.

2. Proficiency in Python, JavaScript, HTML, CSS, and the Frappe framework. Experience with ERPNext’s core modules such as Accounting, Sales, Purchase, Inventory, and HR is essential (see the sketch after this list).

3. Experience with MySQL or MariaDB databases.
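For flavor, a small hedged sketch of server-side customization in the Frappe framework (it assumes a running Frappe/ERPNext bench, and the method name and filters are hypothetical):

```python
# Hypothetical whitelisted method, callable from the client via frappe.call:
# returns a customer's unpaid sales invoices.
import frappe

@frappe.whitelist()
def outstanding_invoices(customer: str):
    return frappe.get_all(
        "Sales Invoice",
        filters={"customer": customer, "status": "Unpaid"},
        fields=["name", "grand_total", "due_date"],
    )
```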

Corporate Web Solutions
Remote only
0 - 1 yrs
₹1L - ₹2L / yr
Python
Data Science

About the Job:

As a Data Science Intern, you will work closely with our experienced data scientists and analysts to extract, analyze, and interpret large datasets to drive strategic decisions and improve our products and services. You will gain hands-on experience with data science tools, machine learning models, and statistical methods to help solve complex problems.


Currently offering a "Data Science Internship" for 1-6 months.


Data science projects on which interns will work:

Project 01 : Image Caption Generator Project in Python

Project 02 : Credit Card Fraud Detection Project

Project 03 : Movie Recommendation System

Project 04 : Customer Segmentation

Project 05 : Brain Tumor Detection with Data Science


Eligibility


A PC or Laptop with decent internet speed.

Good understanding of English language.

Any graduate with a desire to become a web developer; freshers are welcome.

Knowledge of HTML, CSS and JavaScript is a plus but NOT mandatory.

You will receive proper training, so don't hesitate to apply even if you don't have a coding background.


Duration: 2 Months (with the possibility of extending up to 6 months)

MODE: Work From Home (Online)


Key Responsibilities:


Assist in collecting, cleaning, and preprocessing data from various sources (see the toy pandas sketch after this list).

Perform exploratory data analysis to identify trends, patterns, and anomalies.

Develop and implement machine learning models and algorithms.

Create data visualizations and reports to communicate findings to stakeholders.

Collaborate with team members on data-driven projects and research.

Participate in meetings and contribute to discussions on project progress and strategy.
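A toy, hedged example of the cleaning-and-preprocessing step above, using pandas on invented data:

```python
# Drop duplicate rows, then impute a missing age with the column median.
import pandas as pd

df = pd.DataFrame({"age": [25, None, 25], "city": ["Pune", "Delhi", "Pune"]})
df = df.drop_duplicates()
df["age"] = df["age"].fillna(df["age"].median())
print(df)
```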


Benefits


Internship Certificate

Letter of recommendation

Stipend (performance-based)

Part-time work from home (2-3 hours per day)

5 days a week, Fully Flexible Shift

Heaven Designs
Posted by Reshika Mendiratta
Remote only
2yrs+
Upto ₹12L / yr (Varies)
Python
Django
RESTful APIs
DevOps
CI/CD

Backend Engineer (Python / Django + DevOps)


Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)


About SurgePV

SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.

Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.

As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.


Role Overview

We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.

This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.


Key Responsibilities

  • Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows.
  • Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
  • Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
  • Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
  • Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads (see the index sketch after this list).
  • Implement caching strategies and performance optimizations where required.
  • Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
  • Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
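As a hedged sketch of the PostgreSQL optimization work above, here is how a composite index might be declared on a Django model (this assumes a configured Django app, and the model and field names are hypothetical):

```python
# Composite index to speed up "latest layouts for a project" queries.
from django.db import models

class PanelLayout(models.Model):
    project_id = models.IntegerField()
    created_at = models.DateTimeField(auto_now_add=True)
    data = models.JSONField()

    class Meta:
        indexes = [
            models.Index(
                fields=["project_id", "-created_at"],
                name="layout_project_recent_idx",
            ),
        ]
```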

Required Skills & Qualifications (Must-Have)

  • 2–5 years of experience as a Backend Engineer.
  • Strong proficiency in Python and Django / Django REST Framework.
  • Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
  • Proven experience designing and maintaining REST APIs in production environments.
  • Hands-on DevOps experience, including:
    • Docker and containerized services
    • CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
    • Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
  • Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
  • Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
  • Ownership mindset with the ability to take systems from spec → implementation → production → iteration.

Good-to-Have Skills

  • Experience working in early-stage startups or building 0→1 products.
  • Familiarity with Kubernetes or other container orchestration tools.
  • Experience with Infrastructure as Code (Terraform, Pulumi).
  • Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
  • Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.

What We Offer

  • Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
  • Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
  • A mission-driven, fast-growing product focused on sustainability and clean energy.
MyOperator - VoiceTree Technologies
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹8L - ₹12L / yr
Kubernetes
Amazon Web Services (AWS)
Amazon EC2
AWS RDS
AWS OpenSearch

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Summary

We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.


Key Responsibilities

  • Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
  • Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
  • Containerize applications using Docker and manage deployments with Helm charts
  • Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
  • Provision and manage infrastructure using Terraform (Infrastructure as Code)
  • Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
  • Write and maintain Python scripts for automation, monitoring, and operational tasks (see the sketch after this list)
  • Ensure high availability, scalability, performance, and cost optimization of cloud resources
  • Implement and follow security best practices across AWS and Kubernetes environments
  • Troubleshoot production issues, perform root cause analysis, and support incident resolution
  • Collaborate closely with development and QA teams to streamline deployment and release processes
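As a hedged example of the Python automation item above, a small boto3 snippet (it assumes AWS credentials in the environment; a generic illustration, not MyOperator's tooling):

```python
# List the IDs and private IPs of running EC2 instances.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst.get("PrivateIpAddress"))
```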

Required Skills & Qualifications

  • 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
  • Strong experience with AWS services, including:
    • EC2, RDS, OpenSearch, VPC, S3
    • Application Load Balancer (ALB), API Gateway, Lambda
    • SNS and SQS
  • Hands-on experience with AWS EKS (Kubernetes)
  • Strong knowledge of Docker and Helm charts
  • Experience with Terraform for infrastructure provisioning and management
  • Solid experience building and managing CI/CD pipelines using Jenkins
  • Practical experience with Prometheus and Grafana for monitoring and alerting
  • Proficiency in Python scripting for automation and operational tasks
  • Good understanding of Linux systems, networking concepts, and cloud security
  • Strong problem-solving and troubleshooting skills

Good to Have (Preferred Skills)

  • Exposure to GitOps practices
  • Experience managing multi-environment setups (Dev, QA, UAT, Production)
  • Knowledge of cloud cost optimization techniques
  • Understanding of Kubernetes security best practices
  • Experience with log aggregation tools (e.g., ELK/OpenSearch stack)

Language Preference

  • Fluency in English is mandatory.
  • Fluency in Hindi is preferred.
Discovered Labs
Remote only
4 - 10 yrs
₹25L - ₹44L / yr
Python
React.js
NextJs (Next.js)
TypeScript
Data architecture

Senior Engineer (Full-Stack)


Apply Here

👉 Submit your application HERE: https://airtable.com/app9tS5cclInJg589/shr3gTEXrhh8fi7lg


About Discovered Labs

At Discovered Labs we work with $10M - $50M ARR companies to help them get more leads, users and customers from Google, Bing and AI assistants such as ChatGPT, Claude and Perplexity.

We approach marketing the way engineers approach systems: data in, insights out, feedback loops everywhere. Every decision traces back to measurable outcomes. Every workflow is designed to eliminate manual bottlenecks and compound over time.


High-level overview of our approach:

  • Data-driven automation: We treat marketing programs like products. We instrument everything, automate the repetitive, and focus human effort on high-leverage problems.
  • First principles thinking: We don't copy what others do. We understand the underlying mechanics of how search and AI systems work, then build solutions from that foundation.
  • Full-stack ownership: SEO and AEO rarely work as isolated tasks. We work across the entire funnel and multiple surface areas to ensure we own the outcome and clients win.


The Team

We're a deeply technical team building the SpaceX of the AEO & SEO space. You'll work alongside engineers who have built fraud engines powering Stripe, Plaid, and Coinbase; developed self-driving car systems at Aurora; and conducted AI research at Stanford. We don't have layers of management. You'll work directly with founders who can go deep on architecture, code, and product.


This Role

We're looking for a Senior Engineer to own the development and delivery of some of our core product infrastructure. You'll work directly with the CTO to build client-facing dashboards, AI visibility tooling, and automated content and outreach systems.

This is a high-ownership, hands-on role. You'll take feature specs from idea to production, own the quality of your releases, and help us ship faster without sacrificing reliability. If you thrive on building products that matter, not just writing code, this is for you.


What You'll Do

  • Build client-facing products: Design and ship deep analytics dashboards to uncover insights in AI search performance in a data-driven manner, all the way to mechanism interpretability of these LLMs.
  • Develop AI-powered tooling: Extend our internal systems into public-facing products, including automated reporting and intelligent workflows.
  • Own the full lifecycle: Take features from spec to production, monitor reliability, and iterate based on feedback. You own what you build.


The Ideal Person for This Role

  • A builder who ships. You care about getting working software into users' hands, not endless planning or polish. You've shipped products people actually use.
  • An owner. You take responsibility for outcomes, not just tasks. When something you ship breaks, you fix it.
  • Humble and curious. You acknowledge what you don't know, ask good questions, and genuinely want to learn. You take feedback as a gift, not a threat.
  • A first-principles thinker. You understand why things work, not just how. You can go five levels deep on technical decisions.
  • Always improving. You're not satisfied with "good enough." You actively seek ways to get better at your craft and make systems better over time.


Requirements

  • 4+ years of professional software engineering experience
  • Strong full-stack skills (TypeScript, React, Next.js, Python)
  • Track record of taking briefs and shipping robust, production-ready code without heavy hand-holding
  • You don't just build features. You leave the codebase better than you found it.
  • Comfortable with data modeling, API design, and pragmatic architecture decisions
  • Excellent written communication


Preferred Qualifications

  • Experience with AI/ML or LLM model finetuning, evaluation, or large-scale production deployments
  • Prior experience at a fast-moving startup or agency


What's in It for You

  • Fully remote position
  • Work directly with the CTO on high-impact projects
  • High ownership and autonomy. No micromanagement.
  • First-hand exposure to cutting-edge AI and search technology
  • Your work will directly impact well-known (10M+ ARR) companies’ performance
  • Join a fast-growing company at the intersection of AI and marketing


Our Hiring Process

  1. Application
  2. Take-Home Project
  3. Technical Deep Dive
  4. Leadership Interview
  5. Reference Checks


Apply Here

👉 Submit your application HERE: https://airtable.com/app9tS5cclInJg589/shr3gTEXrhh8fi7lg

Biz-Tech Analytics

Agency job
via Keep Knockin by Mehul Gulati
Remote only
4 - 10 yrs
₹1L - ₹3L / yr
Linux/Unix
Docker
Python
PyTorch
TensorFlow

Hiring DevOps Engineers (Freelance)

We’re hiring for our client: Biz-Tech Analytics


Role: DevOps Engineer (Freelance)

Experience: 4-7+ years

Project: Terminus Project

Location: Remote

Engagement Type: Freelance | Project-based


About the Role:

Biz-Tech Analytics is looking for experienced DevOps Engineers to contribute to the Terminus Project, a hands-on initiative involving system-level problem solving, automation, and containerised environments.

This role is ideal for engineers who enjoy working close to the system layer, debugging complex issues, and building reliable automation in isolated environments.


Key Responsibilities:

• Work on Linux-based systems, handling process management, file systems, and system utilities

• Write clean, testable Python code for automation and verification

• Build, configure, and manage Docker-based environments for testing and deployment

• Troubleshoot and debug complex system and software issues

• Collaborate using Git and GitHub workflows, including pull requests and branching

• Execute tasks independently and iterate based on structured feedback


Required Skills & Qualifications:

• Expert-level proficiency with Linux CLI, including Bash scripting

• Strong Python programming skills for automation and tooling

• Hands-on experience with Docker and containerized environments

• Excellent problem-solving and debugging skills

• Proficiency with Git and standard GitHub workflows


Preferred Qualifications:

• Professional experience in DevOps or Site Reliability Engineering (SRE)

• Exposure to cloud platforms such as AWS, GCP, or Azure

• Familiarity with machine learning frameworks like TensorFlow or PyTorch

• Prior experience contributing to open-source projects


Engagement Details

• Fully remote freelance engagement

• Flexible workload, with scope to take on additional tasks

• Opportunity to work on real-world systems supporting advanced AI and infrastructure projects


Apply via Google form: https://forms.gle/SDgdn7meiicTNhvB8


About Biz-Tech Analytics:

Biz-Tech Analytics partners with global enterprises, AI labs, and industrial businesses to help them build and scale frontier AI systems. From data creation to deployment, the team delivers specialised services including human-in-the-loop annotation, reinforcement learning from human feedback (RLHF), and custom dataset creation.

With a network of 500+ vetted developers, STEM professionals, linguists, and domain experts, Biz-Tech Analytics supports leading global platforms by enhancing complex AI models and providing high-precision feedback at scale.

Their work sits at the intersection of advanced research, engineering rigor, and real-world AI deployment, making them a strong partner for cutting-edge AI initiatives.

  

Shortcastle Technologies
Posted by Arun Srinivaas R S
Remote only
0 - 2 yrs
₹7000 - ₹10000 / mo
Python
JavaScript
React.js

🚀 AI Marketing Automation Developer Intern


AI-First | High Ownership | Long-Term Opportunity


📍 About the Role


We are building an AI-first marketing and communications engine across multiple products and brands.

This role is for someone who wants to use AI to eliminate manual work, not do more of it.

This is not a traditional marketing internship.

It is a builder role focused on automation, experimentation, and systems thinking.


🧠 How We Work


  • AI-first, automation-first mindset
  • We focus on outcomes, not activity
  • You will work independently on clearly defined objectives
  • Minimal meetings, maximum ownership
  • Trial, iterate, break, fix, and improve
  • What you build is expected to be production-ready, not just a demo

We use modern AI tools (including Cursor and LLMs) and expect you to learn fast and apply faster.


✅ Who This Is For


This role is a strong fit if you:

  • Think in terms of systems and leverage
  • Enjoy solving open-ended problems
  • Are comfortable with ambiguity
  • Like experimenting until something works
  • Want to work in a real AI-first environment, not just talk about AI

Background matters less than mindset.

Engineering, tech-savvy marketing, or self-taught AI backgrounds all work.


❌ Who This Is NOT For


  • Manual or repetitive marketing work
  • Copy-paste or template-only roles
  • People who need detailed step-by-step instructions


🌱 Growth & Long-Term Path


This is a long-term internship, not a short project.

Interns who:

  • Consistently deliver
  • Show ownership
  • Fit into our AI-first work culture

👉 Will be converted to full-time roles.


Hiring and conversion decisions are made jointly by the founders and the automation team lead.


🕒 Commitment

  • 20–30 hours per week minimum
  • Fully remote
  • Flexible working hours (output > hours)


💡 How to Apply


Send:

A short note on why this role excites you

Any proof of:

  • AI tools you’ve used
  • Automation you’ve attempted
  • Projects you’ve built (academic, personal, or professional)

No formal resume required if your work speaks for itself.

Tonomo
Remote only
7 - 12 yrs
$18K - $21.6K / yr
Artificial Intelligence (AI)
Flutter
Android Development
iOS App Development
Python

The Mission

Tonomo is revolutionizing e-commerce with an intelligent, autonomous platform powered by IoT and AI. We are in the Beta phase, rapidly iterating based on user feedback. We need an "Unblocker": a senior engineer who owns the mobile experience but can dive into the Python backend to build the endpoints they need to move fast.

The Engineering Culture

We believe in AI-Augmented Engineering. We expect you to use tools like Cursor, Copilot, Gemini, GPT-4, and the like to handle boilerplate code, allowing you to focus on complex native bridges, system architecture, and "on-the-spot" bug resolution.

Core Responsibilities

  • Flutter Mastery: Lead the development of our cross-platform Beta app (Android, iOS, and Web) using Flutter.
  • Backend Independence: Build and modify REST APIs and microservices in Python (FastAPI) to unblock frontend features.
  • AI coding: tools like Cursor, Copilot, Gemini, GPT-4, and the like
  • Agile Troubleshooting: Fix critical UI and logical bugs "on the spot" as reported by users, applying UI/UX best practices.
  • Performance & Debugging: Proactively monitor app health using Sentry, Firebase Crashlytics, and Flutter DevTools.
  • IoT & Integration: Work with IoT telemetry protocols (MQTT) and integrate third-party services for payments (Stripe) and Firebase (see the MQTT sketch after this list).
  • Native Depth: Develop custom plugins and MethodChannels to bridge Flutter with native iOS/Android functionalities.
  • Dashboard Ownership: Own dashboards end-to-end. Design and build internal dashboards for business intelligence, system health and operational metrics, and IoT and backend activity insights.
  • Frontend Development: Build modern, responsive web dashboards using React (or similar). Implement advanced data visualizations. Focus on clarity, performance, and usability for non-technical stakeholders.
  • BI & Data Integration: Integrate dashboards with backend APIs (Python / FastAPI), databases (PostgreSQL), and analytics/metrics sources (Grafana, Prometheus, or BI tools). Work with product & ops to define what should be measured.
  • Monitoring & Insights: Build visual views on top of monitoring data (Grafana or embedded views). Help translate raw metrics into actionable insights. Support ad-hoc analysis and investigation workflows.
  • Execution & Iteration: Move fast in a startup environment: iterate dashboards based on real feedback. Improve data quality, consistency, and trust over time.
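As a hedged sketch of the MQTT telemetry work above (written against paho-mqtt 1.x; v2 additionally takes a CallbackAPIVersion argument, and the broker host and topic are placeholders):

```python
# Subscribe to device telemetry and print each message as it arrives.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("devices/+/telemetry")  # '+' matches one topic level
client.loop_forever()
```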

Technical Requirements

  • Mobile Experience: 7+ years in mobile development with at least 5 highly distributed apps published.
  • Frontend Stack: Expert Flutter/Dart skills.
  • Backend Stack: Proficient Python developer with experience in FastAPI, SQLAlchemy, and PostgreSQL.
  • Data & Backend Awareness: Comfortable consuming REST APIs and working with structured data.
  • Ability to collaborate on schema design and API contracts.
  • BI / Analytics (Nice to Have): Experience with BI tools or platforms (Grafana, Metabase, Superset, Looker, etc.).
  • Understanding of KPIs, funnels, and business metrics.
  • Experience embedding dashboards or analytics into web apps.
  • Architecture: Mastery of design patterns for both mobile (MVVM/MVC) and backend microservices.
  • Infrastructure: Experience with Google Cloud Platform and IoT telemetry (mandatory).
  • Execution: Proactive attitude toward learning and the ability to "own" a feature from DB schema to UI implementation.
  • Experience with Atlassian Jira

 

Soft skills:

·      Self-Directed Ownership: Flags blockers early and suggests improvements without being asked. You are an experienced professional: you don't wait for a Jira ticket to be perfect; you ask the right questions and move the needle forward.

·      Transparency: Extreme honesty about timelines—if a task is more complex than estimated, you communicate it immediately, not at the deadline.

·      Clear communicator with engineers and non-technical stakeholders.

 

The Deal

  • Part-time Retainer: 100 hours per month.
  • Rate: $15 – $18 USD per hour (Performance-based).
  • Impact: Direct partnership with the founding team in a fast-paced, AI-driven startup.
  • Location: We value the stability and focus of Tier-2 rockstars in Kochi, Indore, Jaipur, Ahmedabad, and similar cities.

How to Apply

If you are a self-starter who codes with AI and can bridge the gap between frontend and backend, send your resume and links to your 3 best live apps.

Product company

Agency job
via Trinity consulting by Priyanka G
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹4L - ₹8L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Retrieval Augmented Generation (RAG)
Docker
Kubernetes

Experience: 3+ years


Responsibilities:


  • Build, train, and fine-tune ML models.
  • Develop features to improve model accuracy and outcomes.
  • Deploy models into production using Docker, Kubernetes, and cloud services.
  • Proficiency in Python and MLOps; expertise in data processing and large-scale datasets.
  • Hands-on experience with cloud AI/ML services.
  • Exposure to RAG architecture.
Fluxon
Posted by Ariba Khan
Remote only
6 - 12 yrs
₹30L - ₹60L / yr
NodeJS (Node.js)
Python
Go Programming (Golang)
Java
React.js

About the company:

We are Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development: from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world. 


Qualifications:

  • 7+ years of industry experience in software development
  • Experience leading development through the full product lifecycle, including CI/CD, testing, release management, deployment, monitoring and incident response
  • Fluent in the design and implementation of scalable system architectures, data structures and algorithms, and effective development practices


About the role:

As a Staff Software Engineer at Fluxon, you'll drive the end-to-end delivery of products to market while learning, contributing, and growing in partnership with our leadership team

You'll be responsible for:

  • Guiding project delivery all the way to the user, leading projects, building and iterating in a dynamic environment
  • Partnering directly with clients to understand their needs, and achieve business goals
  • Defining product requirements, identifying appropriate system designs and planning development in partnership with our Product and Design teams
  • Supporting development of a healthy and effective engineering culture

You'll work with a diversity of technologies, including:

  • Languages: TypeScript/JavaScript, Java, .Net, Python, Golang, Rust, Ruby on Rails, Kotlin, Swift
  • Frameworks: Next.js, React, Angular, Spring, Expo, FastAPI, Django, SwiftUI
  • Cloud Service Providers: Google Cloud Platform, Amazon Web Services, Microsoft Azure
  • Cloud Services: Compute Engine, AWS Amplify, Fargate, Cloud Run; Apache Kafka, SQS, GCP CMS; S3, GCS
  • Technologies: AI/ML, LLMs, Crypto, SPA, Mobile apps, Architecture redesign; Google Gemini, OpenAI ChatGPT, Vertex AI, Anthropic Claude, Huggingface
  • Databases: Firestore (Firebase), PostgreSQL, MariaDB, BigQuery, Supabase; Redis, Memcache


What we offer:

  • Exposure to high-profile SV startups and enterprise companies
  • Competitive salary
  • Fully remote work with flexible hours
  • Flexible paid time off
  • Profit-sharing program
  • Healthcare
  • Parental leave, including adoption and fostering
  • Gym membership and tuition reimbursement
  • Hands-on career development
Procedure
Posted by Adithya K
Remote only
5 - 10 yrs
₹40L - ₹60L / yr
Software Development
Amazon Web Services (AWS)
Python
TypeScript
PostgreSQL

Procedure is hiring for Drover.


This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.


About Drover

Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity; you'll also be working on a climate solution with the potential for real, meaningful impact.


We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.


Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.


We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.


About The Role

As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.


Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.


What You'll Do

  • Develop Drover IoT cloud architecture from the ground up (it’s a greenfield project)
  • Design and implement services to support wearable devices, mobile app, and backend API (see the event-driven sketch after this list)
  • Implement data processing and storage pipelines
  • Create and maintain Infrastructure-as-Code
  • Support the engineering team across all aspects of early-stage development -- after all, this is a startup
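As a hedged sketch of the event-driven, serverless style called for in the requirements below, here is a minimal AWS Lambda handler for device messages routed from AWS IoT Core; the event shape and field names are assumptions:

```python
# Lambda entry point: an IoT rule delivers the device payload as the event.
import json

def handler(event, context):
    payload = event if isinstance(event, dict) else json.loads(event)
    device_id = payload.get("device_id", "unknown")
    print(f"telemetry from {device_id}: {payload}")
    return {"statusCode": 200}
```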


Requirements

  • 5+ years of experience developing cloud architecture on AWS
  • In-depth understanding of various AWS services, especially those related to IoT
  • Expertise in cloud-hosted, event-driven, serverless architectures
  • Expertise in programming languages suitable for AWS micro-services (e.g., TypeScript, Python)
  • Experience with networking and socket programming
  • Experience with Kubernetes or similar orchestration platforms
  • Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
  • Familiarity with relational databases (PostgreSQL)
  • Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
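
To make the serverless requirement concrete, here is a minimal, hypothetical sketch of an event-driven ingestion Lambda in Python; the SQS batching and payload fields are assumptions for illustration, not Drover's actual design.

import json

def handler(event, context):
    """AWS Lambda entry point consuming device messages batched through SQS."""
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])      # one device message per record
        device_id = body.get("device_id")      # hypothetical payload fields
        reading = body.get("reading")
        if device_id is None or reading is None:
            continue                           # drop malformed messages
        # ...persist the reading and fan out downstream events here...
        processed += 1
    return {"processed": processed}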


Nice To Have

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field


Read more
Sun King

at Sun King

2 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
2yrs+
Best in industry
Test Automation (QA)
Software Testing (QA)
Manual testing
skill iconPython
skill iconJava
+8 more

About Sun King

Sun King is the world’s leading off-grid solar energy company, providing affordable solar solutions to the 1.8 billion people without reliable access to electricity. By combining product design, fintech, and field operations, Sun King has connected over 20 million homes to solar power across Africa and Asia, adding more than 200,000 new homes each month. Through ‘pay-as-you-go’ financing, customers make small payments to eventually own their solar systems, saving money and reducing reliance on harmful energy sources like kerosene.


Sun King employs 2,800 staff across 12 countries, with expertise in product design, data science, logistics, customer service, and more. The company is expanding its product range to include clean cooking, electric mobility, and entertainment solutions, all while supporting a diverse workforce — with women making up 44% of the team.


About the role:

The role involves designing, executing, and maintaining robust functional, regression, and integration testing to ensure product quality and reliability, along with thorough defect tracking, analysis, and resolution. The individual will develop and maintain UI and API automation frameworks to improve test coverage, minimize manual effort, and enhance release efficiency. Close collaboration with development teams is expected to reproduce issues, validate fixes, and ensure high-quality releases. The role also includes integrating automated tests into CI/CD pipelines, supporting production issue analysis, and verifying hotfixes in live environments. Additionally, the candidate will actively participate in requirement and design reviews to ensure testability and clarity, maintain comprehensive QA documentation, and continuously improve testing frameworks, tools, and overall QA processes.
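
For flavor, a minimal pytest-style API check of the kind this role automates; the base URL, endpoints, and fields below are hypothetical, not Sun King systems. A suite like this would typically run inside the CI/CD pipeline on every merge, which is the integration work described above.

import requests

BASE_URL = "https://api.example.com"  # placeholder service

def test_create_and_fetch_order():
    created = requests.post(
        f"{BASE_URL}/orders", json={"sku": "SK-1", "qty": 2}, timeout=10
    )
    assert created.status_code == 201

    order_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2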


What you will be expected to do:

  • Design, execute, and maintain test cases, test plans, and test scripts for functional, regression, and integration testing.
  • Identify software defects, document them clearly, and track them through to closure.
  • Analyze bugs and provide detailed insights to help developers understand root causes.
  • Partner closely with the development team to reproduce issues, validate fixes, and ensure overall product quality.
  • Develop, maintain, and improve automated test suites (API/UI) to enhance test coverage, reduce manual effort, and improve release confidence.
  • Work with CI/CD pipelines to integrate automated tests into the deployment workflow.
  • Validate production issues, support troubleshooting, and verify hotfixes in real-time environments.
  • Recommend improvements in product performance, usability, and reliability based on test findings.
  • Participate in requirement and design reviews to ensure clarity, completeness, and testability.
  • Benchmark against competitor products and suggest enhancements based on industry trends.
  • Maintain detailed test documentation, including test results, defect logs, and release readiness assessments.
  • Continuously improve QA processes, automation frameworks, and testing methodologies.

You might be a strong candidate if you have/are:

  • Bachelor’s Degree in Computer Science, Information Technology, or a related field.
  • 2+ years of hands-on experience in software testing (manual + exposure to automation).
  • Strong understanding of QA methodologies, testing types, and best practices.
  • Experience in designing and executing test cases, test plans, and regression suites.
  • Exposure to automation tools/frameworks such as Selenium, Playwright, Cypress, TestNG, JUnit, or similar.
  • Basic programming or scripting knowledge (Java/Python preferred).
  • Good understanding of SQL for backend and data validation testing.
  • Familiarity with API testing tools such as Postman or RestAssured.
  • Experience with defect tracking and test management tools (Jira, TestRail, etc.).
  • Strong analytical and debugging skills with the ability to identify root causes.
  • Ability to work effectively in Agile/Scrum environments and partner with developers, product, and DevOps teams.
  • Strong ownership mindset — having contributed to high-quality, near bug-free releases.

Good to have:

  • Exceptional attention to detail and a strong focus on product quality.
  • Experience with performance, load, or security testing (JMeter, Gatling, OWASP tools, etc.).
  • Exposure to advanced automation frameworks or building automation scripts from scratch.
  • Familiarity with CI/CD pipelines and integrating automated tests.
  • Experience working with observability tools like Grafana, Kibana, and Prometheus for production verification.
  • Good understanding of microservices, distributed systems, or cloud platforms.

What Sun King offers:

  • Professional growth in a dynamic, rapidly expanding, high-social-impact industry
  • An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
  • A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
  • Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.
Read more
KGiSL MICROCOLLEGE
Hiring Recruitment
Posted by Hiring Recruitment
Remote, Erode
1 - 3 yrs
₹1L - ₹3.5L / yr
skill iconHTML/CSS
Java
skill iconPython
Artificial Intelligence (AI)

Salary: ₹3.5 LPA (based on performance)

Experience: 1–3 Years (female candidates only)


We are looking for a Technical Trainer skilled in HTML, Java, Python, and AI to conduct technical training sessions. The trainer will create learning materials, deliver sessions, assess student performance, and support learners throughout the training. Strong communication skills and the ability to explain technical concepts clearly are essential.

Read more
Capital Squared
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
MLOps
DevOps
Google Cloud Platform (GCP)
CI/CD
skill iconPostgreSQL
+4 more

Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines


OVERVIEW

We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.


The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.


CORE TECHNICAL REQUIREMENTS

Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.


Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.


CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.


Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.


PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
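
As one illustration of routine database health work, a small Python probe against PostgreSQL's built-in statistics views; the DSN and warning threshold are assumptions for the sketch.

import psycopg2

def check_postgres(dsn: str, warn_connections: int = 150) -> dict:
    """Return a few cheap health signals from a running Postgres instance."""
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM pg_stat_activity;")
            active = cur.fetchone()[0]
            cur.execute("SELECT pg_is_in_recovery();")
            is_replica = cur.fetchone()[0]
        return {
            "active_connections": active,
            "connection_pressure": active >= warn_connections,
            "is_replica": is_replica,
        }
    finally:
        conn.close()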


Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.


WHAT YOU WILL OWN

Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.


Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.


VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.


Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.


Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.


Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.


WHAT SUCCESS LOOKS LIKE

Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.


ENGINEERING STANDARDS

Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.


Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.


Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.


Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.


CURRENT ENVIRONMENT

GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
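
For illustration, a sketch of the "Bash script with Python wrapper" pattern mentioned above; the script path and arguments are invented, not the actual deployment tooling.

import subprocess
import sys

def deploy(service: str, image_tag: str) -> int:
    """Run the deploy script and surface its exit code rather than hiding it."""
    result = subprocess.run(
        ["./scripts/deploy.sh", service, image_tag],  # hypothetical script
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)  # fail loudly, never silently
    return result.returncode

if __name__ == "__main__":
    sys.exit(deploy(sys.argv[1], sys.argv[2]))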


WHAT WE ARE LOOKING FOR

Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.


Calm Under Pressure: When production breaks, you diagnose methodically.


Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.


Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.


EDUCATION

University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Remote only
5 - 10 yrs
₹25L - ₹55L / yr
Data engineering
Databases
skill iconPython
SQL
skill iconPostgreSQL
+4 more

Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data


OVERVIEW

We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.


The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.


CORE TECHNICAL REQUIREMENTS

Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.


SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.


Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
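
As a deliberately simplified illustration of idempotency, a batch load that can be replayed safely because it upserts; the table and column names are assumptions, and the pattern presumes a unique key on (symbol, ts).

import psycopg2

UPSERT = """
INSERT INTO prices (symbol, ts, close)
VALUES (%s, %s, %s)
ON CONFLICT (symbol, ts) DO UPDATE SET close = EXCLUDED.close;
"""

def load_batch(dsn: str, rows: list[tuple]) -> None:
    """Replaying the same batch is a no-op apart from refreshed values."""
    with psycopg2.connect(dsn) as conn:  # commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.executemany(UPSERT, rows)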


Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
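
A minimal sketch of such a validation gate, assuming a frame with symbol, ts, and close columns; real checks would be broader, but the shape is the point.

import pandas as pd

def validate_prices(df: pd.DataFrame) -> list[str]:
    """Return a list of problems; an empty list means the batch may proceed."""
    problems = []
    if df["ts"].isna().any():
        problems.append("null timestamps")
    if (df["close"] <= 0).any():
        problems.append("non-positive close prices")
    if df.duplicated(subset=["symbol", "ts"]).any():
        problems.append("duplicate (symbol, ts) rows")
    return problems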


WHAT YOU WILL BUILD

Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.


Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.


Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.


Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.

Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.


DOMAIN EXPERIENCE

Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.


Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
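
One standard way to keep such joins point-in-time correct is pandas.merge_asof, which gives each row only data known at or before its timestamp; the frames below are invented for illustration.

import pandas as pd

trades = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:05"]),
    "symbol": ["BTC", "BTC"],
})
features = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:59", "2024-01-01 10:04"]),
    "symbol": ["BTC", "BTC"],
    "signal": [0.1, 0.3],
})

# Each trade row receives the last signal known at or before the trade time,
# never a future value -- which is how look-ahead bias is prevented.
joined = pd.merge_asof(
    trades.sort_values("ts"), features.sort_values("ts"),
    on="ts", by="symbol", direction="backward",
)
print(joined)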


High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.


ENGINEERING STANDARDS

Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.


Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.


Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.


Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
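
A small, hypothetical example of what that looks like in practice; the transformation under test is invented, but the guarantee it checks (no fill across symbols) is exactly the kind of edge case worth pinning down.

import pandas as pd

def forward_fill_gaps(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing closes within each symbol, never across symbols."""
    df = df.sort_values(["symbol", "ts"]).copy()
    df["close"] = df.groupby("symbol")["close"].ffill()
    return df

def test_fill_does_not_leak_across_symbols():
    df = pd.DataFrame({"symbol": ["A", "B"], "ts": [1, 1], "close": [10.0, None]})
    out = forward_fill_gaps(df)
    assert out.loc[out["symbol"] == "B", "close"].isna().all()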


TECHNICAL ENVIRONMENT

PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.


WHAT WE ARE LOOKING FOR

Attention to Detail: You notice when something is slightly off and investigate rather than ignore.


Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.


Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.


Long-Term Orientation: You build systems you will maintain for years.


Communication: You document clearly, explain data issues to non-engineers, and surface problems early.


EDUCATION

University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.


TO APPLY

Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.

Read more
Hashone Careers
Madhavan I
Posted by Madhavan I
Remote only
5 - 10 yrs
₹20L - ₹40L / yr
DevOps
skill iconAmazon Web Services (AWS)
skill iconKubernetes
cicd
skill iconPython
+1 more

Job Description: DevOps Engineer

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.


Role Summary:

We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.


Key Responsibilities:

  • Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
  • Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
  • Deploy and manage Kubernetes clusters and containerized microservices
  • Define and implement infrastructure as code using Terraform/CloudFormation
  • Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
  • Support MongoDB and MySQL database administration and optimization
  • Ensure high availability, performance tuning, and cost optimization
  • Guide and mentor junior engineers, and enforce DevOps best practices
  • Drive system security, compliance, and audit readiness in cloud environments
  • Collaborate with engineering, product, and QA teams to streamline release processes


Required Qualifications:

  • 5+ years of DevOps/infrastructure experience in production-grade environments
  • Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
  • Proven experience with Kubernetes and Docker in production
  • Proficient with Terraform, CloudFormation, or similar IaC tools
  • Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
  • Advanced scripting in Python, Bash, or Go
  • Solid understanding of networking, firewalls, DNS, and security protocols
  • Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
  • Experience with MongoDB and MySQL in cloud environments

Preferred Qualifications:

  • AWS Certified DevOps Engineer or Solutions Architect
  • Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
  • Familiarity with zero-downtime deployments, canary releases, and blue/green deployments
  • Background in high-availability systems and incident response
  • Prior experience in a SaaS, ML, or hospitality-tech environment


Tools and Technologies You’ll Use:

  • Cloud: AWS
  • Containers: Docker, Kubernetes, Helm
  • CI/CD: Jenkins, GitHub Actions
  • IaC: Terraform, CloudFormation
  • Monitoring: Prometheus, Grafana, CloudWatch
  • Databases: MongoDB, MySQL
  • Scripting: Bash, Python
  • Collaboration: Git, Jira, Confluence, Slack


Why Join Us?

  • Competitive salary and performance bonuses
  • Remote-friendly work culture
  • Opportunity to work on cutting-edge tech in AI and ML
  • Collaborative, high-growth startup environment
  • For more information, visit http://www.lodgiq.com

Read more
Analytical Brains Education
Remote only
1 - 5 yrs
₹8L - ₹12L / yr
skill iconPython
Shell Scripting
Powershell
SQL
skill iconJava

Job Description

We are looking for motivated IT professionals with at least one year of industry experience. The ideal candidate should have hands-on experience in AWS, Azure, AI, or Cloud technologies, or should be enthusiastic and ready to upskill and shift to new and emerging technologies. This role is primarily remote; however, candidates may be required to visit the office occasionally for meetings or project needs.

Key Requirements

  • Minimum 1 year of experience in the IT industry
  • Exposure to AWS / Azure / AI / Cloud platforms (any one or more)
  • Willingness to learn and adapt to new technologies
  • Strong problem-solving and communication skills
  • Ability to work independently in a remote setup
  • Must have a proper work-from-home environment (laptop, stable internet, quiet workspace)

Education Qualification

  • B.Tech / BE / MCA / M.Sc (IT) / equivalent


Read more
EDUPOWERPO SOLUTIONS

at EDUPOWERPO SOLUTIONS

1 candid answer
CHANDAN RAI
Posted by CHANDAN RAI
Remote only
1 - 2 yrs
₹1.2L - ₹2L / yr
SEO management
skill iconHTML/CSS
skill iconPython
skill iconDjango

What We’re Looking For

  • Hands-on experience in keyword research, competitor analysis, and content gap identification
  • Strong understanding of on-page SEO: meta tags, schema, internal linking, URL structure, and content optimization
  • Experience managing technical SEO: site audits, crawling issues, indexing, page speed, and mobile optimization
  • Ability to plan and execute backlink strategies using safe, high-quality methods
  • Familiarity with tools like Google Search Console, Google Analytics, Ahrefs, SEMrush, and Screaming Frog
  • Experience working with blogs, landing pages, and long-form content
  • Ability to coordinate with writers, developers, and designers on SEO requirements
  • Proven experience in improving rankings for competitive keywords
  • Understanding of local SEO and structured data markup
  • Comfortable working in a fast-moving, bootstrapped startup environment
  • Bonus: Experience with Django or basic HTML/CSS is useful but not mandatory

What You Will Work On

  • Improving rankings for keywords like “Top boarding schools in India”, “Best boarding schools”, etc.
  • Conducting monthly audits and pushing technical SEO fixes
  • Growing EduPowerPro’s organic traffic through structured content planning
  • Managing backlink acquisition and partnerships
  • Tracking performance and presenting monthly insights

Read more
Mira Network

at Mira Network

1 candid answer
Nikita Sinha
Posted by Nikita Sinha
Remote only
5 - 10 yrs
Upto ₹70L / yr (Varies)
Systems design
skill iconGo Programming (Golang)
skill iconRust
skill iconReact.js
skill iconNodeJS (Node.js)
+2 more

Mira is building the foundational trust and verification layer for agentic commerce - the emerging landscape where autonomous AI agents interact, transact, and deliver value across modern digital systems. Our work extends into next-generation consumer finance, blending intelligent automation, verifiable execution, and new forms of digital value movement.


We operate as a senior, high-caliber engineering team that ships foundational infrastructure for intelligent consumer experiences, where correctness, reliability, and clarity of execution matter deeply.

We are building a consumer-facing financial product at Mira.


Think crypto-native neobank + agentic automation, where large parts of money movement, compliance, and execution are stitched together via existing providers - but orchestrated with strong guarantees, clear invariants, and excellent user experience.

This is a 0 to 1 product. Architecture decisions made early will compound for years.



We’re looking for a Lead Architect / Lead Full-Stack Engineer to act as a technical owner for this product.

You will define and build the core product architecture across backend, integrations, and frontend surfaces. This role is not about inventing new primitives - it’s about correctly and safely stitching together high-risk systems (money movement, KYC, wallets, verification, agent flows) into a coherent, scalable product.
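
As a toy illustration of that invariant-first style (emphatically not Mira's design), in Python for brevity: transfers are idempotent via a client-supplied key, and the ledger must balance by construction.

balances: dict[str, int] = {"alice": 1_000, "bob": 0}  # amounts in cents
seen_keys: set[str] = set()                            # idempotency keys

def transfer(key: str, src: str, dst: str, amount: int) -> bool:
    """Apply a transfer at most once; replays succeed without re-applying."""
    if key in seen_keys:
        return True
    if amount <= 0 or balances[src] < amount:   # invariant: no overdrafts
        return False
    balances[src] -= amount
    balances[dst] += amount                     # debits always equal credits
    seen_keys.add(key)
    return True

assert transfer("req-1", "alice", "bob", 250)
assert transfer("req-1", "alice", "bob", 250)   # replayed request is a no-op
assert balances["alice"] + balances["bob"] == 1_000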


You’ll work closely with product leadership to shape the technical direction, while staying deeply hands-on in production code, and you will also help build and lead the technical team.


WHAT YOU WILL DO:

Your primary responsibility is to design, build, and own the core full-stack system end to end.

Specifically, you will:

  • Architect and implement backend systems for correctness-critical workflows, including ledgers, balances, transaction state, and orchestration logic.
  • Design and build integration-heavy systems, stitching together KYC, wallet providers, verification services, vaults, and external financial APIs.
  • Own system invariants and failure modes: ensuring money movement, retries, reconciliation, and edge cases behave safely under stress.
  • Build and evolve agentic automation flows that coordinate execution across multiple systems while remaining observable and debuggable.
  • Develop core frontend surfaces (web, admin, internal tools) and collaborate closely with React Native engineers for the consumer app.
  • Set up and maintain DevOps foundations: environments, CI/CD, monitoring, alerts, and operational playbooks.
  • Act as a technical decision-maker and mentor, raising the bar for system design, code quality, and reliability across the team.
  • Collaborate with the broader engineering and product team on shared primitives around verification, correctness, and trust.


WHAT YOU BRING:

You are a senior engineer who has built and owned real systems where failure is expensive.

You likely have:

  • 5–8+ years of full-stack engineering experience, with strong depth on the backend.
  • Prior experience in consumer fintech, payments, wallets, or financial infrastructure, where ledgers and state correctness mattered.
  • Deep experience with schema design, data modeling, consistency models, and fault-tolerant systems.
  • Strong systems thinking: you naturally reason about invariants, race conditions, retries, idempotency, and data integrity.
  • Hands-on experience integrating and operating third-party APIs in production (KYC, payments, identity, compliance, etc.).
  • Solid DevOps instincts - comfortable owning deployments, infra decisions, and operational reliability.
  • Frontend experience with modern JS/TypeScript frameworks (React); enough to build and reason about product UX, even if you’re backend-leaning.
  • Comfort operating in ambiguity and 0→1 environments, where the problem is still being shaped alongside the solution.
  • A strong sense of ownership - you don’t wait for specs; you help define them.

NICE TO HAVE:

  • Experience working with crypto or blockchain-adjacent systems, even if via integrations rather than protocol design.
  • Familiarity with React Native or close collaboration with mobile teams.
  • Prior work building consumer-scale systems or financial-grade infrastructure.
  • Prior exposure to agent-like workflows, automation engines, or distributed orchestration systems.

WHY THIS ROLE MATTERS:

This is a keystone hire.

The person in this role will:

  • Shape how money moves through the system
  • Define early architectural patterns
  • Prevent painful rewrites later by getting fundamentals right now

If you enjoy building serious systems with real-world consequences - and want to operate at founder-level ownership inside a small, elite team - this role will stretch and reward you.


WHAT WE OFFER:

  • Competitive compensation.
  • High ownership and the opportunity to shape product direction.
  • Direct impact on foundational cryptographic and blockchain infrastructure.
  • A collaborative team that values clarity, autonomy, and velocity.


Note: This role can be remote; however, Bengaluru or Mumbai candidates will be prioritized.

Read more
Automate Accounts

at Automate Accounts

2 candid answers
Namrata Das
Posted by Namrata Das
Remote only
2 - 6 yrs
₹6L - ₹20L / yr
skill iconPython
skill iconNodeJS (Node.js)
skill iconSpring Boot
Debugging
RESTful APIs
+1 more

Responsibilities

  • Develop and maintain web and backend components using Python, Node.js, and Zoho tools
  • Design and implement custom workflows and automations in Zoho
  • Perform code reviews to maintain quality standards and best practices
  • Debug and resolve technical issues promptly
  • Collaborate with teams to gather and analyze requirements for effective solutions
  • Write clean, maintainable, and well-documented code
  • Manage and optimize databases to support changing business needs
  • Contribute individually while mentoring and supporting team members
  • Adapt quickly to a fast-paced environment and meet expectations within the first month



Leadership Opportunities

  • Lead and mentor junior developers in the team
  • Drive projects independently while collaborating with the broader team
  • Act as a technical liaison between the team and stakeholders to deliver effective solutions



Selection Process

  1. HR Screening: Review of qualifications and experience
  2. Online Technical Assessment: Test coding and problem-solving skills
  3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
  4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
  5. Management Interview: Discuss cultural fit and career opportunities
  6. Offer Discussion: Finalize compensation and role specifics



Experience Required

  • 2–6 years of relevant experience as a Software Developer
  • Proven ability to work as a self-starter and contribute individually
  • Strong technical and interpersonal skills to support team members effectively

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Remote only
3 - 5 yrs
₹20L - ₹25L / yr
skill iconPython
skill iconReact.js
skill iconNodeJS (Node.js)

Role: Technical Co-Founder

Experience: 3+ years (mandatory)

Compensation: Equity only (no salary)

Requirements:

  • Strong full-stack development skills
  • Experience building web applications from scratch
  • Able to manage the complete tech stack independently
  • Startup mindset & ownership attitude

Read more
CFRA

at CFRA

4 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Remote only
4yrs+
Upto ₹23L / yr (Varies)
skill iconAmazon Web Services (AWS)
SQL
skill iconPython
skill iconNodeJS (Node.js)
skill iconJava
+1 more

The Senior Software Developer is responsible for the development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.

The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.


Key Responsibilities

  • Analyst Workflows: Design and development of CFRA’s integrated content publishing platform, built on a proprietary third-party editorial and publishing platform for digital publishing.
  • Designing and Developing APIs: Design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
  • AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions (a minimal sketch follows this list).
  • Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
  • Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
  • Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
  • Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
  • Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
  • Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
  • Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
  • Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
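
For illustration, a minimal AWS Lambda handler behind Amazon API Gateway, the kind of building block the AWS Service Integration item above refers to; the route and payload shapes are hypothetical.

import json

def handler(event, context):
    """Handle an API Gateway proxy request and return a report stub."""
    report_id = (event.get("pathParameters") or {}).get("id")
    if not report_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    # ...fetch content, render the report, persist it to S3, etc....
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": report_id, "status": "generated"}),
    }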

Desired Skills and Experience

  • Development: 5+ years of extensive experience in designing, developing, and deploying solutions using modern technologies, with a focus on scalability, performance, and security.
  • AWS Services: Proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
  • Programming Languages: Proficiency in programming languages commonly used for development, such as Python, Node.js, or others, as well as experience with serverless frameworks on AWS.
  • Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
  • Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
  • DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
  • Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure the stability and performance.
  • Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
  • Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
  • Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
  • Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.


Read more