
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

CipherSonic Labs
Posted by Ajay Joshi
Remote only
3 - 5 yrs
₹20L - ₹30L / yr
C++
C
Linux/Unix
Amazon Web Services (AWS)
Python

 

Job Title: Software Developer

Location: Remote

About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.

Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.

Key Responsibilities:

·     Develop and optimize software using C/C++ for high-performance computing applications.

·     Work on cryptographic algorithm implementations and performance tuning.

·     Optimize memory management, threading, and parallel computing techniques.

·     Debug, profile, and test software for performance and reliability.

·     Write clean, efficient, and well-documented code.

Qualifications:

·     Completed a B.S. or higher degree in Computer Science, Computer Engineering, or a related field.

·     Strong programming skills in C and C++.

·     Familiarity with Linux-based development environments.

·     Basic understanding of cryptographic algorithms and security principles is a plus.

·     Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, and containerization (e.g., Docker, Kubernetes) is a plus.

·     Knowledge of other programming languages such as Python, Rust, or Go is a plus.

·     Strong problem-solving skills and attention to detail.

·     Ability to work independently and collaboratively in a fast-paced startup environment.

What You’ll Gain:

·     Hands-on experience in systems programming, cryptography, and high-performance computing.

·     Opportunities to work on real-world security and privacy-focused projects.

·     Mentorship from experienced software engineers and researchers.

·     Exposure to cutting-edge cryptographic acceleration and secure computing techniques.

·     Potential for future full-time employment based on performance.

The Blue Owls Solutions

Posted by Apoorvo Chakraborty
Pune
2 - 5 yrs
₹10L - ₹18L / yr
PySpark
SQL
Python
Data engineering
ETL

Blue Owls Solutions is looking for a mid-level Azure Data Engineer with approximately 4 years of hands-on experience to join our growing data team. In this role, you will design, build, and maintain scalable data pipelines and architectures that power business-critical analytics and reporting. You'll work closely with cross-functional teams to transform raw data into reliable, high-quality datasets that drive decision-making across the organization.

Required Skills

  • 4+ years of professional experience as a Data Engineer or in a similar data-focused role
  • Strong proficiency in SQL for data manipulation, querying, and performance optimization
  • Hands-on experience with PySpark for large-scale data processing and transformation
  • Solid working knowledge of the Microsoft Azure ecosystem (Azure Data Factory, Azure Data Lake, Azure Synapse, etc.)
  • Experience with Microsoft Fabric for end-to-end data analytics workflows
  • Ability to design and implement robust data architectures including data warehouses, lakehouses, and ETL/ELT frameworks
  • Strong coding and scripting skills with Python
  • Proven problem-solving ability with a knack for debugging complex data issues and optimizing pipeline performance
  • Understanding of data modeling concepts, dimensional modeling, and data governance best practices
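The SQL and dimensional-modeling bullets above can be made concrete with a small, self-contained sketch. SQLite stands in here for a warehouse engine such as Synapse or Fabric, and all table and column names are hypothetical:

```python
import sqlite3

# In-memory database standing in for a warehouse; real work would target Synapse/Fabric.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A tiny dimensional model: one dimension table, one fact table.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "widgets"), (2, "gadgets")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 99.0), (11, 1, 1.0), (12, 2, 50.0)])

# An index on the join key is the kind of query optimization the role calls for.
cur.execute("CREATE INDEX idx_sales_product ON fact_sales (product_id)")

# Aggregate fact rows by a dimension attribute.
rows = cur.execute("""
    SELECT p.category, SUM(s.amount)
    FROM fact_sales s
    JOIN dim_product p ON p.product_id = s.product_id
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # [('gadgets', 50.0), ('widgets', 100.0)]
```

The same star-schema shape (facts joined to dimensions over an indexed key) carries over directly to PySpark DataFrames and warehouse SQL.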

Preferred Skills & Certifications

  • Microsoft Certified: Fabric Analytics Engineer Associate (DP-600)
  • Microsoft Certified: Fabric Data Engineer Associate (DP-700)
  • Experience with CI/CD practices for data pipelines
  • Familiarity with version control systems such as Git
  • Exposure to real-time streaming data solutions
  • Experience working in Agile or Scrum environments
  • Strong communication skills with the ability to translate technical concepts for non-technical stakeholders

What We Offer

  • Competitive salary and performance-based bonuses
  • Flexible hybrid options
  • Opportunities for professional development, training, and certification sponsorship
  • A collaborative, innovation-driven team culture
  • Paid time off and company holidays


AI Recruiting Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
Python
Microservices
API
Java

Description

Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company’s mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


Wissen Technology

Posted by Meghana Shinde
Pune, Bengaluru (Bangalore)
4 - 9 yrs
Best in industry
C++
Python




JOB DESCRIPTION: C++ Developer

Experience: 4–7 years

Location: Pune

No. of Positions: 1

We are seeking an experienced C++ Developer with 4–7 years of experience to work on financial systems. The role involves working on mission-critical applications such as trading platforms, market data systems, risk engines, or payment processing systems, where performance, stability, and correctness are paramount.

1. General Requirements

• 4–7 years of professional C++ experience in performance-critical systems

• Expert knowledge of modern C++ (C++11/14/17)

• Strong understanding of data structures, algorithms, and memory models

• Deep experience with multithreading, atomics, lock-free programming, and CPU cache behavior

• Excellent knowledge of Linux internals and system-level programming

• Experience with low-level debugging and profiling (gdb, perf, valgrind, flamegraphs)

• Proficiency with CMake/Make and Git

2. Trading Systems Experience (Highly Preferred)

• Hands-on experience with order management systems (OMS) and execution engines

• Knowledge of exchange protocols: FIX, ITCH, OUCH, FAST

• Experience handling market data feeds (L1/L2, multicast, UDP)

• Understanding of latency measurement, clock synchronization, and time stamping




MNC with 5000+ employees

Agency job
via True tech professionals by Saffan Shaikh
Gurugram
6 - 12 yrs
₹15L - ₹28L / yr
Python
Large Language Models (LLM)
Amazon Web Services (AWS)
FastAPI

Backend Engineer III – Senior Python Developer (LLM & AI)

Location: Gurgaon, India (Hybrid)

Positions: 1

Experience: 6 to 9 years

About the Role

We are seeking an experienced Backend Engineer III / Senior Python Developer to join our AI engineering team and play a critical role in building scalable, secure, and high-performance backend platforms for LLM and AI-driven applications. You will work as a hands-on individual contributor while collaborating closely with Machine Learning Engineers, Data Scientists, Product Managers, and Cloud/DevOps teams to deliver innovative, production-grade AI solutions.

Key Responsibilities

  • Design, develop, and maintain scalable backend systems and services using Python to support LLM and AI-based applications
  • Build and maintain RESTful APIs and microservices that serve machine learning models and AI components
  • Write clean, modular, efficient, and testable code following industry best practices and coding standards
  • Participate actively in code reviews, ensuring high quality, security, and maintainability of the codebase
  • Debug, profile, and optimize applications to improve performance, reliability, and scalability
  • Identify and resolve performance bottlenecks in AI/ML pipelines and backend services
  • Collaborate with ML engineers, data scientists, and product teams to translate business and technical requirements into robust backend solutions
  • Mentor and support junior developers, promoting a culture of technical excellence and continuous learning
  • Design and implement CI/CD pipelines and automate deployment workflows to ensure consistent and reliable releases
  • Stay up to date with emerging trends in Python, cloud-native development, and LLM/AI engineering practices and apply them to improve systems and processes

Required Skills & Experience

  • 6 to 9 years of strong hands-on experience in Python development
  • Solid understanding of Python software design, architecture patterns, and testing best practices
  • Proven experience working on AI, Machine Learning, or LLM-based projects
  • Strong experience in building and consuming RESTful APIs and microservices architectures
  • Hands-on experience with FastAPI, Flask, or similar model-serving frameworks
  • Strong debugging, performance profiling, and optimization skills
  • Experience with CI/CD tools and workflows (e.g., GitHub Actions, Azure DevOps, Jenkins, etc.)
  • Working knowledge of Docker and Kubernetes is a strong plus
  • Excellent analytical, problem-solving, and communication skills
  • Ability to work independently in a fast-paced, evolving AI/ML environment while mentoring junior team members
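As a framework-agnostic sketch of the model-serving pattern this role centers on: in production the handler below would sit behind FastAPI or Flask, the "model" is a deterministic stub, and every name is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PredictRequest:
    text: str

def stub_model(text: str) -> dict:
    # Placeholder for a real LLM/ML call; returns a deterministic result.
    return {"label": "positive" if "good" in text.lower() else "negative",
            "length": len(text)}

def predict_handler(payload: dict) -> tuple[int, dict]:
    """Validate the request body and serve a model prediction.

    Returns (status_code, response_body), mirroring what a FastAPI
    route function plus its Pydantic validation layer would do.
    """
    if not isinstance(payload.get("text"), str) or not payload["text"].strip():
        return 422, {"error": "field 'text' must be a non-empty string"}
    req = PredictRequest(text=payload["text"])
    return 200, {"prediction": stub_model(req.text)}

status, body = predict_handler({"text": "This release looks good"})
print(status, body["prediction"]["label"])  # 200 positive
```

The separation shown here (request validation, then a thin call into the model layer) is the same structure FastAPI enforces with typed request models.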

Education & Certifications

  • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field
  • AWS or other relevant cloud certifications are preferred but not mandatory

Why Join Us?

  • Work on cutting-edge AI and LLM platforms
  • Collaborate with top-tier engineering and data science teams
  • Opportunity to influence system architecture and technical direction
  • Competitive compensation and career growth opportunities


Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
Linux/Unix
cicd
Scripting
CI/CD
Shell Scripting



Position: Member of Technical Staff - Linux Specialist - DevOps

Location: Bengaluru - India

Experience: 2-6 Years


● Responsibilities: We are looking for experienced Linux specialists (Linux system administrators) to be part of RtBrick’s DevOps team. The DevOps team handles the CI/CD, compute, and networking infrastructure and tools that together form a multi-tenant, multi-environment delivery and deployment system for RBFS (RtBrick Full Stack). You will be part of a high-performance team responsible for managing, improving, and adapting these systems.

 

● CI/CD

Knowledge of software compilation and packaging for various Linux environments is required. Expertise in Linux system administration, Linux package management and Linux internals is essential. Ability to build custom Linux images for different types of container and/or virtual machine (VM) environments is also required. Experience with the Linux boot process, init system and service manager is highly desirable.

 

● Tools

Good knowledge of shell (bash) scripting and the Ansible automation framework is required. Knowledge of other automation frameworks and/or infrastructure-as-code tools is considered a plus. Experience with managing network infrastructure (switches, routers, firewalls) is highly desirable. Experience with monitoring solutions based on Prometheus and Grafana is desirable. Knowledge of the Python or Golang programming languages is considered a plus.

 

● Operations

Manage compute and networking infrastructure for a private cloud. Manage applications and services deployed both in the private cloud and in public clouds. This position will be part of an on-call engineer rotation during certain critical periods for the company.

 

Required Skills:

  1. About 2-6 years of industry experience in Linux system administration with emphasis on automation.
  2. Experience with networking focused Linux distributions (ONL/Open Network Linux and/or SONiC) is considered a plus.
  3. A good understanding of networking issues, and the ability to troubleshoot them, at both the host (Linux) level and the network (switches, routers, firewalls) level is required.
  4. Experience with CI/CD systems (Jenkins or similar) is required.
  5. Experience with software development tools such as Git, GitLab, CMake, and the GNU build tools.
  6. Proficient in shell (bash) scripting. Experience with the Python or Golang programming languages is considered a plus.
  7. Knowledge and experience of Linux container technologies (Docker, LXC) and container orchestration (Kubernetes) or any other equivalent container technologies is desirable.


Enjoy a great environment, great people, and a great package

  • Stock Appreciation Rights - Generous pre series-B stock options
  • Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements 
  • Health Insurance - Premium health insurance for employee, spouse and children 
  • Working Hours - Flexible working hours with sole focus on enabling a great work environment 
  • Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills 
  • Make a Difference - We're here because we want to make an impact on the world - we hope you do too!



VDart
Posted by Abirami Ramdoss
Bengaluru (Bangalore)
4 - 10 yrs
₹15L - ₹20L / yr
Microsoft Windows Azure
CI/CD
Python

Azure CI/CD Engineer

Bangalore

Fulltime

 

Skill Set Required

 

  • Cloud Platforms: Experienced in cloud-native development on both AWS and Azure (including Azure DevOps)
  • Programming: Proficient in Python, with a focus on backend development on AWS.
  • CI/CD: Skilled in developing and optimizing CI/CD pipelines using Azure DevOps and GitHub/GitLab.
  • API Integration: Well-versed in integrating with the Jira REST API and the Azure DevOps API.
  • Agile: Well versed in Agile methodology.
  • Communication: Good communication skills.

  • The front end is an Azure DevOps board and the backend is supported by an AWS environment, so we are looking for a candidate with a mix of the skills mentioned above.

VDart
Remote only
7 - 15 yrs
₹15L - ₹20L / yr
Test Automation (QA)
SaaS
Machine Learning (ML)
Artificial Intelligence (AI)
Large Language Models (LLM)

Senior Quality Engineer – AI Products

Fulltime

Remote

Requirements

● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.

● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.

● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.

● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.

● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.

● Experience with AWS or other major cloud platforms.

● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.

● Advanced skills with API and SQL testing methodologies.

● Familiarity with test management tools such as TestRail; experience with Qase is a plus.

● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.

● Experience with testing tools: Jira, Sentry, DataDog.

● Strong understanding of Agile/Scrum methodologies.

● Proven track record of mentoring junior engineers and contributing to process improvements.

● Excellent analytical and problem-solving abilities.

● Strong communication skills with ability to present to both technical and non-technical stakeholders.

● Proficiency in English (C1-C2 level).

● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
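The "Basic Python/Shell scripting ... for pipeline validation" requirement above might look something like this in practice; the record schema and error-rate threshold are hypothetical:

```python
def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one pipeline output record."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    if not isinstance(record.get("score"), (int, float)):
        errors.append("score is not numeric")
    elif not 0.0 <= record["score"] <= 1.0:
        errors.append(f"score {record['score']} outside [0, 1]")
    return errors

def validate_batch(records: list[dict], max_error_rate: float = 0.1) -> bool:
    """Pass the batch if the fraction of bad records stays under the threshold."""
    bad = sum(1 for r in records if validate_record(r))
    return bad / max(len(records), 1) <= max_error_rate

batch = [
    {"id": "a1", "score": 0.93},
    {"id": "a2", "score": 0.07},
    {"id": "a3", "score": 1.50},   # out of range -> counted as bad
]
print(validate_batch(batch))  # False: 1/3 bad records exceeds the 10% threshold
```

Checks like these typically run as assertions inside a pytest suite or as a gate step in the pipeline itself.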

 

Preferred Qualifications

● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).

● Hands-on experience with document parsing, OCR, or unstructured data pipelines.

● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.

● Experience testing SaaS products in regulated industries (such as PCI-compliant).

● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).

● Experience with microservice architectures and distributed systems.

● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.

● Background in security or compliance testing for AI systems.

● Certifications such as ISTQB or CSTE.

● Experience working in legal technology, fintech, or professional services software.

● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.

● Experience evaluating and implementing new QE tools and processes

 

Chennai
0 - 1 yrs
₹1.8L - ₹2.4L / yr
PowerShell
Python
JavaScript


Location: Chennai (Hybrid Model)

Commitment: Minimum 2 Years (Excluding 3 months of Probation)

Experience Level: Fresher / Entry Level


About the Role

We are looking for enthusiastic and fast‑learning fresh graduates to join our Infrastructure & Security Engineering team. This role involves hands‑on work in system administration, implementation of infrastructure and security components, and continuous learning across multiple technology vendors and cloud environments including Microsoft, AWS, GCP, and others.

You will receive extensive training, mentorship, and opportunities to work directly with customers to demonstrate new products and solutions.


Key Responsibilities


Infrastructure & System Administration

  • Assist in the deployment, configuration, and administration of IT infrastructure components (servers, networks, cloud services, and security tools).
  • Work with multi‑vendor environments such as Microsoft, AWS, GCP, and other OEMs.
  • Support day‑to‑day system monitoring, performance checks, and troubleshooting activities.

Security Implementation

  • Participate in the implementation and maintenance of security solutions including identity management, endpoint security, SIEM, firewalls, and cloud security tools.
  • Learn and follow best practices for secure configurations and compliance requirements.

Scripting & Automation

  • Develop automation scripts using PowerShell, Python, and JavaScript to streamline operational tasks.
  • Contribute to internal automation projects and efficiency improvement initiatives.

AI/ML Exposure

  • Gain foundational understanding of AI & ML product development.
  • Assist in integrating AI capabilities into internal or customer‑facing tools where applicable.

Customer Engagement

  • Learn and perform product demos for customers on demand.
  • Participate in customer visits and meetings alongside senior team members to support solution discussions.
  • Present technical concepts in clear and professional English.


Required Skills

  • Basic understanding of system administration, networking, cloud fundamentals, or security concepts.
  • Strong scripting capabilities in PowerShell, Python, and JavaScript.
  • Curiosity and willingness to learn AI/ML‑related product development.
  • Excellent verbal and written English communication skills.
  • Ability to quickly learn new technologies and adapt to dynamic project needs.

Who Should Apply?

  • Fresh graduates (B.E/B.Tech/B.Sc/BCA/MCA or equivalent) passionate about IT infrastructure, security, cloud, and automation.
  • Individuals who are eager to learn, enthusiastic about hands‑on work, and comfortable interacting with customers.
  • Candidates willing to commit 2 years to grow within the organization as we invest in extensive training and development.

Work Model

  • Hybrid, based in Chennai, with flexibility to work from both office and home as needed.

What We Offer

  • Structured training in multi‑cloud, security, scripting, and automation.
  • Hands‑on exposure to real‑world implementation projects.
  • Opportunities to explore AI/ML product workflows.
  • Mentorship from experienced engineers and architects.
  • Career growth into Infra Engineer, Security Engineer, Cloud Engineer, Automation Engineer, or AI/ML Solution Specialist.


StarApps Studio

Posted by Shivani Kawade
Pune
4 - 8 yrs
₹18L - ₹24L / yr
Fullstack Developer
RESTful APIs
Database Design
NodeJS (Node.js)
Java

Full Stack Developer (Ruby on Rails)

Location: Baner, Pune, India (On-site)


About StarApps Studio

StarApps Studio is a product-driven SaaS company building Shopify apps that power thousands of online stores. We’ve developed 6 highly-rated Shopify apps (averaging 4.9★) used by 30,000+ Shopify merchants worldwide, including over 1,000 Shopify Plus stores. In just a few years, our bootstrapped team grew from $5.5M to $10M in Annual Recurring Revenue (ARR) by obsessing over quality and merchant success. We’re a tight-knit, 20-person team based in Baner, Pune, on a mission to help e-commerce brands create world-class shopping experiences.

Role Overview

We are looking for a Full Stack Developer who will own features end-to-end with an emphasis on backend excellence. In this role, you will design and optimize complex data models and API architectures in Ruby on Rails, implement robust background job queues (e.g. delayed_job) for heavy workloads, and perform rigorous performance tuning to ensure our systems scale. On the frontend, you'll build and integrate React components to deliver complete, user-friendly features. This is a role for someone who loves tackling deep technical challenges in the backend while also crafting intuitive user interfaces – an opportunity to leverage your backend expertise while driving full-stack product ownership.
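The background-job pattern mentioned above can be sketched language-agnostically; in this Rails stack that role is played by delayed_job, while Python and the job names below are used purely for illustration:

```python
import queue
import threading

jobs = queue.Queue()   # pending background jobs
results = []           # completed job outputs

def worker() -> None:
    # Drain jobs until the sentinel (None) arrives; each job is (func, args).
    while True:
        job = jobs.get()
        if job is None:
            break
        func, args = job
        results.append(func(*args))

def resize_image(name: str) -> str:
    # Stand-in for a heavy task that should not block a web request.
    return f"resized:{name}"

t = threading.Thread(target=worker)
t.start()

# A web request handler would enqueue work and return immediately...
jobs.put((resize_image, ("banner.png",)))
jobs.put((resize_image, ("logo.png",)))
jobs.put(None)  # sentinel: no more jobs

t.join()  # ...while the worker drains the queue in the background.
print(results)  # ['resized:banner.png', 'resized:logo.png']
```

Production queues (delayed_job, Sidekiq, and the like) add persistence, retries, and scheduling on top of this same enqueue/worker split.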


Key Responsibilities

  • Architect & Optimize Backend: Design scalable database schemas and efficient data models. Develop high-performance RESTful APIs and services in Ruby on Rails, ensuring clean, maintainable code and great performance.
  • Backend API Development: Design, implement, and maintain robust backend services and RESTful APIs in Ruby on Rails to support new features and internal tools.
  • End-to-End Performance Tuning: Optimize application performance across the stack – from minimizing frontend load times to improving backend query efficiency – for our high-traffic, data-intensive apps.
  • Collaboration & Agile Delivery: Work closely with designers, product managers, and QA to translate requirements into technical solutions. Participate in sprint planning, code reviews, and daily deployments to ship features continuously and reliably.
  • Quality & Maintenance: Write clean, maintainable code with appropriate test coverage (unit and integration tests) to ensure reliability. Monitor, debug, and resolve issues in production, and continually refactor and improve existing code for stability and performance.


What We’re Looking For (Requirements)

  • 3–4 Years’ Experience: Proven experience as a software developer in a product company (experience in e-commerce or SaaS is highly preferred). You have built real products used by actual customers at scale.
  • Ruby on Rails Expertise: Strong command of Ruby on Rails. Experience designing RESTful APIs, working with MVC architecture, and using Rails best practices. You should understand how to structure large Rails applications for maintainability.
  • Backend Proficiency: Comfortable building server-side applications and APIs with Ruby on Rails. You can implement business logic, integrate with databases, and create RESTful endpoints (bonus if you’ve worked with GraphQL or other backend frameworks).
  • Database Skills: Proficiency with PostgreSQL (or similar RDBMS). Capable of writing complex SQL queries, optimizing queries/indexes, and designing efficient relational schemas. Familiarity with Redis or caching strategies is a plus.
  • Front-End Proficiency: Comfortable building user interfaces with React and modern JavaScript/TypeScript. Able to implement frontend components that consume APIs and provide a smooth user experience.
  • System Design & Quality: Solid understanding of web application architecture, performance tuning, and scalability concerns. Experience with profiling, benchmarking, and optimizing web applications. Commitment to writing clean, maintainable code with proper tests.
  • Product Mindset: You care about the why behind the features. You are comfortable digging into requirements, questioning assumptions, and ensuring that we build solutions that truly solve merchant problems.
  • Adaptability & Collaboration: Excellent problem-solving skills, communication, and ability to work in a fast-paced, collaborative environment. You are a self-starter who can take ownership of tasks and drive them to completion, but also know when to ask for help.


Tech Stack

  • Frontend: React, TypeScript/JavaScript, HTML5, CSS3 (Tailwind/Bootstrap), modern build tools (Webpack, Babel).
  • Backend: Ruby on Rails (REST APIs, background jobs), some services in Python.
  • Database: PostgreSQL.
  • Cloud & DevOps: Amazon Web Services (EC2, S3, RDS, CloudFront), Docker, CI/CD for daily deployments.
  • Tools: Git (GitHub), Agile issue tracking (JIRA/Trello), and a keen use of automated testing.

(Don’t worry if you aren’t familiar with every item – we value willingness to learn. This is our current stack, and we continually adopt new technologies that improve our products.)


Why Join Us

  • High Impact & Ownership: Your work will directly enhance the shopping experience of 50M+ shoppers daily. At StarApps, developers deploy code daily and see the immediate impact on thousands of merchants – you’ll own projects end-to-end and build features that matter.
  • Fast-Growing, Profitable Startup: Join a bootstrapped, profitable company on an exciting growth trajectory (from $4M to $10M ARR). There’s no bureaucracy here – just a passionate team obsessed with product quality and merchant happiness. You’ll be part of our core team as we scale, with ample opportunities to grow into leadership roles.
  • Cutting-Edge Tech & Challenges: Work with modern technologies (React, Rails, AWS) on performance-intensive applications. Tackle complex challenges in scaling, optimization, and UX for a global user base – continuously sharpen your skills in a supportive, learning-focused environment.
  • Collaborative Culture: We are a small 25-person team that operates like a close-knit family. You’ll work side by side with experienced founders and a talented team that values innovation, humility, and continuous improvement. Our culture is open, empathetic, and growth-oriented – every voice is heard, and every team member plays a crucial role in our success.
  • Growth & Benefits: We invest in our team’s growth. Expect a competitive salary, performance bonuses, and whatever tools you need to do your best work. We sponsor professional development (courses, conferences, books) and encourage knowledge-sharing. You’ll enjoy a flexible leave policy, team off-sites, and the excitement of building a global product from our new office in Baner, Pune.


(StarApps Studio is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.)



appscrip

Posted by Kanika Gaur
Bengaluru (Bangalore)
0 - 2 yrs
₹3L - ₹5L / yr
Python
Django
Artificial Intelligence (AI)
FastAPI

The requirements are as follows:


1) Familiarity with the Django REST Framework.


2) Experience with the FastAPI framework will be a plus.


3) Strong grasp of basic Python programming concepts (we ask a lot of questions on this in our interviews :) ).


4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus.


5) Experience with any ML library will be a plus.


6) Familiarity with using Git, writing unit tests for all code, and CI/CD concepts will be a plus as well.


7) Familiarity with basic code patterns like MVC.


8) A grasp of basic data structures.
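Since the posting stresses basic Python concepts in interviews, here is one classic example of the kind of detail that tends to come up (illustrative only):

```python
def append_bad(item, bucket=[]):
    # Bug: the default list is created once at definition time
    # and shared across every call that omits `bucket`.
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # Fix: use None as the sentinel and create a fresh list per call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2] -- shared state!
print(append_good(1), append_good(2))  # [1] [2]
```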


You can contact me on nine three one six one two zero one three two


The Nexora Group
Remote only
0 - 1 yrs
₹12000 - ₹18000 / mo
Python
Data Analytics
Artificial Intelligence (AI)

About The Nexora Group Inc.

The Nexora Group Inc. is a technology-driven organization focused on building intelligent digital solutions using modern software engineering and artificial intelligence technologies. Our teams work on projects involving data-driven applications, automation systems, and AI-powered tools designed to solve real-world business challenges.

We are looking for motivated and enthusiastic Python Developer Interns with an interest in Artificial Intelligence who want to gain practical experience working on live development projects.


Internship Responsibilities


  • Assist in developing backend applications using Python
  • Work on AI-related modules such as machine learning models, data processing pipelines, and automation tools
  • Write clean, scalable, and well-documented code
  • Support the development of APIs and backend services
  • Participate in debugging, testing, and performance optimization
  • Collaborate with development teams on project tasks and deliverables
  • Contribute to research and implementation of AI/ML solutions


Required Skills

  • Basic understanding of Python programming
  • Familiarity with data structures and algorithms
  • Interest in Artificial Intelligence and Machine Learning
  • Basic knowledge of NumPy, Pandas, or similar Python libraries
  • Understanding of REST APIs is a plus
  • Strong problem-solving skills
  • Ability to learn quickly and work in a collaborative environment


Preferred Qualifications

  • Students or recent graduates in Computer Science, IT, Data Science, or related fields
  • Basic knowledge of Machine Learning concepts
  • Experience with Git or version control systems is beneficial
  • Familiarity with Flask, Django, or FastAPI is a plus


What Interns Will Gain

  • Hands-on experience working on real-world development projects
  • Exposure to AI and machine learning development workflows
  • Mentorship from experienced developers
  • Opportunity to build a strong portfolio with practical project experience
  • Internship completion certificate based on performance and participation


Read more
Inferigence Quotient

at Inferigence Quotient

1 recruiter
Neeta Trivedi
Posted by Neeta Trivedi
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
Python
NodeJS (Node.js)
FastAPI
Docker
JavaScript
+16 more

3-5 years of experience as a full stack developer, with essential experience in the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.


Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.


Ability to manage a hosting environment, scale applications to handle load changes, and ensure accessibility and security compliance.

 

Testing of API endpoints.

 

Ability to build functional web applications and optimise them to reduce response time and improve efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.

 

Expert knowledge of Python and its frameworks with their best practices; expert knowledge of relational and NoSQL databases.


Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.

 

Must be conversant with Agile software development methodology, able to write technical documents and coordinate with test teams, and proficient with Git version control.

Read more
Neuvamacro Technology Pvt Ltd
Chennai
3 - 6 yrs
₹12L - ₹17L / yr
JavaScript
Python
Django
Flask
NodeJS (Node.js)
+11 more

Years of Experience – 3 to 6 years

Location – Chennai

Work Mode: Hybrid – 3 days mandatory Work From Office (WFO).

Job Type: Full-Time


Role Description:

• Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.

• Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.

• Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.

• Prepares and installs solutions by determining and designing system specifications, standards, and programming.

• Improves operations by conducting systems analysis, recommending changes in policies and procedures.

• Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.

• Protects operations by keeping information confidential.

• Provides information by collecting, analyzing, and summarizing development and service issues. Accomplishes engineering and organization mission by completing related results as needed.

• Supports and develops software engineers by providing advice, coaching, and educational opportunities.


Mandatory skills:

• Hands-on experience with web development in any of the following programming languages: Python, JavaScript.

• Hands-on experience in the following JavaScript framework: React.

• Hands-on experience in any of the following frameworks: Python (Django, Flask) or NodeJS (Express, NestJS).

• Experience with back-end development, basic microservices implementation, and containerization using Docker.

• Expertise in relational databases such as Postgres, MySQL, Oracle, etc.

• Expertise in NoSQL databases such as MongoDB, Amazon DynamoDB, Cassandra, etc.

• Good knowledge of any of the cloud providers such as Amazon Web Services, Microsoft Azure, or Google Cloud.

• Excellent verbal and written communication skills.

Read more
The Nexora Group
Remote only
0 - 1 yrs
₹1 - ₹2 / mo
Python
Artificial Intelligence (AI)

About the Internship


The Nexora Group Inc. is looking for enthusiastic and motivated interns who want to build practical experience in Data Science and Artificial Intelligence. This internship is designed to provide hands-on exposure to real-world datasets, machine learning techniques, and AI-driven problem solving.

Interns will work closely with our technical team to analyze data, build predictive models, and explore AI tools that support data-driven decision-making.


Key Responsibilities

  • Collect, clean, and preprocess structured and unstructured datasets
  • Perform exploratory data analysis (EDA) to identify trends and patterns
  • Develop machine learning models using Python-based libraries
  • Assist in building AI-powered data analysis workflows
  • Create dashboards, reports, and visualizations to communicate insights
  • Work with tools such as Python, Pandas, NumPy, and visualization libraries
  • Collaborate with team members on real-world data science projects
  • Document project findings and maintain clear technical reports
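The cleaning and EDA steps listed above can be sketched with Pandas. This is a toy example; the `region`/`sales` columns and the fill-with-mean choice are illustrative, not a prescription:

```python
import pandas as pd

# Toy dataset with the kinds of defects the cleaning step handles:
# duplicate rows and missing values.
raw = pd.DataFrame({
    "region": ["north", "north", "south", "south", "south"],
    "sales":  [100.0,   100.0,   None,    80.0,    120.0],
})

# Clean: drop exact duplicates, then fill missing sales with the column mean.
clean = raw.drop_duplicates().copy()
clean["sales"] = clean["sales"].fillna(clean["sales"].mean())

# Simple EDA: average sales per region.
summary = clean.groupby("region")["sales"].mean()
print(summary)
```

Real pipelines add steps such as type coercion, outlier handling, and train/test-aware imputation, but the shape of the work is the same.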


Required Skills


  • Basic knowledge of Python programming
  • Understanding of data analysis and statistics
  • Familiarity with Machine Learning concepts
  • Knowledge of libraries such as Pandas, NumPy, Matplotlib, or Scikit-learn
  • Strong analytical and problem-solving skills
  • Good communication and documentation skills


Preferred Qualifications


  • Students or recent graduates in Computer Science, Data Science, Statistics, Mathematics, or related fields
  • Basic understanding of Artificial Intelligence concepts
  • Familiarity with Jupyter Notebook or Google Colab
  • Interest in working with real-world datasets and analytics tools


What You Will Gain


  • Hands-on experience with Data Science and AI projects
  • Mentorship from experienced professionals
  • Internship completion certificate
  • Opportunity to build portfolio projects
  • Exposure to real-world industry workflows


Read more
CK-12 Foundation

at CK-12 Foundation

1 video
7 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 10 yrs
Up to ₹70L / yr (varies)
Natural Language Processing (NLP)
Transformer
Machine Learning (ML)
Python

About CK-12 Foundation

CK-12’s mission is to provide free access to open-source content and technology tools that empower both students and teachers to enhance learning across different styles, resources, competence levels, and circumstances.


To achieve this ambitious vision, CK-12 challenges the traditional education model by leveraging technology to revolutionize learning for students, teachers, and parents.


CK-12 operates as a non-profit organization so it can experiment with bold ideas and focus on doing the right thing for education. The organization is backed by Vinod Khosla, a renowned technology venture capitalist.


At CK-12, you’ll work in a dynamic, entrepreneurial, and innovative environment where passionate individuals collaborate to disrupt traditional education through technology.


Technology is at the heart of scaling education, and CK-12 builds solutions on a cloud-based (AWS) and AI-first platform delivering rich and interactive learning experiences.


If you are a great technologist who enjoys challenging the status quo and building innovative products, this could be the place for you.


Together, we aim to transform education globally.

Product Offerings

Flexi 2.0 – AI-Powered Student Tutor

https://www.flexi.org/

AI-Powered Teacher Assistant

https://www.ck12.org/pages/teacher-assistant/


Core Responsibilities


• Translate high-level directions and open-ended product ideas into deliverable ML projects and drive their completion.

• Architect and implement highly scalable ML solutions for systems such as multimodal information retrieval, conversational chatbots, recommender systems, and ranking systems.

• Own end-to-end product delivery from research and experimentation to production deployment.

• Work closely with cross-functional teams including Product, Engineering, DevOps, QA, and Content teams.

• Manage ML workflows involving data gathering, working with annotators, and collaborating with ML researchers.

• Extract and analyze large volumes of data to generate insights about student and teacher behavior based on platform usage.

• Design and build innovative ML-driven solutions that can improve learning experiences in the EdTech space.

• Apply statistical hypothesis testing and experimentation to evaluate and improve models.

• Continuously innovate and challenge the traditional approach to education through ML solutions.


Requirements


• Bachelor’s degree or higher in Computer Science or a related quantitative discipline, or equivalent practical experience.

• 4+ years of hands-on development experience with strong programming skills, preferably in Python.

• Expertise in deep learning approaches for NLP including transformer-based models, predictive modeling, search and recommendation systems, and autoregressive models.

• 2+ years of experience in NLP applications such as information retrieval, chatbots, summarization, or generative models.

• Proven experience building scalable ML applications on cloud infrastructure such as AWS, GCP, or Azure.

• Strong understanding of trade-offs between model architecture, deployment costs, and model accuracy.

• Ability to manage multiple tasks and collaborate effectively with geographically distributed teams.

• Up-to-date knowledge of advancements in NLP and computer vision and the ability to apply them in the education domain.
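The transformer-based models named in the requirements are built around scaled dot-product attention. A minimal NumPy sketch of that single operation (shapes and data here are arbitrary, and real models add multi-head projections, masking, and batching):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_q, n_k) similarity matrix
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Tiny example: 2 queries attending over 3 keys/values of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)
```

Interview discussions for roles like this often hinge on exactly these details: why the `sqrt(d_k)` scaling exists and what the attention-weight rows represent.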


Technical Skills

• Python, PyTorch, TorchServe

• Pandas

• SQL and NoSQL databases such as MySQL, MongoDB, Redis, and Redshift

• Cloud infrastructure (AWS / GCP / Azure)

• Vector databases and search technologies such as Elasticsearch

• Linux


Nice to Have

• Familiarity with Reinforcement Learning

• Experience with Deep Knowledge Tracing

Read more
MNK Global Corporate Solutions
Rithika Raghavan
Posted by Rithika Raghavan
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
Python
Django
Amazon Web Services (AWS)

About the Role

We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.


Key Responsibilities

•Design and develop backend services using Django and Python.

•Architect and implement microservices-based solutions for scalability and maintainability.

•Work with PostgreSQL and Redis for efficient data storage and caching.

•Build and maintain RESTful APIs and ensure robust API design principles.

•Implement system design best practices for high availability and fault tolerance.

•Containerize applications using Docker and manage deployments with Kubernetes.

•Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.

•Apply security best practices to protect data and application integrity.

•Collaborate with frontend, QA, and DevOps teams for seamless delivery.

•Mentor junior developers and conduct code reviews to maintain quality standards.
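The PostgreSQL-plus-Redis responsibility above usually means the cache-aside pattern. A minimal sketch, with a plain dict standing in for Redis and another for the database (names and TTL are illustrative):

```python
import time

class CacheAside:
    """Cache-aside: check the cache first, fall back to the database on a
    miss, then populate the cache with a TTL. A dict stands in for Redis."""

    def __init__(self, db, ttl_seconds=60):
        self.db = db        # stand-in for PostgreSQL
        self.cache = {}     # stand-in for Redis: key -> (value, expires_at)
        self.ttl = ttl_seconds
        self.misses = 0

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                 # cache hit, still fresh
        self.misses += 1
        value = self.db[key]                # miss: read the "database"
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

db = {"user:1": {"name": "Asha"}}
cache = CacheAside(db)
print(cache.get("user:1"))   # first call misses and loads from the db
print(cache.get("user:1"))   # second call is served from the cache
```

With real Redis, the `(value, expires_at)` tuple becomes `SET key value EX ttl`, and cache invalidation on writes is the part that deserves design attention.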


Required Skills & Expertise

•Django/Python – Advanced proficiency in backend development.

•Microservices Architecture – Strong understanding of distributed systems.

•PostgreSQL & Redis – Expertise in relational and in-memory databases.

•Docker/Kubernetes – Hands-on experience with containerization and orchestration.

•API Design & System Design – Ability to design scalable and secure systems.

•Cloud (AWS/Azure) – Practical experience with cloud services and deployments.

•Security Best Practices – Knowledge of authentication, authorization, and data protection.


Preferred Qualifications

•Experience with CI/CD pipelines and DevOps practices.

•Familiarity with message queues (e.g., RabbitMQ, Kafka).

•Exposure to monitoring tools (Prometheus, Grafana).


What We Offer

•Competitive salary and benefits.

•Opportunity to work on cutting-edge backend technologies.

•Collaborative and growth-oriented work environment.

Read more
MNK GCS
Bengaluru (Bangalore)
6 - 11 yrs
₹10L - ₹20L / yr
Python
Django

Key Responsibilities

• Design and develop backend services using Django and Python.

• Architect and implement microservices-based solutions for scalability and maintainability.

• Work with PostgreSQL and Redis for efficient data storage and caching.

• Build and maintain RESTful APIs and ensure robust API design principles.

• Implement system design best practices for high availability and fault tolerance.

• Containerize applications using Docker and manage deployments with Kubernetes.

• Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.

• Apply security best practices to protect data and application integrity.

• Collaborate with frontend, QA, and DevOps teams for seamless delivery.

• Mentor junior developers and conduct code reviews to maintain quality standards. 



Required Skills & Expertise

• Django/Python – Advanced proficiency in backend development.

• Microservices Architecture – Strong understanding of distributed systems.

• PostgreSQL & Redis – Expertise in relational and in-memory databases.

• Docker/Kubernetes – Hands-on experience with containerization and orchestration.

• API Design & System Design – Ability to design scalable and secure systems.

• Cloud (AWS/Azure) – Practical experience with cloud services and deployments.

• Security Best Practices – Knowledge of authentication, authorization, and data protection.

Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
DrSoumya Sahadevan
Posted by DrSoumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
Databricks
+2 more

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE), etc. With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
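Two of the pre-processing steps named in the responsibilities, deduplication and min-max scaling, can be sketched in plain Python (in the actual role these would run on PySpark over sensor-scale data; the reading records here are made up):

```python
def deduplicate(readings):
    """Drop repeated (timestamp, value) pairs while preserving order."""
    seen, out = set(), []
    for r in readings:
        key = (r["ts"], r["value"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def min_max_scale(values):
    """Scale values into [0, 1], a common normalization step."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

readings = [
    {"ts": 1, "value": 200.0},
    {"ts": 1, "value": 200.0},   # duplicate sensor reading
    {"ts": 2, "value": 300.0},
    {"ts": 3, "value": 400.0},
]
unique = deduplicate(readings)
scaled = min_max_scale([r["value"] for r in unique])
print(scaled)
```

In PySpark the same intent is typically expressed with `dropDuplicates` and a fitted scaler so that scaling parameters are computed once and reused across batches.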


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.

· 2 years of team handling experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have: MLOps and DevOps experience, including model lifecycle management.

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Read more
Timble Technologies

at Timble Technologies

1 recruiter
Preeti Bisht
Posted by Preeti Bisht
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
1 - 4 yrs
₹2L - ₹5L / yr
Advanced Linux Admin
Ansible
Terraform
Docker
Jenkins
+7 more

Job Title: DevOps Engineer

Location: Delhi, Arjan Garh

Job Type: Full-Time

IMMEDIATE JOINERS REQUIRED

 

About Us:

Timble is a forward-thinking organization dedicated to leveraging cutting-edge technology to solve real-world problems. Our mission is to drive innovation and create impactful solutions through artificial intelligence and machine learning.


About the Role

We are looking for a high-ownership Senior DevOps Engineer to architect and maintain the mission-critical infrastructure supporting our global algorithmic trading operations. You will be the bridge between development and live trading, ensuring zero-latency performance and 100% system availability.

Key Responsibilities

  • Infrastructure Architecture: Design scalable, fault-tolerant systems for high-frequency trading environments.
  • Performance Optimization: Tune Linux servers and Python environments for maximum speed and efficiency.
  • Incident Management: Lead real-time response for live trading systems, performing RCA and preventive fixes.
  • Automation & CI/CD: Build and enhance robust pipelines using Docker, Jenkins, and Ansible.
  • Proactive Monitoring: Implement advanced logging and alerting (Prometheus/Grafana) to ensure high uptime.
  • Database Admin: Manage relational databases and write optimized SQL for operational reporting.
  • Mentorship: Guide junior DevOps members and maintain rigorous system documentation.
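The "optimized SQL for operational reporting" responsibility can be sketched with the standard-library `sqlite3` module standing in for the relational database (table, columns, and sample data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trades (
        id INTEGER PRIMARY KEY,
        symbol TEXT NOT NULL,
        qty INTEGER NOT NULL,
        price REAL NOT NULL
    )
""")
# An index on the filter/grouping column keeps reporting queries
# off full table scans as the trades table grows.
conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")
conn.executemany(
    "INSERT INTO trades (symbol, qty, price) VALUES (?, ?, ?)",
    [("INFY", 10, 1500.0), ("INFY", 5, 1510.0), ("TCS", 8, 3900.0)],
)

# Operational report: traded notional per symbol.
rows = conn.execute("""
    SELECT symbol, SUM(qty * price) AS notional
    FROM trades
    GROUP BY symbol
    ORDER BY symbol
""").fetchall()
print(rows)
```

On PostgreSQL or MySQL the tuning conversation starts from the same place: `EXPLAIN` the reporting query and check that the planner actually uses the index.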

Technical Requirements

  • OS/Scripting: Advanced Linux Admin and expert-level Python scripting.
  • IaC & Tools: Hands-on experience with Ansible, Terraform, and Docker.
  • CI/CD: Proficiency in Jenkins or GitLab CI.
  • Data: Strong SQL skills with experience in performance tuning.
  • Education: B.Tech/M.Tech in Computer Science or related engineering field.
Read more
Webnyay
Noida
4 - 8 yrs
₹6L - ₹30L / yr
Google Cloud Platform (GCP)
Artificial Intelligence (AI)
Python
Django
Apache Kafka

We are looking to recruit an expert for backend software development at Webnyay. We are an enterprise SaaS startup catering to India and international markets. We are now growing fast and need a rockstar senior software developer who is an expert in Python/Django and GCP.


What we are looking for:

  • At least 6 years of professional software development experience.
  • At least 4 years of experience with Python & Django.
  • Proficiency in Natural Language Processing (tokenization, stopword removal, lemmatization, embeddings, etc.)
  • Experience in computer vision fundamentals, particularly object detection concepts and architectures (e.g., YOLO, Faster R-CNN)
  • Experience in search and retrieval systems and related concepts like ranking models, vector search, or semantic search techniques
  • Experience with multiple databases (relational and non-relational).
  • Experience with hosting on GCP and other cloud services.
  • Familiar with continuous integration and other automation.
  • Focus on code quality and writing scalable code.
  • Ability to learn and adopt new technologies depending on business requirements.
  • Prior startup experience will be a plus!
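The NLP fundamentals listed above (tokenization, stopword removal) can be sketched with the standard library alone. Production systems use spaCy or NLTK for these steps plus lemmatization; the regex tokenizer and the tiny stopword set here are deliberate simplifications:

```python
import re

# A deliberately tiny stopword list; real pipelines use NLTK's or spaCy's.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to"}

def tokenize(text):
    """Lowercase and split on alphanumeric runs: the simplest tokenizer."""
    return re.findall(r"[a-z0-9]+", text.lower())

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

text = "The quick brown fox is an expert at jumping"
tokens = remove_stopwords(tokenize(text))
print(tokens)
```

Downstream steps such as embeddings and semantic search operate on exactly this kind of cleaned token stream.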


Some of your responsibilities would include:

  • Work closely in a highly AGILE environment with a team of engineers.
  • Create and maintain technical documentation of technical design and solution.
  • Build products/features that are highly scalable, secure, highly available, high performing and cost-effective.
  • Help team in debugging.
  • Perform code reviews.
  • Understand the full feature set/ implementation and architecture of the applications.
  • Analyze business goals and product requirements and contribute to application architecture design, development and delivery.
  • Provide technical expertise for every phase of the project lifecycle; from concept development to solution design, implementation, optimization and support.
  • Act as an Interface with business teams to understand and create technical specifications for workable solutions within the project.
  • Explore and work with LLM APIs and Generative AI.
  • Make performance-related recommendations, identify and eliminate performance bottlenecks (hardware, software, configuration); drive performance tuning, re-design and re-factoring.
  • Participate in the software development lifecycle, which includes research, new development, modification, security, reuse, re-engineering and maintenance of common component libraries.
  • Participate in product definition and feature prioritization.
  • Collaborate with internal teams and stakeholders across business verticals.


Read more
CLOUDSUFI

at CLOUDSUFI

3 recruiters
Ayushi Dwivedi
Posted by Ayushi Dwivedi
Remote only
6 - 11 yrs
₹30L - ₹45L / yr
Google Cloud Platform (GCP)
SQL
Python

Highlights - Candidates must currently be located in Bangalore

Total Experience - 6-12 yrs

Joining time period - Within 30 days

GCP BigQuery expert, GCP Certified


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.

 

Our Values 

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

 

Equal Opportunity Statement 

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities

ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.

Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.

Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
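One common way to generate "unique, standardized IDs" as described above is to hash normalized entity attributes, so cosmetic differences in the source data resolve to the same ID. A sketch under that assumption (the `ent/` prefix, the attribute choice, and the 12-character truncation are all hypothetical scheme details):

```python
import hashlib

def entity_id(name, country):
    """Derive a stable, standardized ID from normalized entity attributes,
    so the same real-world entity always maps to the same ID."""
    normalized = f"{name.strip().lower()}|{country.strip().lower()}"
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return f"ent/{digest[:12]}"

# The same entity with cosmetic differences resolves to one ID.
a = entity_id("  Acme Corp ", "IN")
b = entity_id("acme corp", "in")
print(a, a == b)
```

Matching against existing IDs then becomes a lookup on the normalized key before minting a new one; fuzzy matching (aliases, transliterations) is the hard part this sketch leaves out.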

Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 

API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.

Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.

Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.

Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
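The schema-validation checks mentioned under Data Validation & Quality Assurance can be sketched as a small row validator (the `country`/`year`/`population` fields and range checks are invented for illustration):

```python
def validate_rows(rows, schema):
    """Schema check: every row must carry each required field with the
    right type and, where given, pass a range check."""
    errors = []
    for i, row in enumerate(rows):
        for field, (ftype, check) in schema.items():
            value = row.get(field)
            if not isinstance(value, ftype):
                errors.append((i, field, "wrong type or missing"))
            elif check is not None and not check(value):
                errors.append((i, field, "failed range check"))
    return errors

schema = {
    "country": (str, None),
    "year": (int, lambda y: 1900 <= y <= 2100),
    "population": (int, lambda p: p >= 0),
}
rows = [
    {"country": "IN", "year": 2021, "population": 1_400_000_000},
    {"country": "IN", "year": 1850, "population": -5},   # two violations
]
errors = validate_rows(rows, schema)
print(errors)
```

In a pipeline, such checks typically run as a gate between ingestion and knowledge-graph integration, with failing batches quarantined rather than silently dropped.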


Qualifications and Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.

Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.

Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.

Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:

Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)

Must Have - GCP Certification

Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)

Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling

Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).

Experience with data validation techniques and tools.

Familiarity with CI/CD practices and the ability to work in an Agile framework.

Strong problem-solving skills and keen attention to detail.

Read more
Hashone Career
Pune
4 - 7 yrs
₹8L - ₹14L / yr
React.js
NodeJS (Node.js)
Python

About Us

Wednesday is a technology consulting and engineering firm based in Pune. We specialise in helping digital-first businesses solve complex engineering problems. Our expertise lies in data engineering, applied AI, and app development. We offer our expertise through our services: Launch, Catalyse, Amplify, and Control.


We're a passionate bunch of people who take their work seriously. We deeply care about each other and are united by the cause of building teams that deliver great digital products & experiences.

Job Description

We are seeking Senior Software Engineers who can architect and ship fullstack digital products at a high bar — using AI-assisted development tools to move faster without cutting corners. This role spans platform, product, and go-to-market — you'll own backend systems, shape frontend experiences, make infrastructure decisions, and set a higher engineering standard for the team around you. The ideal candidate has designed systems they can defend, shipped products at scale, and knows what it takes to get there.

Requirements

Product & Client Ownership
Be the day-to-day technical owner on engagements — understand the client's business deeply, shape the product roadmap, and translate ambiguous problems into clear engineering direction. Show up to demos and reviews with the confidence to defend tradeoffs and flag risks early.

Architecture & Judgment
Make architectural decisions that hold up at scale. AI can generate code — your job is to decide what gets built, how it fits together, and when to push back. Evaluate tradeoffs, review TRDs, and set the technical direction the rest of the team executes against.

Fullstack Execution
Ship backend services, APIs, database schemas, and user-facing features end-to-end. Use AI-assisted tools (Cursor, Claude Code, Antigravity) to move at the speed of a small team without cutting corners on quality.

Platform & Reliability
Own cloud infrastructure, CI/CD, and production systems. Define how the team monitors, debugs, and responds to incidents. If something breaks at 2am, you've already thought about it.

AI & Automation
Drive AI adoption in products — LLM APIs, RAG pipelines, agentic workflows. Push for automation across client and internal workflows. Know what these tools are good at and, more importantly, where they fail.

Raising the Bar
Be the judgment layer for junior engineers who are moving fast with AI tools. Review code for what matters — not style, but correctness, scalability, and whether the author actually understood what they shipped. Run knowledge-sharing sessions. Onboard people well.
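The retrieval step of the RAG pipelines mentioned above can be sketched with cosine similarity over toy embedding vectors. The documents and their three-dimensional "embeddings" here are hardcoded stand-ins; in practice the vectors come from an embedding model and live in a vector database:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "embeddings": in a real pipeline these come from an embedding model.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Rank documents by cosine similarity to the query; keep the top k.
    The retrieved text is then placed into the LLM prompt as context."""
    ranked = sorted(documents, key=lambda d: cosine(query_vec, documents[d]),
                    reverse=True)
    return ranked[:k]

query = [0.85, 0.15, 0.05]  # a query "about refunds"
print(retrieve(query))
```

Generation then conditions the LLM on the retrieved passages, which is where knowing "where these tools fail" (stale indexes, irrelevant top-k hits) matters most.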

Must Haves


3–5 years of professional engineering experience with production systems you've owned end-to-end.

Active user of AI IDEs (Cursor, Claude Code, Antigravity, or similar).

Demonstrated system design ability — you've made architectural decisions and can evaluate trade-offs.

Good exposure to cloud platforms and deployments.

Familiarity with observability and monitoring tools — you can track down issues and identify bottlenecks.

Deep backend proficiency: API design, databases, microservices, distributed systems, event-driven architecture, and message brokers.

Worked with at least two of REST, GraphQL, or gRPC in production.

Eye for design — you care about the experiences you build for users.

High rate of learning — you figure things out fast.


Nice to Have


Cloud architecture experience (AWS, GCP, Azure) with containerisation and orchestration.

Familiarity with AI/ML: prompt engineering, embeddings, agent frameworks (LangChain, CrewAI, LangGraph).

Experience with automation and workflow tools (n8n, Make, Zapier).




Benefits

Mentorship: Work next to some of the best engineers and designers — and be one for others.

Freedom: An environment where you get to practice your craft. No micromanagement.

Comprehensive healthcare: Healthcare for you and your family.

Growth: A tailor-made program to help you achieve your career goals.

A voice that is heard: We don't claim to know the best way of doing things. We like to listen to ideas from our team.


REConnect Energy
Posted by Ariba Khan
Bengaluru (Bangalore)
4.5 - 7 yrs
Up to ₹30L / yr (varies)
Python
MLOps
Machine Learning (ML)
SQL
Amazon Web Services (AWS)

About Us:

REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.


We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore-based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences, and AI.


Responsibilities:

● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.

● Leadership - Lead a team of engineers and analysts, managing engineering development as well as round-the-clock service delivery. Provide mentorship and technical guidance to team members and contribute to their professional growth. Manage weekly and monthly reviews with team members and senior management.

● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.

● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership of uninterrupted product uptime.


Requirements:

● 4-5 years of experience building highly available systems

● 2-3 years experience leading a team of engineers and analysts

● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent

● Proficiency in Python programming, with expertise in data engineering and machine learning deployment

● Experience with databases, including MySQL and NoSQL stores

● Experience in developing and maintaining critical and high availability systems will be given strong preference

● Experience in software design using design principles and architectural modeling.

● Experience working with AWS cloud platform.

● Strong analytical and data driven approach to problem solving 

Leading provider of Capital Market solutions in India
Agency job via HyrHub by Neha Koshy
Bengaluru (Bangalore)
4 - 7 yrs
₹12L - ₹18L / yr
Python
Go Programming (Golang)
Docker
Kubernetes
Linux/Unix

Core Responsibilities:

  • Design & Development: Architect and implement scalable backend services and APIs using Python or Golang, ensuring high performance, resilience, and extensibility.
  • System Ownership: Take end-to-end ownership of critical modules, from design and development to deployment and support.
  • Technical Leadership: Conduct design and code reviews, enforce best practices, and mentor junior engineers to raise the team’s technical bar.
  • Collaboration: Work closely with product managers, architects, and other engineers to translate business requirements into technical solutions.
  • Performance & Reliability: Troubleshoot complex issues in production systems, identify root causes, and design sustainable long-term solutions.
  • Innovation: Evaluate new technologies, contribute to proof-of-concepts, and recommend tools that can improve developer productivity.
  • Process Improvement: Drive initiatives to improve coding standards, CI/CD pipelines, and automated testing practices.
  • Knowledge Sharing: Document designs, create technical guides, and share insights with the broader engineering team.


Experience and Expertise:

  • 4–7 years of backend development experience with Python or Golang.
  • Strong expertise in designing, developing, and scaling microservices and distributed systems.
  • Solid understanding of concurrency, multi-threading, and performance optimization.
  • Proficiency with databases (SQL/NoSQL), caching systems (Redis, Memcached), and messaging systems (Kafka, RabbitMQ, etc.).
  • Hands-on experience with Linux development, Docker, and Kubernetes.
  • Familiarity with cloud platforms (AWS/GCP/Azure) and related services.
  • Strong debugging, profiling, and optimization skills for production-grade systems.
  • Experience with AI-powered development tools is a strong plus; familiarity with concepts like 'agentic coding' for workflow automation or 'context engineering' for leveraging LLMs in system design is highly desirable.
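The concurrency item above often shows up in practice as fanning out I/O-bound calls with a bounded limit, so downstream services aren't overwhelmed. A sketch with asyncio, where `asyncio.sleep` stands in for real network I/O:

```python
import asyncio

# Fan out many I/O-bound calls with a bounded concurrency limit, a common
# backend pattern for calling downstream services safely.

async def fetch(item: int, sem: asyncio.Semaphore) -> int:
    async with sem:                 # at most 5 coroutines inside at once
        await asyncio.sleep(0.01)   # simulated I/O latency
        return item * 2

async def main() -> list:
    sem = asyncio.Semaphore(5)
    return list(await asyncio.gather(*(fetch(i, sem) for i in range(20))))

results = asyncio.run(main())
print(results[:5])  # gather preserves submission order: [0, 2, 4, 6, 8]
```

The same shape works with threads (`ThreadPoolExecutor`) when the workload isn't async-aware.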


Skills:

  • Strong problem-solving ability, with experience handling complex technical challenges.
  • Ability to lead technical initiatives and mentor junior engineers.
  • Excellent communication skills to collaborate with cross-functional teams and articulate trade-offs.
  • Self-motivated, proactive, and able to operate independently while aligning with team goals.
  • Passionate about engineering culture, quality, and developer productivity.


Leading provider of Capital Market solutions in India
Agency job via HyrHub by Neha Koshy
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹12L / yr
Python
Go Programming (Golang)
Docker
Kubernetes
Linux/Unix

Core Responsibilities:

  • Design, develop, and maintain backend services and APIs using Python or Golang.
  • Write high-quality, testable, and maintainable code with a focus on performance and scalability.
  • Implement automated tests and contribute to CI/CD pipelines.
  • Collaborate with product, QA, and DevOps teams for end-to-end feature delivery.
  • Troubleshoot production issues and provide timely resolutions.
  • Participate in design and architecture discussions to improve system efficiency.
  • Contribute to improving development processes, coding standards, and best practices.


Experience and Expertise:

  • 2–4 years of experience in backend development with Python or Golang.
  • Solid understanding of RESTful APIs, microservices, and distributed systems.
  • Strong knowledge of data structures, algorithms, and OOPS principles.
  • Hands-on experience with relational and/or NoSQL databases.
  • Familiarity with Linux development, Docker, and basic cloud concepts (AWS/GCP/Azure).
  • Proficiency with Git and version control workflows.
  • Familiarity with AI-powered development tools or exposure to projects involving large language models (LLMs) is a plus.


Skills:

  • Strong analytical and debugging skills with the ability to solve complex problems.
  • Good communication and collaboration skills across teams.
  • Ability to work independently with minimal supervision while being a strong team player.
  • Growth mindset – eagerness to learn new technologies and improve continuously.


Leading provider of Capital Market solutions in India
Agency job via HyrHub by Neha Koshy
Bengaluru (Bangalore)
1 - 2 yrs
₹2L - ₹7L / yr
Python
Go Programming (Golang)
Docker
Kubernetes
Linux/Unix

Core Responsibilities:

  • Design, develop, and maintain backend services using Python or Golang.
  • Write clean, efficient, and well-documented code following best practices.
  • Build and consume RESTful APIs and microservices.
  • Collaborate with QA, DevOps, and product teams for smooth feature delivery.
  • Participate in peer code reviews and technical discussions.
  • Debug and fix issues, ensuring system stability and performance.
  • Continuously learn and apply new technologies and tools in backend development.


Experience and Expertise:

  • 0–2 years of software development experience (internships or projects acceptable).
  • Proficiency in at least one backend programming language (Python or Golang).
  • Strong understanding of object-oriented programming and software fundamentals.
  • Knowledge of data structures, algorithms, and database concepts.
  • Familiarity with Linux-based development environments.
  • Exposure to Git and version control workflows.


Skills:

  • Strong analytical and problem-solving ability.
  • Willingness to learn, adapt, and take ownership.
  • Effective communication and teamwork skills.
  • Curiosity for emerging technologies, including AI-driven development, backend technologies, distributed systems, and modern engineering practices.
Remote only
2 - 4 yrs
₹30L - ₹37L / yr
Python
Vue.js

Strong Full stack/Backend engineer profile

Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)

Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures

Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS

Mandatory (AI): Must have hands-on experience using AI tools (e.g., Claude, Cursor, GitHub Copilot, Codeium, Deepdcode) for coding

Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis

Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS

Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring

Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design

Mandatory (Company): Product companies (B2B SaaS preferred)
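The REST API requirement above ultimately rests on one request-to-response contract. Frameworks like Flask or FastAPI build on this same shape; the sketch below strips it to a bare WSGI callable with a hypothetical health endpoint, called directly the way a server or test client would.

```python
import json

# Minimal WSGI application sketching a REST-style health endpoint.

def app(environ, start_response):
    if environ["PATH_INFO"] == "/health" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Invoke the app directly, as a WSGI server (or a test client) would:
statuses = []
resp = app({"PATH_INFO": "/health", "REQUEST_METHOD": "GET"},
           lambda status, headers: statuses.append(status))
print(statuses[0], resp[0])
```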

Appiness Interactive
Agency job via Appiness Interactive by Shashirekha S
Mumbai, Gurugram
3.5 - 6 yrs
₹4L - ₹10L / yr
Python
REST API
Django
Flask
FastAPI

Job Description – Backend Python Developer (Mid-Level)

📍 Location: Mumbai/Gurgaon | Full-time


Backend Python Developer

Role Overview

We are seeking a skilled Backend Python Developer to design, develop, and maintain backend services, APIs, and integrations that power our AI-driven automation solutions.

You will collaborate closely with senior engineers, AI/ML teams, and frontend developers to build scalable, high-performance systems. This role is ideal for professionals with solid backend experience who are eager to deepen their expertise in Python, cloud technologies, and AI-based applications.

Key Responsibilities

  • Develop and maintain backend APIs, services, and system integrations using Python
  • Collaborate on system design and architecture discussions with senior engineers
  • Write clean, scalable, and well-documented code following best practices
  • Ensure performance, scalability, and reliability in cloud environments
  • Design and manage SQL/NoSQL databases for structured and unstructured data
  • Support integration of AI/ML models into production workflows
  • Participate in code reviews, unit testing, and debugging
  • Contribute to CI/CD pipelines, containerization, and DevOps processes

Required Skills & Qualifications

  • 3–5 years of experience in backend development
  • Strong proficiency in Python
  • Hands-on experience with frameworks such as FastAPI, Flask, or Django
  • Experience building and consuming REST APIs (GraphQL is a plus)
  • Strong database knowledge: PostgreSQL, MySQL, MongoDB, or Redis
  • Familiarity with cloud platforms (AWS, GCP, or Azure)
  • Hands-on experience with Docker and Kubernetes
  • Strong understanding of OOP, data structures, algorithms, and design patterns

Preferred Skills

  • Exposure to AI/ML workflows or a strong interest in learning
  • Experience with message brokers such as Kafka, RabbitMQ, or Celery
  • Knowledge of asynchronous programming (asyncio, Celery, etc.)
  • Experience with unit testing frameworks (PyTest, unittest)
  • Understanding of API security and authentication (OAuth2, JWT)
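The OAuth2/JWT item above boils down to signed, verifiable tokens. An HS256-style signature can be sketched with the standard library alone; this is a simplified illustration with a made-up secret, and real services should use a vetted library such as PyJWT rather than hand-rolling this.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical key, for illustration only

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    """Produce a JWT-shaped token: header.payload.signature (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison prevents timing attacks on the signature.
    return hmac.compare_digest(b64url(expected), sig)

token = sign({"sub": "user-123"})
forged = token.rsplit(".", 1)[0] + "." + "A" * 43  # tampered signature
print(verify(token), verify(forged))
```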

What We Offer

  • Competitive compensation with growth opportunities
  • Opportunity to work on AI-first automation products used globally
  • Mentorship from experienced senior engineers
  • Flexible work environment
  • Continuous learning support in Python, Cloud, and AI/Automation technologies



ManpowerGroup
Posted by Shirisha Jangi
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
₹20L - ₹27L / yr
Data engineering
Java
Python
SQL
Scala

Immediate hiring for Senior Data Engineer

📍 Location: Hyderabad/Bangalore

💼 Experience: 7+ Years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0–1 month, serving-notice candidates only

 

   We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.

 

🔎 Key Responsibilities:

  • Data Pipeline Development
  • Data Modeling and Architecture
  • Data Integration and API Development
  • Data Infrastructure Management
  • Collaboration and Documentation

 

🎯 Required Skills:

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • 7+ years of proven experience in data engineering, software development, or related technical roles.
  • 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
  • 7+ years of experience with database systems, data modeling, and advanced SQL.
  • 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
  • Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
  • Strong analytical, problem-solving, and debugging skills with high attention to detail.
  • Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
  • Ability to adapt to rapidly evolving technologies and business requirements.
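A data pipeline at its smallest is extract-transform-load. The sketch below runs one against an in-memory SQLite database with made-up sensor data; the stack named above (SSIS, Databricks, Azure Data Factory) follows the same shape at scale.

```python
import sqlite3

# Extract: raw readings, some malformed.
raw_rows = [("sensor-1", "21.5"), ("sensor-2", "n/a"), ("sensor-3", "19.0")]

# Transform: validate and normalize, dropping rows that fail.
def transform(rows):
    for sensor, value in rows:
        try:
            yield sensor, float(value)
        except ValueError:
            continue  # a real pipeline would route this to a rejects table

# Load: insert into a queryable table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, temp_c REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", transform(raw_rows))

count, avg = conn.execute("SELECT COUNT(*), AVG(temp_c) FROM readings").fetchone()
print(count, avg)  # 2 valid rows loaded; the malformed one was dropped
```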

 

 

Tekit Software solution Pvt Ltd
Posted by Vrishti V
Bengaluru (Bangalore)
4 - 10 yrs
₹1L - ₹20L / yr
Python
Hadoop
Elastic Search
Logstash
Kibana

Data Engineer

Location: Bangalore

Experience: 4+ Years

Notice Period: Immediate Joiners


Key Skills

  • Strong experience in Python
  • Hands-on experience with Hadoop
  • Experience with the ELK Stack (Elasticsearch, Logstash, Kibana) (Mandatory)
  • Strong knowledge of SQL
  • Experience in building and maintaining data pipelines


Responsibilities

  • Design, build, and optimize scalable data pipelines
  • Work with large-scale datasets using Hadoop ecosystem
  • Implement and maintain ELK stack for data logging and monitoring
  • Write efficient and optimized SQL queries
  • Collaborate with engineering and analytics teams to support data-driven solutions
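The ELK-style ingestion above starts with structuring raw log lines, the job a Logstash grok filter performs before documents reach Elasticsearch. In Python it looks roughly like this (a sketch against a typical access-log format; the sample line is invented):

```python
import re

# Parse nginx-like access-log lines into structured records.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line: str):
    m = LOG_PATTERN.match(line)
    if not m:
        return None  # unparseable lines would go to a dead-letter index
    doc = m.groupdict()
    doc["status"] = int(doc["status"])
    doc["bytes"] = int(doc["bytes"])
    return doc

line = '10.0.0.1 - - [12/Mar/2024:10:00:00 +0000] "GET /api/v1/items HTTP/1.1" 200 512'
print(parse_line(line))
```

Once records are structured like this, Kibana dashboards and SQL-style aggregations become straightforward.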
US-based company
Agency job via Techno Wise by Chanchal Amin
Ahmedabad
3 - 6 yrs
₹5L - ₹18L / yr
TypeScript
Python
FastAPI
Django
Flask

Job Requirements

• 3+ years of professional backend development experience with Python, and working knowledge of TypeScript.

• Solid understanding of Python frameworks (e.g., FastAPI, Django, Flask) and TypeScript-based backend frameworks (e.g., Node.js, NestJS, Express).

• Hands-on experience using Temporal to design and orchestrate workflows.

• Proven expertise in data extraction, normalization, and deduplication.

• Strong experience implementing proxy solutions and navigating bot-detection mechanisms (e.g., Cloudflare).

• Experience with Docker, containerized deployments, and cloud platforms such as GCP or Azure.

• Proficiency with database technologies including MongoDB and Elasticsearch.

• Demonstrated experience designing and maintaining scalable, high-performance APIs.

• Working knowledge of software testing methodologies (unit, integration, and end-to-end).

• Familiarity with CI/CD pipelines and version control systems like Git.

• Strong problem-solving abilities, attention to detail, and comfort working in agile, fast-paced environments.

• Excellent communication skills with the ability to operate effectively in ambiguous or loosely defined problem spaces.

Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹23L / yr
Programming
Linux/Unix
Computer Networking
Routing & Switching
Firewall


We are seeking a skilled and detail-oriented Member of Technical Staff focusing on network infrastructure, Linux administration, and automation. The role involves managing and maintaining Linux-based systems and infrastructure, automating repetitive tasks, and ensuring smooth operations.


Requirements

 

  • In-depth experience with Linux systems (configuration, troubleshooting, networking, and administration)
  • Network infrastructure management knowledge. CCNA/CCNP or an equivalent certification is a plus
  • Scripting skills in at least one language (e.g., Bash, Python, Go).
  • Knowledge of version control systems like Git and experience with branching, merging, and tagging workflows
  • Experience with virtualization technologies such as Proxmox or VMWare, including the design, implementation, and management of virtualized infrastructures. Understanding of virtual machine provisioning, resource management, and performance optimization in virtual environments.
  • Experience with containerization technologies like Docker
  • Familiarity with monitoring and logging tools.
  • Experience with end point security.


Responsibilities

 

  • Network Infrastructure Management: Configure, manage, and troubleshoot routers, switches, firewalls, and wireless networks. Maintain and optimize network performance to ensure reliability and security.
  • Linux Administration: Manage and maintain Linux-based systems, ensuring high availability and performance.
  • Infrastructure Management: Managing servers, networks, storage, and other infrastructure components, capacity planning, and disaster recovery.
  • Automation: Scripting (Bash, Python, Golang, etc.), configuration management (Ansible, Puppet, Chef).
  • Virtualization: Design, implement, and manage virtualized environments, ensuring optimal performance and resource efficiency.


Enjoy a great environment, great people, and a great package

  • Stock Appreciation Rights - Generous pre series-B stock options
  • Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements 
  • Health Insurance - Premium health insurance for employee, spouse and children 
  • Working Hours - Flexible working hours with sole focus on enabling a great work environment 
  • Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills 
  • Make a Difference - We're here because we want to make an impact on the world - we hope you do too!


Foss Infotech
Posted by HR Foss
Chennai, Coimbatore
2 - 5 yrs
₹3L - ₹7L / yr
Python
Odoo (OpenERP)
PostgreSQL
JavaScript

Role: ODOO Developer

Exp: 2+ Years

Location : Chennai

Preferred : Chennai Based Candidates


Key Responsibilities

  • Develop and customise Odoo modules based on business requirements.
  • Design, develop, and maintain ERP applications using the Odoo framework.
  • Implement and customise Odoo Manufacturing (MRP) modules including Work Orders, Bills of Materials (BoM), Routings, and Production Planning.
  • Integrate third-party applications and APIs using web services.
  • Work with the PostgreSQL database for data management, optimisation, and administration.
  • Develop Odoo views, reports, and UI components using HTML, CSS, XML.
  • Support server deployment, troubleshooting, and performance optimisation of Odoo applications.
  • Understand and enhance existing Odoo functionalities and provide technical improvements.
  • Collaborate with functional teams to translate business requirements into technical solutions.
  • Interact with clients and functional teams to understand requirements and support project delivery.


Required Skills

  • 2 years of experience in Odoo (OpenERP) development and customisation.
  • Hands-on experience in Odoo Manufacturing (MRP) module implementation and customisation.
  • Familiarity with Python web frameworks such as Django or Flask.
  • Strong understanding of Object-Oriented Programming (OOP).
  • Experience with web services and API integrations.
  • Experience with PostgreSQL database management and optimisation.
  • Understanding of ORM (Object Relational Mapper) frameworks.
  • Knowledge of server deployment and troubleshooting.



Remote only
4 - 6 yrs
₹20L - ₹25L / yr
Automation
Test Automation (QA)
Playwright
Appium
API

Role Overview

We are looking for a QA Automation Engineer who can leverage AI-driven testing approaches to improve automation coverage, test reliability, and data generation.

The ideal candidate should have strong experience in backend-heavy automation testing, modern automation frameworks, and using AI tools to generate test cases, maintain test scripts, and create synthetic data for testing.


Key Responsibilities

  • Design and develop automated test frameworks for backend and API-heavy applications.
  • Use AI tools to generate test scripts from requirements (e.g., Gherkin/Cucumber-based test generation).
  • Implement and maintain self-healing test automation frameworks that adapt to UI changes.
  • Develop automated tests using Playwright, Appium, and other modern automation tools.
  • Create synthetic test data using AI while ensuring PII compliance.
  • Perform backend stress testing and API validation.
  • Work closely with engineering teams to ensure product quality and release readiness.
  • Continuously improve test coverage, test reliability, and automation efficiency.
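The synthetic, PII-compliant test data responsibility above rests on one principle: records must be clearly fake and reproducible. Real setups might use an LLM or Faker; this sketch uses a seeded generator and invented vocabularies to show the principle.

```python
import random

# Deterministic, clearly synthetic user records for test fixtures.
# No real PII: names come from fixed fake vocabularies, emails use a
# reserved test domain, and the seed makes every run reproducible.

FIRST = ["Asha", "Ravi", "Meera", "Karan"]
LAST = ["Test", "Demo", "Sample"]

def make_users(n: int, seed: int = 42) -> list:
    rng = random.Random(seed)
    users = []
    for i in range(n):
        first, last = rng.choice(FIRST), rng.choice(LAST)
        users.append({
            "id": i + 1,
            "name": f"{first} {last}",
            "email": f"{first.lower()}.{last.lower()}{i}@example.test",
        })
    return users

users = make_users(3)
print(users[0]["email"])
```

Seeding matters: a flaky fixture that changes between runs undermines exactly the test reliability this role is meant to improve.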


Must-Have Skills

  • 4+ years of experience in QA Automation
  • Strong experience in automation testing frameworks
  • Hands-on experience with Playwright for web automation
  • Experience with Appium for mobile automation
  • Proficiency in Python for test scripting and data generation
  • Experience writing BDD-style test cases (Gherkin / Cucumber)
  • Experience in API testing and backend automation
  • Familiarity with AI-assisted test generation tools
  • Strong knowledge of CI/CD pipelines and automated testing workflows


Relevant Skills

  • Backend automation testing
  • Test automation frameworks design
  • AI-assisted test generation
  • Synthetic test data generation
  • Performance and stress testing
  • API testing tools (Postman, REST clients)
  • Test reporting and debugging
  • Version control using Git


AI & Automation Expertise

  • Using AI tools to generate test cases from requirements
  • Experience with self-healing test automation frameworks such as Mabl or Testim
  • Using AI to generate synthetic financial datasets for testing
  • Testing AI-powered applications or AI features


Tools & Technologies

  • Playwright
  • Appium
  • Python
  • Cucumber / Gherkin
  • CI/CD tools
  • Git


Strong Plus

  • Experience working in the Finance / FinTech sector
  • Experience testing AI-powered applications
  • Experience working closely with AI engineering teams


A LITTLE BIT ABOUT THE COMPANY:

Established in 2017, Fountane Inc is one part Digital Product Studio that specializes in building superior product experiences, and one part Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million-valuation companies in the US, and a handful of sister ventures for large corporations including Target, US Ventures, and Imprint Engine.

We’re a team of 100 from around the world that is radically open-minded and believes in excellence, respecting one another, and pushing our boundaries further than they’ve ever been.


Albert Invent
Posted by Nikita Sinha
Bengaluru (Bangalore)
1 - 4 yrs
Up to ₹22L / yr (varies)
Automation
Terraform
Python
NodeJS (Node.js)
Amazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Work closely with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Be responsible for the design and delivery of the mission-critical stack with a focus on security, resiliency, scale, and performance.
  • Own end-to-end performance and operability.
  • Demonstrate a clear understanding of automation and orchestration principles.
  • Act as the escalation point for complex or critical issues that have not yet been documented as Standard Operating Procedures (SOPs).
  • Use a deep understanding of service topology and dependencies to troubleshoot issues and define mitigations.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 1+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience with Infrastructure as Code (IaC), preferably using Terraform.
  • Strong expertise in Python or Node.js, including designing RESTful APIs and microservices architecture.
  • Strong expertise in cloud infrastructure (AWS) and platform technologies including microservices, APIs, and distributed systems.
  • Hands-on experience with observability stacks including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools such as CircleCI and performance testing using K6.
  • Passion for bringing more automation and engineering standards to organizations.
  • Experience building high-performance APIs with low latency (<200 ms).
  • Ability to work in a fast-paced environment and collaborate with peers and leaders.
  • Ability to lead technically, mentor engineers, and contribute to hiring and team growth.
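The low-latency requirement above (<200 ms) is normally verified against percentiles rather than averages, because a handful of slow requests can hide behind a healthy mean. A p95 check over sampled response times, using only the standard library:

```python
import statistics

# Response-time samples in ms, with 10% slow outliers mixed in.
samples_ms = [12, 15, 14, 18, 22, 19, 16, 13, 250, 17] * 10

def p95(samples: list) -> float:
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(samples, n=20)[18]

print(f"avg={statistics.mean(samples_ms):.1f}ms p95={p95(samples_ms):.1f}ms")
```

With this data the mean stays under 50 ms while p95 reports 250 ms, which is why SLOs are quoted in percentiles.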

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
  • Experience building internal developer platforms (IDPs) or reusable engineering frameworks.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (SOC2, HIPAA, etc.).


About Albert Invent


Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, helping bring better products to market faster.

Why Join Albert Invent

  • Work with a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
  • Collaborate with world-class scientists and technologists to redefine how new materials are discovered and developed.
  • Culture built on curiosity, collaboration, ownership, and continuous learning.
  • Opportunity to build cutting-edge AI tools that accelerate real-world R&D and solve global challenges such as sustainability and advanced manufacturing.


MNK GCS
Posted by Shreya Sipani
Bengaluru (Bangalore)
7 - 11 yrs
₹25L - ₹30L / yr
React.js
Python
Django
RESTful APIs
FastAPI
Mandatory Skills:

  • Python (min 4 yrs)
  • React.js (min 4 yrs)
  • Django, FastAPI (min 4 yrs)
  • Solid understanding of RESTful APIs and backend-frontend integration
  • PostgreSQL / MySQL / MongoDB


Remote only
0 - 0 yrs
₹1L - ₹2L / yr
Python
React.js
HTML/CSS3

We are looking for passionate and motivated Developers to join our growing technical team. The ideal candidate should have strong foundational knowledge in Python/Django or React with Django and be eager to work on real-time web development projects.


Open Positions:

Python Django Developer

React + Django Developer

Key Responsibilities:

  • Develop, test, and maintain scalable web applications.
  • Write clean, efficient, and reusable code using Django and/or React.
  • Collaborate with UI/UX designers and backend developers to implement new features.
  • Debug, troubleshoot, and optimize application performance.
  • Participate in code reviews and contribute to team discussions.
  • Stay updated with the latest web development trends and technologies.

Requirements:

  • Basic to strong knowledge of Python and Django framework.
  • Familiarity with React.js (for React + Django role).
  • Understanding of REST APIs and database concepts.
  • Knowledge of HTML, CSS, and JavaScript.
  • Strong problem-solving and logical thinking skills.
  • Good communication and teamwork abilities.
  • Freshers and career restart candidates are welcome to apply.



More Info:

Company: Altos Technologies


Website: www.altostechnologies.in


Job Type: Permanent Job


Industry: IT / Web Development


Function: Software Development


Employment Type: Full-time


Location: Kochi & Chennai

Remote, Jaipur
3 - 4 yrs
₹3L - ₹6L / yr
Python
Django
Flask
FastAPI
PostgreSQL

We're hiring a Python Developer in Jaipur.

Not looking for someone who can recite design patterns. Looking for someone who can open a Django codebase, figure out what's broken, and fix it by end of day. 3-4 years. Django / Flask / FastAPI. REST APIs. PostgreSQL. If you've maintained production code (not just built tutorial projects) — this is your role.

Full-time | Jaipur | Industry-standard pay | Small team = real ownership

Service-based company
Agency job via Codemind Staffing Solutions by Krishna kumar
Chennai
4 - 8 yrs
₹15L - ₹30L / yr
Python
Generative AI
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
FastAPI

Job Title: Python Backend / GenAI Engineer (4+ Years)

Job Summary:

Looking for a Python Backend Engineer with experience in Generative AI, LangGraph workflows, data engineering, and AI evaluation using Arize AI.

Responsibilities

* Develop backend APIs using Python (FastAPI / Flask / Django)

* Build Generative AI and RAG-based applications

* Design LangGraph / agent workflows

* Create data engineering pipelines (ETL, data processing)

* Implement LLM monitoring and evaluation using Arize AI

* Integrate vector databases and AI services

* Maintain scalable and production-ready backend systems
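As a rough illustration of the retrieval step in a RAG application, the sketch below ranks documents by cosine similarity over toy 3-dimensional vectors (real systems use an embedding model plus a vector database such as FAISS or Pinecone, but the ranking logic is the same):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, docs, k=2):
    """Rank (text, embedding) pairs by similarity to the query; keep top-k.
    The retrieved texts would then be stuffed into the LLM prompt as context."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

In a LangGraph / LangChain workflow this retrieval step would be one node in the agent graph, with the LLM call consuming its output.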

Required Skills

* 4+ years of Python backend development

* Experience in Generative AI / LLM applications

* Knowledge of LangGraph / LangChain

* Experience in data engineering pipelines

* Familiarity with Arize AI or model evaluation tools

* Understanding of REST APIs, databases, Docker

Good to Have

* Cloud platforms (Azure / AWS)

* Vector databases (FAISS, Pinecone, Azure AI Search)


Talent Pro
Remote only
5 - 8 yrs
₹30L - ₹40L / yr
skill iconPython
skill iconJava
SQL

Strong Senior Backend Engineer profiles

Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems

Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks: FastAPI / Django (Python), Spring (Java), or Express (Node.js).

Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework

Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization

Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices

Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)

Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D

Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred

Metron Security Private Limited
Prathamesh Shinde
Posted by Prathamesh Shinde
Pune
2 - 7 yrs
₹5L - ₹12L / yr
skill iconPython
skill iconGo Programming (Golang)
skill icon.NET
skill iconNodeJS (Node.js)

Job Description:


We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!


Location - Pune, Baner.

Interview Rounds - In Office


Key Responsibilities:

Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang

Develop and maintain clean and scalable code following best practices

Apply Object-Oriented Programming (OOP) concepts in real-world development

Collaborate with front-end developers, QA, and other team members to deliver high-quality features

Debug, optimize, and improve existing systems and codebase

Participate in code reviews and team discussions

Work in an Agile/Scrum development environment


Required Skills:

Strong experience in Python or Golang (working knowledge of both is a plus)


Good understanding of OOP principles

Familiarity with RESTful APIs and back-end frameworks

Experience with databases (SQL or NoSQL)

Excellent problem-solving and debugging skills

Strong communication and teamwork abilities


Good to Have:

Prior experience in the security industry

Familiarity with cloud platforms like AWS, Azure, or GCP

Knowledge of Docker, Kubernetes, or CI/CD tools

Hyderabad
5 - 8 yrs
₹15L - ₹25L / yr
ETL
Snowflake
skill iconPython
SQL
Fivetran
+4 more

Role Overview


We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.

You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.


Key Responsibilities


  • Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
  • Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
  • Develop and optimize data models, tables, and transformations in Snowflake.
  • Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
  • Ensure data reliability, integrity, and performance across pipelines.
  • Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
  • Implement data quality validation frameworks and automated checks across pipelines.
  • Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
  • Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
  • Collaborate with engineering, analytics, and product teams in an Agile development environment.
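A minimal, dependency-free sketch of the kind of automated check such a validation framework codifies: a null-rate threshold on one column (the column names and 5% threshold are illustrative; tools like Great Expectations or Deequ provide this out of the box):

```python
def null_rate(rows, column):
    """Fraction of missing/empty values for one column over a list of dicts."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) in (None, ""))
    return missing / len(rows)

def run_check(rows, column, max_null_rate=0.05):
    """One data-quality expectation: the column's null rate must stay
    at or below a threshold; returns a result record for monitoring."""
    rate = null_rate(rows, column)
    return {"column": column, "null_rate": rate, "passed": rate <= max_null_rate}
```

In a pipeline, an Airflow task would run a battery of such checks after each load and alert (or halt downstream tasks) when any `passed` flag is false.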


Required Technical Skills


Core Technologies


  • Strong hands-on experience with SQL
  • Python for data transformation and pipeline development
  • Workflow orchestration using Apache Airflow
  • Experience working with Snowflake data warehouse


Data Engineering Expertise


  • Strong understanding of ETL / ELT pipeline design
  • Data profiling and data quality validation techniques
  • Experience building data ingestion pipelines from APIs, files, and databases
  • Data modeling and schema design


Tools & Platforms


  • Data Quality Tools: Deequ, Great Expectations (GX), Splink
  • Data Integration Tools: Fivetran, Workato, Informatica
  • Cloud Platforms: AWS (preferred)
  • Version Control & DevOps: Git, CI/CD pipelines


Qualifications


  • 5–8 years of experience in Data Quality Engineering / Data Engineering
  • Strong expertise in SQL, Python, Airflow, and Snowflake
  • Experience working with large-scale datasets and distributed data systems
  • Solid understanding of data engineering best practices across the development lifecycle
  • Experience working in Agile environments (Scrum, sprint planning, etc.)
  • Strong analytical and problem-solving skills


What We Look For


  • Passion for data accuracy, reliability, and governance
  • Ability to identify and resolve complex data issues
  • Strong collaboration skills across data, engineering, and analytics teams
  • Ownership mindset and attention to data integrity and performance


Why Join Us


  • Opportunity to work on modern data platforms and large-scale datasets
  • Collaborate with high-performing data and engineering teams
  • Exposure to cloud data architecture and modern data tools
  • Competitive compensation and strong career growth opportunities
Hyderabad
5 - 8 yrs
₹15L - ₹30L / yr
skill iconJavascript
skill iconNodeJS (Node.js)
FastAPI
TypeScript
skill iconPython

Role Overview


We are looking for a highly skilled Senior Full Stack Developer with strong expertise in modern backend technologies and scalable web application development. The ideal candidate is passionate about building high-performance applications, robust APIs, and scalable systems while collaborating with cross-functional teams to deliver impactful solutions.


This role requires a developer who can work as an individual contributor, solve complex technical challenges, and build products that create real business impact.


Key Responsibilities


  • Design, develop, and maintain scalable full-stack web applications
  • Build and optimize robust backend services and RESTful APIs
  • Develop high-performance applications using Node.js and FastAPI
  • Collaborate with product managers, designers, and engineering teams to deliver end-to-end solutions
  • Ensure application performance, security, scalability, and reliability
  • Write clean, maintainable, and well-tested code
  • Participate in architecture discussions and code reviews
  • Troubleshoot complex production issues and provide effective technical solutions
  • Follow modern development practices, coding standards, and CI/CD processes


Technical Skills


Core Technologies


  • JavaScript – Advanced proficiency
  • TypeScript – Strong hands-on experience
  • Node.js – Strong backend development expertise
  • Python (FastAPI) – API development and integration


Additional Skills (Good to Have)


  • Experience with modern frontend frameworks such as React / Angular / Vue
  • Experience with REST API design and microservices architecture
  • Knowledge of cloud platforms (AWS / Azure / GCP)
  • Experience with Docker, CI/CD pipelines
  • Familiarity with databases such as PostgreSQL, MySQL, or MongoDB


Required Qualifications


  • 5–8 years of experience in full-stack development
  • Proven experience building scalable web applications and APIs
  • Strong problem-solving and analytical skills
  • Experience working in Agile development environments
  • Ability to work independently and deliver high-quality solutions


What We Look For


  • Passion for clean code and scalable architecture
  • Strong ownership mindset
  • Ability to solve complex technical challenges
  • Excellent communication and collaboration skills
KJBN labs
Posted by sakthi ganesh
Bengaluru (Bangalore)
3 - 6 yrs
₹6L - ₹12L / yr
SQL
PL/SQL
skill iconPython

If you are good at writing complex queries, very good at Python, good at debugging, very good at understanding complex systems, able to swim through logs to find the dropping point, and have been on the firefighting side addressing bugs in live production systems, send us your resume.
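"Swimming through logs to find the dropping point" can be mechanized. The sketch below assumes a hypothetical `req=<id> stage=<name>` log format (not from this posting) and reports the first pipeline stage where request ids stop showing up:

```python
import re

def find_drop(log_lines, stages):
    """Count distinct request ids per stage; return the first stage whose
    count falls below the previous stage's, plus how many requests were lost."""
    pattern = re.compile(r"req=(\w+) stage=(\w+)")
    seen = {stage: set() for stage in stages}
    for line in log_lines:
        m = pattern.search(line)
        if m and m.group(2) in seen:
            seen[m.group(2)].add(m.group(1))
    for prev, cur in zip(stages, stages[1:]):
        lost = len(seen[prev]) - len(seen[cur])
        if lost > 0:
            return cur, lost
    return None, 0
```

Comparing distinct-id counts stage by stage is the same diagnosis you would otherwise do by grepping each stage's logs by hand.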

HireTo
Rishita Sharma
Posted by Rishita Sharma
Hyderabad
5 - 13 yrs
₹15L - ₹30L / yr
snowflake
skill iconPython
SQL
Windows Azure
databricks
+4 more

Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup

Location: Hyderabad (Onsite)

Immediate to 15 days joiners

Experience: 5 to 13 years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

  • Client onboarding from a data perspective
  • Understanding complex insurance data flows
  • Designing secure, scalable ingestion pipelines
  • Establishing strong data modeling and governance standards

This role sits at the intersection of technology, data architecture, security, and business onboarding.


Key Responsibilities

  • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
  • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
  • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
  • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
  • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
  • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
  • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
  • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
  • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
  • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
  • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards

Required Technical Skills

  • Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
  • Platforms: Azure, AWS, Databricks, Snowflake
  • ETL / Orchestration: Airflow or similar frameworks
  • Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
  • Visualization Exposure: Power BI
  • Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
  • Integrations: APIs, real-time data streaming, ML model integration exposure

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Strong ability to align technical solutions with business objectives
  • Excellent communication and stakeholder management skills

What We Offer

  • Direct collaboration with the core US data leadership team
  • High ownership and trust to manage the function end-to-end
  • Exposure to a global environment with advanced tools and best practices
Remote only
2 - 7 yrs
₹5L - ₹15L / yr
DevOps
CI/CD
skill iconDocker
skill iconKubernetes
skill iconAmazon Web Services (AWS)
+8 more

BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.


Key Responsibilities:

 1: Design, build, and maintain CI/CD pipelines for faster and more reliable deployments.

 2: Manage and monitor cloud infrastructure and servers.

 3: Automate build, testing, and deployment processes.

 4: Collaborate with development and QA teams to improve release cycles.

 5: Monitor system performance and ensure high availability and reliability.

 6: Troubleshoot infrastructure and deployment issues.

 7: Implement security best practices in DevOps workflows.
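The build-test-deploy automation in points 1 and 3 reduces to an ordered command plan; a sketch below (image name and steps are illustrative, and in practice this lives in a Jenkins or GitHub Actions config rather than a Python script):

```python
import subprocess

def ci_commands(image, tag):
    """Ordered commands a minimal CI/CD pipeline would run:
    test, build a container image, then push it to a registry."""
    ref = f"{image}:{tag}"
    return [
        ["pytest", "-q"],
        ["docker", "build", "-t", ref, "."],
        ["docker", "push", ref],
    ]

def run_pipeline(image, tag, dry_run=True):
    """Execute the plan; dry_run only prints the commands."""
    for cmd in ci_commands(image, tag):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # check=True stops the pipeline on first failure
```

Failing fast at the test step, before any image is built or pushed, is the property every CI tool in the required-skills list (Jenkins, GitHub Actions) is configured to enforce.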


RequiredSkills:

 1: Strong understanding of DevOps principles and CI/CD pipelines.

 2: Experience with Docker, Kubernetes, or containerization technologies.

 3: Familiarity with cloud platforms such as AWS, Azure, or GCP.

 4: Experience with Git, Jenkins, GitHub Actions, or similar tools.

 5: Basic scripting knowledge (Bash, Python, or Shell).

 6: Good understanding of Linux systems and networking concepts.


Eligibility:

 1: Experience: 2 – 7 years

 2: Qualification: Bachelor's degree in Computer Science, IT, or related field

 3: Strong analytical and problem-solving skills.


Location: Chennai / Remote


Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd

Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snowflake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.
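On the SQL side, the window-function pattern behind most running-total dashboard views can be tried locally. The sketch below uses SQLite (Python stdlib, window functions need SQLite 3.25+) with made-up sales rows; the same `SUM(...) OVER (PARTITION BY ...)` syntax carries over to Snowflake:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("APAC", 100), ("APAC", 300), ("EMEA", 200)],
)

# Running total per region: the window is partitioned by region and
# ordered by amount, so each row sees the cumulative sum so far.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount) AS running
    FROM sales
    ORDER BY region, amount
""").fetchall()
```

Pushing such aggregations into the warehouse query, rather than computing them in Tableau calculated fields, is one of the standard performance-tuning moves this role calls for.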

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, dbt, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconJava
skill iconAmazon Web Services (AWS)



We’re Hiring Backend Developers | Java / Go / Python | 3–5 Years | Bangalore

We are expanding our engineering team and looking for talented Backend Developers with 3–5 years of experience to join us in Bangalore.

If you enjoy building scalable systems, working with modern cloud technologies, and solving complex problems, this opportunity is for you!


💼 Position

Backend Developer (Java / Go / Python)

📍 Location: Bangalore

👨‍💻 Experience: 3–5 Years

🔎 What You Bring

✔ Strong proficiency in Go, or similar backend languages such as Python with FastAPI or Java with Spring Boot.

✔ Experience designing RESTful APIs

✔ Hands-on experience with AWS / GCP

✔ Experience working with PostgreSQL, Redis, Kafka, or SQS

✔ Strong experience with Microservices architecture

✔ Hands-on experience with CI/CD pipelines

✔ Experience with containerized environments (Docker / Kubernetes)

✔ Familiarity with monitoring tools like Prometheus, Grafana, and Spring Actuator

✔ Strong understanding of data structures, algorithms, and system design fundamentals

✔ Ability to own features end-to-end and solve complex engineering problems

✔ Strong focus on code quality, observability, and operational ownership

✔ Comfortable working in fast-paced, high-growth environments





TVARIT GmbH
Posted by Dr. Soumya Sahadevan
Pune
5 - 15 yrs
₹20L - ₹38L / yr
skill iconReact.js
API
AWS CloudFormation
skill iconDjango
skill iconNodeJS (Node.js)
+7 more

Availability: Full time 

Location: Pune, India 

Experience: 5-6 years

 

Tvarit Solutions Private Limited is a wholly owned subsidiary of TVARIT GmbH, Germany. TVARIT provides software to reduce manufacturing waste such as scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 3 years. TVARIT has been ranked among the top 8 of 490 AI companies by the European Data Incubator, apart from many more awards from the German government and industrial organizations, making TVARIT one of the most innovative AI companies in Germany and Europe.

 

We are looking for a passionate Full Stack Developer Level 2 to join our technology team in our Pune centre. You will be responsible for architecture, design, development, and testing, leading the software development team, and working on infrastructure development that will support the company's solutions. You will get an opportunity to work closely on projects involving the automation of manufacturing processes.

 

Key Responsibilities 

· Full Stack Development: Design, develop, and maintain scalable web applications using React with TypeScript for the frontend and Node.js/Python for the backend.

· AI Integration: Collaborate with data scientists and ML engineers to integrate AI/ML models into the SaaS platform, ensuring seamless performance and usability.

· API Development & Optimization: Build and optimize high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI) to support real-time data processing and analytics.

· Database Engineering: Design, manage, and optimize data storage using relational (PostgreSQL), NoSQL (MongoDB/DynamoDB), graph, and vector databases for handling complex industrial data.

· Cloud-Native Deployment: Deploy, monitor, and manage services in containerized environments using Docker and Kubernetes on Linux-based systems (Ubuntu/Debian).

· System Architecture & Design: Contribute to architectural decisions, leveraging OOPs, microservices, domain-driven design, and design patterns to ensure scalability, security, and maintainability.

· Data Handling & Processing: Work with large-scale manufacturing datasets using Python (pandas) to enable predictive analytics and AI-driven insights.

· Collaboration & Agile Delivery: Partner with cross-functional teams—including product managers, manufacturing domain experts, and AI researchers—to translate business needs into technical solutions.

· Performance & Security: Ensure robust, secure, and high-performance software by implementing best practices in algorithms, data structures, and system design.

· Continuous Improvement: Stay updated on emerging technologies in AI, SaaS, and manufacturing systems to propose innovative solutions that enhance product capability.
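As a flavor of the data-handling work described above, here is a stdlib sketch with made-up machine readings; on the job this would be a pandas `groupby` over real sensor data, and the field names are illustrative:

```python
from collections import defaultdict

def scrap_rate_by_machine(readings):
    """readings: iterable of dicts with 'machine', 'produced', 'scrap' keys.
    Returns total scrap / total produced per machine, the kind of KPI
    that feeds predictive-analytics models for waste reduction."""
    produced = defaultdict(int)
    scrap = defaultdict(int)
    for r in readings:
        produced[r["machine"]] += r["produced"]
        scrap[r["machine"]] += r["scrap"]
    return {m: scrap[m] / produced[m] for m in produced}
```

Aggregating before modeling keeps the per-machine signal while shrinking large-scale raw readings to a manageable size.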

 

Must have worked with these technologies:

· 5+ years of experience working with React + TypeScript and Node.js at a production level

· Python, pandas, high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI)

· Databases: relational DBs like PostgreSQL, NoSQL DBs like MongoDB or DynamoDB, vector databases, graph DBs

· OS: Linux flavors like Ubuntu, Debian

· Source control and CI/CD

· Software fundamentals: excellent command of algorithms and data structures

· Software design and architecture: OOP, design patterns, microservices, monolithic architectures, Domain-Driven Design

· Containers: Docker and Kubernetes

· Cloud: fundamentals of AWS such as S3 buckets, EC2, IAM, security groups


Benefits and Perks:

· Be part of the product which is transforming the manufacturing landscape with AI

· Culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.

· Progressive leave policy for effective work-life balance.

· Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.

· Multicultural peer groups and supportive workplace policies. 

· Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.

 

 

 

What is it like to work for a startup?

Working for TVARIT (deep-tech German IT Startup) can offer you a unique blend of innovation, collaboration, and growth opportunities. But it's essential to approach it with a willingness to adapt and thrive in a dynamic environment.

 

If this position sparked your interest, do apply today!
