50+ Python Jobs in Bangalore (Bengaluru) | Python Job openings in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Build AI Systems That Change How Industries Operate
Tailored AI is not just another tech company. We’re building the McKinsey of AI systems: a new kind of firm, made up of engineers who understand business deeply and use AI as a force multiplier.
As an SDE 2, you’ll lead and own the engineering for an entire product track, often working directly with clients and stakeholders. You’ll be the architect, the executor, and the problem-solver-in-chief. You’ll take vague problem statements, turn them into elegant solutions, and bring them to life in production.
What You’ll Do
- Architect and build AI-powered software solutions from scratch
- Own a full engineering track—backend, infra, integrations, and LLM workflows
- Interface with customers to align on specs, iterate fast, and deploy with confidence
- Mentor SDE 1s and Interns, conduct code reviews, and guide engineering quality
- Stay on top of AI trends, contribute to internal tooling and shared best practices
What You’ll Gain
- Leadership opportunities and fast progression to Senior SDE roles
- Deep knowledge of how AI is transforming industries while actually building it
- High ownership, zero bureaucracy, and direct influence on product direction
- Exposure to multi-agent AI systems, enterprise integrations, and scalable infra
Who You Are
- 2–3 years of strong backend engineering experience
- Proven track record of owning software modules and delivering in production
- Skilled in Python, Django/FastAPI, Postgres, AWS
- Exposure to system design and performance optimization
- Interest in AI tools like Langchain, OpenAI, vector DBs, etc.
- Strong analytical and communication skills
Tech Stack You’ll Work With
- Python, Django, FastAPI
- Postgres, Redis, S3
- EC2, Lambda, Cloudwatch
- Langchain, LLM APIs, Vector DBs
- REST APIs, Microservices, GitHub Actions
Some Real Problems You Might Work On
- Building a multi-agent career coaching assistant that guides users and automates job hunting
- Deploying a chatbot that generates employee performance reviews on-demand from HR data
- Designing an LLM pipeline to help Indian lawyers access precedents, statutes, and case law in seconds
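For illustration, the multi-agent pattern behind problems like these can be sketched in a few lines of plain Python. Every class and keyword here is hypothetical, not Tailored AI's actual code; a real system would route to LLM-backed agents rather than keyword matchers.

```python
# Minimal sketch of routing a user request to one of several "agents".
# All names are illustrative; production agents would call LLM APIs.

class Agent:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords

    def can_handle(self, query):
        # Toy heuristic: a real router might use an LLM or a classifier.
        return any(k in query.lower() for k in self.keywords)

    def run(self, query):
        return f"[{self.name}] handling: {query}"

class Router:
    def __init__(self, agents, fallback):
        self.agents = agents
        self.fallback = fallback

    def dispatch(self, query):
        for agent in self.agents:
            if agent.can_handle(query):
                return agent.run(query)
        return self.fallback.run(query)

router = Router(
    agents=[
        Agent("resume-coach", ["resume", "cv"]),
        Agent("job-search", ["job", "opening", "apply"]),
    ],
    fallback=Agent("general", []),
)

print(router.dispatch("Review my resume please"))  # → [resume-coach] handling: ...
```

The fallback agent guarantees every query gets a response even when no specialist matches, which is the usual safety net in multi-agent dispatch.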
Interview Process
- Screening – A quick call with a Co-Founder to align on fit
- CV + Puzzle + Programming – 1 hour round to gauge problem-solving and fundamentals
- Live Coding – Solve a coding task using Python + docs
- System Design – For SDE 2, a take-home problem and a detailed discussion round
We are hiring SDET 1 / 2 / 3 across levels. This is a high-ownership role where you will help ensure quality across web, mobile, and backend systems in a fast-scaling environment. This is not a traditional QA role. You are expected to think like an owner and take responsibility for quality in production.
Responsibilities:
- Test critical user journeys (booking, assignment, payments, cancellations).
- Perform manual, automation, API, and regression testing.
- Write and maintain test cases and automation for key flows.
- Collaborate closely with engineering and product teams.
- Identify risks early and help ensure smooth, reliable releases.
Requirements:
- Experience in QA / SDET roles (level-based on experience).
- Strong testing fundamentals and problem-solving skills.
- Exposure to automation tools (Selenium / Playwright / Cypress / Appium).
- API testing knowledge (REST, Postman).
- Comfort working in a fast-changing, scaling startup.
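As a sketch of the API-test automation described above, here is a stdlib-only unittest example. `BookingClient` and its endpoint are hypothetical, standing in for a real service under test.

```python
# Sketch of an API-level test using only Python's stdlib unittest.
# BookingClient and the /bookings endpoint are made up for the example.
import unittest
from unittest.mock import MagicMock

class BookingClient:
    """Thin wrapper over an HTTP client for a hypothetical bookings API."""
    def __init__(self, http):
        self.http = http

    def cancel(self, booking_id):
        resp = self.http.post(f"/bookings/{booking_id}/cancel")
        return resp["status"] == "cancelled"

class TestCancellation(unittest.TestCase):
    def test_cancel_returns_true_on_success(self):
        http = MagicMock()
        http.post.return_value = {"status": "cancelled"}
        client = BookingClient(http)
        self.assertTrue(client.cancel("bk-42"))
        # Verify the exact endpoint was hit exactly once.
        http.post.assert_called_once_with("/bookings/bk-42/cancel")
```

Run with `python -m unittest`; mocking the HTTP layer keeps the test fast and deterministic, with real API calls reserved for integration suites.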
Job Title: Python Azure Databricks Developer
Location: Bangalore
Experience: 3-5 yrs
Employment Type: Full-Time
Role Overview
We are looking for an experienced Python Azure Databricks Developer to design, develop, and optimize scalable data pipelines and analytics solutions on Azure cloud. The ideal candidate should have strong expertise in Python, Azure Databricks, and big data technologies.
Key Responsibilities
- Develop and maintain scalable data pipelines using Python and Azure Databricks
- Design and implement ETL/ELT workflows for structured and unstructured data
- Work with PySpark for distributed data processing
- Integrate data from multiple sources including Azure Data Lake, SQL databases, APIs, etc.
- Optimize performance of Spark jobs and Databricks clusters
- Implement data quality checks and monitoring mechanisms
- Collaborate with Data Engineers, Data Scientists, and business teams
- Follow CI/CD and DevOps best practices for deployment
- Ensure security and compliance within Azure environment
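A data-quality check like the one mentioned above might look as follows. In a real Databricks pipeline this logic would run over PySpark DataFrames; plain dicts keep the sketch self-contained, and all field names are illustrative.

```python
# Toy row-level data-quality check; field names are made up.
# In production this would be expressed over PySpark DataFrames.

REQUIRED_FIELDS = ("id", "amount", "event_ts")

def validate_row(row):
    """Return a list of rule violations for one record."""
    errors = []
    for field in REQUIRED_FIELDS:
        if row.get(field) is None:
            errors.append(f"missing:{field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        errors.append("negative:amount")
    return errors

def split_good_bad(rows):
    """Partition records so bad ones can be quarantined, not silently dropped."""
    good, bad = [], []
    for row in rows:
        (bad if validate_row(row) else good).append(row)
    return good, bad
```

Quarantining failing rows (rather than dropping them) is a common choice because it preserves evidence for debugging upstream sources.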
Required Skills
- Strong hands-on experience in Python
- Experience with Azure Databricks
- Good knowledge of PySpark and Spark architecture
- Experience with Azure Data Factory (ADF)
- Knowledge of Azure Data Lake Storage (ADLS)
- SQL proficiency
- Experience in handling large-scale datasets
- Understanding of data warehousing concepts
Preferred Skills
- Experience with Delta Lake
- Knowledge of Azure Synapse Analytics
- Familiarity with CI/CD pipelines (Azure DevOps)
- Understanding of data governance and security in Azure
- Experience in Agile/Scrum methodology

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Lead DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 7-10 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong Lead DevOps / Infrastructure Engineer Profiles.
- Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
- Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
- Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
- Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
- Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
- (Company) – Must be from B2C Product Companies only.
- (Education) – B.E/ B.Tech
Preferred
- Experience working in microservices architecture and event-driven systems.
- Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- (Environment) – Experience working in high-growth startup or large-scale production environments.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is for you.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos
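The 99.99% uptime target above translates into a concrete "error budget". A quick back-of-envelope sketch (the function name is just for illustration):

```python
# What does "uptime above 99.99%" (four nines) actually allow in downtime?
# Useful when sizing SLO error budgets and alerting thresholds.

def downtime_budget_minutes(availability, days=30):
    """Allowed downtime in minutes over a window of `days`."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

# Over a 30-day month, four nines leaves roughly 4.3 minutes of downtime.
print(round(downtime_budget_minutes(0.9999), 1))  # → 4.3
```

Framing availability targets as minutes-per-month tends to make "do what it takes" concrete: at four nines, even a single botched deploy can consume the month's budget.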

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Senior DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 4-7 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong DevOps / Infrastructure Engineer Profiles.
- Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
- Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
- Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Candidate must demonstrate strong expertise in at least one of the following areas: Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
- Candidates must be from B2C Product-based companies only.
- (Education) – BE / B.Tech or equivalent
Preferred
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is for you.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos
Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
Candidate must be from a product-based organization with a startup mindset.
Must be strong in one core backend language: Node.js, Go, Java, or Python.
Deep understanding of distributed systems, caching, high availability, and microservices architecture.
Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
Strong command over system design, data structures, performance tuning, and scalable architecture
Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.
The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform security reviews, Vulnerability Assessments & Penetration Testing for Web, Android, iOS, and API endpoints
● Perform threat modelling to anticipate potential attack vectors and improve security architecture on complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis
on container security, particularly for Docker and Kubernetes.
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders
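As a flavour of the secret-scanning checks mentioned above, here is a toy regex-based scanner. The patterns are illustrative only; real scanners such as gitleaks or trufflehog ship far more robust rule sets and entropy checks.

```python
# Toy secret scanner of the kind wired into CI/CD pipelines.
# Patterns are illustrative, not production-grade rules.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)\b(?:api|secret)[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return the sorted names of all secret patterns found in `text`."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))
```

Running a check like this as a pre-commit hook or CI gate catches leaked credentials before they reach the repository history, where rotation becomes the only remedy.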
Hiring for Senior Data Engineer
Exp : 7 - 12 yrs
Edu : BE/B.Tech
Work Location : Bengaluru / Hyderabad ( Hybrid )
Notice Period : Immediate - 15 days
Skills :
Python (Strong),
Advanced SQL,
Airflow /Spark/Kafka ,
Cloud (AWS/Azure/GCP)
Modern DW (Snowflake/Databricks)
Data Pipeline ETL/ELT
JOB DESCRIPTION:
Location: Pune, Mumbai, Bangalore
Mode of Work : 3 days from Office
* Python : Strong expertise in data workflows and automation
* Pandas: For detailed data analysis and validation
* SQL: Querying and performing operations on Delta tables
* AWS Cloud: Compute and storage services
* OOPS concepts
Strong Senior Backend Engineer profiles
Mandatory (Experience 1) – Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
Mandatory (Experience 2) – Must have strong backend development experience using one or more frameworks (FastAPI / Django (Python), Spring (Java), Express (Node.js)).
Mandatory (Experience 3) – Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
Mandatory (Experience 4) – Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
Mandatory (Domain) – Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Mandatory (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS, etc.) are highly preferred
We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, and real-time UI for mission-critical systems such as UAV ground control stations and cockpit displays.
Key Responsibilities
- Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
- Translate system requirements into responsive, interactive, and user-friendly interfaces.
- Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
- Collaborate with aerospace engineers to ensure compliance with DO-178C, or MIL-STD guidelines where applicable.
- Optimise application performance for low-latency visualisation in mission-critical environments.
- Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
- Write clean, testable, and maintainable code while adhering to aerospace software standards.
- Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
- Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.
Required Qualifications
- Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
- 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
- Strong proficiency in C++ (11/14/17) and object-oriented programming.
- Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
- Knowledge of multithreading, performance optimisation, and memory management.
- Familiarity with aerospace/automotive domain software practices or mission-critical systems.
- Good understanding of UX principles for operator consoles and mission planning systems.
- Strong problem-solving, debugging, and communication skills.
Desirable Skills
- Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
- Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
- Exposure to DO-178C or aerospace software compliance.
- Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
- Experience with Linux and cross-platform development (Windows/Linux).
- Scripting knowledge in Python for tooling and automation.
- Background in defence, aerospace, automotive or embedded systems domain.
What We Offer
- Opportunity to work on cutting-edge aerospace and defence technologies.
- Collaborative and innovation-driven work culture.
- Exposure to real-world avionics and mission systems.
- Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.
Title: Quantitative Developer
Location : Mumbai
Candidates preferred with Master's
Who We Are
At Dolat Capital, we are a collective of traders, puzzle solvers, and tech enthusiasts passionate about decoding the intricacies of financial markets. From navigating volatile trading conditions with precision to continuously refining cutting-edge technologies and quantitative strategies, our work thrives at the intersection of finance and engineering.
We operate a robust, ultra-low latency infrastructure built for market-making and active trading across Equities, Futures, and Options—with some of the highest fill rates in the industry. If you're excited by technology, trading, and critical thinking, this is the place to evolve your skills into world-class capabilities.
What You Will Do
This role offers a unique opportunity to work across both quantitative development and high frequency trading. You'll engineer trading systems, design and implement algorithmic strategies, and directly participate in live trading execution and strategy enhancement.
1. Quantitative Strategy & Trading Execution
- Design, implement, and optimize quantitative strategies for trading derivatives, index options, and ETFs
- Trade across options, equities, and futures, using proprietary HFT platforms
- Monitor and manage PnL performance, targeting Sharpe ratios of 6+
- Stay proactive in identifying market opportunities and inefficiencies in real-time HFT environments
- Analyze market behavior, particularly in APAC indices, to adjust models and positions dynamically
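The Sharpe ratio targets above are usually computed in annualized form from a return series. A minimal stdlib sketch (the function signature is illustrative, not Dolat's internal tooling):

```python
# Annualized Sharpe ratio from daily returns; a standard textbook form,
# shown here only to make the "Sharpe ratios of 6+" target concrete.
import math

def sharpe_ratio(daily_returns, risk_free_daily=0.0, trading_days=252):
    """Annualized Sharpe ratio from a list of daily returns."""
    n = len(daily_returns)
    excess = [r - risk_free_daily for r in daily_returns]
    mean = sum(excess) / n
    # Sample standard deviation (n - 1 denominator).
    var = sum((r - mean) ** 2 for r in excess) / (n - 1)
    std = math.sqrt(var)
    return (mean / std) * math.sqrt(trading_days)
```

The intuition: the ratio rewards consistent returns and penalizes volatility, so a steady small edge scores higher than an erratic large one.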
2. Trading Systems Development
- Build and enhance low-latency, high-throughput trading systems
- Develop tools to simulate trading strategies and access historical market data
- Design performance-optimized data structures and algorithms for fast execution
- Implement real-time risk management and performance tracking systems
3. Algorithmic and Quantitative Analysis
- Collaborate with researchers and traders to integrate strategies into live environments
- Use statistical methods and data-driven analysis to validate and refine models
- Work with large-scale HFT tick data using Python / C++
4. AI/ML Integration
- Develop and train AI/ML models for market prediction, signal detection, and strategy enhancement
- Analyze large datasets to detect patterns and alpha signals
5. System & Network Optimization
- Optimize distributed and concurrent systems for high-transaction throughput
- Enhance platform performance through network and systems programming
- Utilize deep knowledge of TCP/UDP and network protocols
6. Collaboration & Mentorship
- Collaborate cross-functionally with traders, engineers, and data scientists
- Represent Dolat in campus recruitment and industry events as a technical mentor
What We Are Looking For:
- Strong foundation in data structures, algorithms, and object-oriented programming (C++).
- Experience with AI/ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Hands-on experience in systems programming within a Linux environment.
- Proficient, hands-on programming in Python / C++
- Familiarity with distributed computing and high-concurrency systems.
- Knowledge of network programming, including TCP/UDP protocols.
- Strong analytical and problem-solving skills.
- A passion for technology-driven solutions in the financial markets.
About Us:
We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.
Experience Range: 4-10 Years
Role: Full Stack Developer
Duties:
As Full Stack Engineer, you will work in small teams in a highly collaborative way, use the latest technologies and enjoy seeing the direct impact from your work. Our highly skilled system architects and development managers configure software packages and build custom applications, creating the foundation for rapid and cost-effective implementation of systems that maximize value from day one. Our development teams are small, flexible and employ agile methodologies to quickly provide our consultants with the solutions they need. We combine the latest open source technologies together with traditional Enterprise software products.
The Role:
We create both rapid prototypes, usually in 2 to 3 weeks, as well as full-scale applications typically within 2 to 3 months, by working collaboratively and iteratively through design and development to deliver fully functioning web-based and mobile applications that meet business goals. Our Front-End Developers contribute to the architecture across the technology stack, from database to native apps.
Skills:
Minimum of 5–9 years of experience, with a proven record of hands-on software development in at least one of the following languages: Java, C#, C/C++, Python, JavaScript, Ruby, plus modern frontend proficiency in React and TypeScript. Demonstrated ownership of delivering end-to-end solutions (from design through production support), with strong proactivity in identifying opportunities, anticipating risks, and driving improvements without waiting for direction.
Significant experience designing, implementing, and operating Web Services and APIs (REST, SOAP, RPC, RMI) including API monitoring/observability and performance tuning. Solid understanding of network communication protocols (HTTP, TCP/IP, UDP, SMTP, DNS) and distributed system behaviors.
Capable of applying best coding practices, design patterns, and evaluating tradeoffs in complex, microservices-based architectures. Well versed in cloud computing (AWS), automated testing, CI/CD, and DevOps tooling; comfortable owning reliability, scalability, and operational excellence. Bonus: hands-on knowledge of Terraform (infrastructure as code).
Experience with relational data stores (MySQL, SQL Server, Oracle) and non-relational technologies, with strong proficiency in MongoDB (schema design, indexing, performance optimization), plus exposure to Elasticsearch, Cassandra, and related ecosystems. Strong professional experience with frameworks such as Node.js, AngularJS, Spring, Guice, and expertise building mobile, responsive/adaptive applications.
First-hand understanding of Agile development methodologies, with a commitment to engineering excellence (e.g., DRY, TDD, CI) and pragmatic delivery.
Non-Technical: First and foremost, passionate about technology, especially AI and emerging/disruptive technologies, and excited about translating innovation into real product impact. Strong command of English (verbal and written), excellent interpersonal skills, and a highly collaborative mindset, able to partner effectively across engineering, product, design, and stakeholders. Sound problem-solving ability to quickly process complex information and communicate it clearly and simply. Demonstrated leadership/mentorship, accountability, and a self-starter attitude suited to environments that foster entrepreneurial thinking.
What We Offer
- Professional Development and Mentorship.
- Hybrid work mode with remote friendly workplace. (6 times in a row Great Place To Work Certified).
- Health and Family Insurance.
- 40+ Leaves per year along with maternity & paternity leaves.
- Wellness, meditation and Counselling sessions.
Job Description: Data Analyst Intern
Location: On-site, Bangalore
Duration: 6 months (Full-time)
About us:
- Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
- Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making). As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What we offer:
- Join our dynamic startup team and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit features analytics, collections, and portfolio management.
- The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
- This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment.
- We believe that the freedom and accountability to make decisions in analytics and technology brings out the best in you and helps us build the best for the company.
- This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What we look for:
- We are looking for individuals with a strong analytical mindset, high levels of initiative / ownership, ability to drive tasks independently, clear communication and comfort working across teams.
- We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team.
- We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems.
Key Responsibilities:
- Conduct analytical deep-dives such as funnel analysis, cohort tracking, branch-wise performance reviews, TAT analysis, portfolio diagnostics, and credit risk analytics that lead to clear actions.
- Work closely with stakeholders to convert business questions into measurable analyses and clearly communicated outputs.
- Support digital underwriting initiatives, including assisting in the development and analysis of underwriting APIs that enable decisioning on borrower eligibility (“whom to lend”) and exposure sizing (“how much to lend”).
- Develop and maintain periodic MIS and KPI reporting for key business functions (e.g., pipeline, disbursals, TAT, conversion, collections performance, portfolio trends).
- Use Python (pandas, numpy) to clean, transform, and analyse datasets; automate recurring reports and data workflows.
- Perform basic scripting to support data validation, extraction, and lightweight automation.
Required Skills and Qualifications:
- Strong proficiency in Excel, including pivots, lookup functions, data cleaning, and structured analysis.
- Strong working knowledge of SQL, including joins, aggregations, CTEs, and window functions.
- Proficiency in Python for data analysis (pandas, numpy); ability to write clean, maintainable scripts/notebooks.
- Strong logical reasoning and attention to detail, including the ability to identify errors and validate results rigorously.
- Ability to work with ambiguous requirements and imperfect datasets while maintaining output quality.
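The SQL skills listed above (aggregations, CTEs, window functions) can be exercised end-to-end with Python's built-in sqlite3, as in this self-contained sketch. The table and column names are made up for the example.

```python
# CTE + window function demo using Python's bundled sqlite3 engine.
# Hypothetical branch-wise disbursal table, in the spirit of the MIS
# reporting described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE disbursals (branch TEXT, amount REAL);
    INSERT INTO disbursals VALUES
        ('BLR', 100), ('BLR', 300), ('HYD', 200), ('HYD', 50);
""")

# CTE aggregates per branch; the window function ranks branches by volume.
rows = conn.execute("""
    WITH totals AS (
        SELECT branch, SUM(amount) AS total
        FROM disbursals
        GROUP BY branch
    )
    SELECT branch, total,
           RANK() OVER (ORDER BY total DESC) AS rnk
    FROM totals
    ORDER BY rnk
""").fetchall()

print(rows)  # → [('BLR', 400.0, 1), ('HYD', 250.0, 2)]
```

Because sqlite3 ships with Python, this is a convenient way to practice joins, CTEs, and window functions without standing up a database server.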
Preferred (Good to Have):
- REST APIs: A fundamental understanding of APIs and previous experience or projects related to API development/integrations.
- Familiarity with basic AWS tools/services (S3, Lambda, EC2, Glue jobs).
- Experience with Git and basic engineering practices.
- Any experience with the lending/finance industry.
🚀 Job Title : Backend Engineer (Go / Python / Java)
Experience : 3+ Years
Location : Bangalore (Client Location – Work From Office)
Notice Period : Immediate to 15 Days
Open Positions : 4
Working Days : 6 Days a Week
🧠 Job Summary :
We are looking for a highly skilled Backend Engineer to build scalable, reliable, and high-performance systems in a fast-paced product environment.
You will own large features end-to-end — from design and development to deployment and monitoring — while collaborating closely with product, frontend, and infrastructure teams.
This role requires strong backend fundamentals, distributed systems exposure, and a mindset of operational ownership.
⭐ Mandatory Skills :
Strong backend development experience in Go / Python (FastAPI) / Java (Spring Boot) with hands-on expertise in Microservices, REST APIs, PostgreSQL, Redis, Kafka/SQS, AWS/GCP, Docker, Kubernetes, CI/CD, and strong DSA & System Design fundamentals.
🔧 Key Responsibilities :
- Design, develop, test, and deploy backend services end-to-end.
- Build scalable, modular, and production-grade microservices.
- Develop and maintain RESTful APIs.
- Architect reliable distributed systems with performance and fault tolerance in mind.
- Debug complex cross-system production issues.
- Implement secure development practices (authentication, authorization, data integrity).
- Work with monitoring dashboards, alerts, and performance metrics.
- Participate in code reviews and enforce engineering best practices.
- Contribute to CI/CD pipelines and release processes.
- Collaborate with product, frontend, and DevOps teams.
✅ Required Skills :
- Strong proficiency in Go OR Python (FastAPI) OR Java (Spring Boot).
- Hands-on experience building Microservices-based architectures.
- Strong understanding of REST APIs & distributed systems.
- Experience with PostgreSQL and Redis.
- Exposure to Kafka / SQS or other messaging systems.
- Hands-on experience with AWS or GCP.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD pipelines.
- Strong knowledge of Data Structures & System Design.
- Ability to independently own features and solve ambiguous engineering problems.
⭐ Preferred Background :
- Experience in product-based companies.
- Exposure to high-throughput or event-driven systems.
- Strong focus on code quality, observability, and reliability.
- Comfortable working in high-growth, fast-paced environments.
🧑💻 Interview Process :
- 1 Internal Screening Round
- HR Discussion (Project & Communication Evaluation)
- 3 Technical Rounds with Client
This is a fresh requirement, and interviews will be scheduled immediately.
About TradeLab
TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions. With a proven track record of successful deployments at top-tier brokerages and financial institutions, TradeLab combines scalability, regulatory alignment, and innovation to redefine digital broking and empower clients in the capital markets ecosystem.
Key Responsibilities
• Design, develop, and execute detailed automation & manual test cases based on functional and technical requirements.
• Develop, maintain, and execute automated test scripts using industry-standard tools and frameworks.
• Identify, document, and track software defects, collaborating with developers to ensure timely resolution.
• Conduct regression, integration, performance, and security testing as needed.
• Participate in the planning and review of test strategies, test plans, and test scenarios.
• Ensure comprehensive test coverage and maintain accurate test documentation and reports.
• Integrate automated tests into CI/CD pipelines for continuous quality assurance.
• Collaborate with cross-functional teams to understand product requirements and deliver high-quality releases.
• Participate in code and test case reviews, providing feedback to improve quality standards.
• Stay updated with emerging testing tools, techniques, and best practices.
Must-Have Qualifications
• Proven experience in software testing.
• Strong knowledge of QA methodologies, SDLC, and STLC.
• Proficiency in at least one programming/scripting language used for automation (e.g., Java, Python, JavaScript).
• Experience with automation tools such as Selenium, Appium, or similar.
• Ability to write and execute complex SQL queries for data validation.
• Familiarity with Agile/Scrum methodologies.
• Excellent analytical, problem-solving, and communication skills.
• Experience with bug tracking and test management tools (e.g., JIRA, TestRail).
• Bachelor’s degree in Computer Science, Engineering, or a related field.
Why Join TradeLab?
• Innovative Environment: Join a fast-growing fintech leader at the forefront of transforming the Indian and global brokerage ecosystem with cutting-edge technology.
• Ownership & Impact: Take full ownership of a high-potential territory (Western India) with direct visibility to senior leadership and the opportunity to shape regional growth.
• Cutting-Edge Solutions: Gain hands-on experience with next-generation trading infrastructure, AI-driven platforms, and compliance-focused solutions.
• Growth Opportunities: Thrive in an entrepreneurial role with significant learning potential, professional development, and a steep growth trajectory.
What you will be working on?
- Driving product implementation from conceptualisation to delivery. This would involve planning and breaking down projects, leading architectural discussions and decisions, building high quality documentation and architecture diagrams, and driving the execution end to end.
- Own the development practices, processes, and standards for your team
- Own the technical architecture, drive engineering design, and shoulder critical decisions
- Understand, prioritize and deliver the feature roadmap while chipping away at the technical debt
- Work effectively with a cross-functional team of product managers, designers, developers, and QA
- Own the communication of the team’s progress and perception of the team itself
- Collaborate with the Support team to keep track of and triage technical issues and track them through to resolution
- Collaborate with Talent Acquisition to drive sourcing, screening, interviewing, and recruitment of the right talent for your team
- Continuously improve the productivity of your team by identifying investments in technology, process, and continuous delivery
- Own the morale of your team, unblock them at critical junctures, and break ties in a timely manner
- Own the careers of your team members, deliver regular and timely feedback, represent your team for annual reviews and reward your performers
- You will nurture and grow the team in order to deliver path-breaking solutions, as outlined above, for the business in the coming years
What we are looking for?
- 7+ years of total relevant experience with a minimum of one year of actively managing and owning the delivery of a high-performing engineering team.
- Bachelor's Degree in a technical field
- Ability to work in a very fast-paced environment with a high degree of ambiguity.
- Excellent database knowledge and data modeling skills
- Excellent leadership skills to manage and mentor teams
- Experience designing and implementing distributed systems
- Superior management skills to manage multi-engineer projects and experience in delivering high-quality projects on time
- Track record of individual technical achievement
- Excellent command of CS fundamentals and at least one interpreted language (PHP / Python / RoR)
- Experience developing software in a commercial software product development environment
- Experience leading teams that built software products for scale
- Excellent communication skills, open, collaborative, and proven team player
- Experience working with global customers and experience with agile processes and Serverless Architecture is a plus

This is for one of our reputed entertainment organisations.
Key Responsibilities
· Advanced ML & Deep Learning: Design, develop, and deploy end-to-end Machine Learning models for Content Recommendation Engines, Churn Prediction, and Customer Lifetime Value (CLV).
· Generative AI Implementation: Prototype and integrate GenAI solutions (using LLMs like Gemini/GPT) for automated Metadata Tagging, Script Summarization, or AI-driven Chatbots for viewer engagement.
· Develop and maintain high-scale video processing pipelines using Python, OpenCV, and FFmpeg to automate scene detection, ad-break identification, and visual feature extraction for content enrichment
· Cloud Orchestration: Utilize GCP (Vertex AI, BigQuery, Dataflow) to build scalable data pipelines and manage the full ML lifecycle (MLOps).
· Business Intelligence & Storytelling: Create high-impact, automated dashboards to track KPIs for data-driven decision-making.
· Cross-functional Collaboration: Work closely with Product, Design, Engineering, Content, and Marketing teams to translate "viewership data" into "strategic growth."
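As a rough illustration of the recommendation work listed above, the simplest baseline is item-item co-occurrence over watch histories; the sketch below is hypothetical and stands in for a real collaborative-filtering engine:

```python
from collections import defaultdict

def cooccurrence_recs(histories, seed, k=2):
    """Recommend the titles most often co-watched with `seed`,
    the simplest form of item-item collaborative filtering
    behind content recommendation engines."""
    counts = defaultdict(int)
    for watched in histories:
        if seed in watched:
            for title in watched:
                if title != seed:
                    counts[title] += 1
    # rank by co-watch count, break ties alphabetically
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [title for title, _ in ranked][:k]
```

Real systems replace raw counts with matrix factorization or neural embeddings, but the ranking idea is the same.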
Preferred Qualifications
· Experience in Media/OTT: Prior experience working with large scale data from broadcast channels, videos, streaming platforms or digital ad-tech.
· Education: Master’s/Bachelor’s degree in a quantitative field (Computer Science, Statistics, Mathematics, or Data Science).
· Product Mindset: Ability to not just build a model, but to understand the business implications of the solution.
· Communication: Exceptional ability to explain "Neural Network outputs" to a "Creative Content Producer" in simple terms.
We are looking for a skilled and self-driven Data Engineer to strengthen our data platform and improve overall data quality, reliability, and usability for customers and internal stakeholders. This role is critical to building and maintaining scalable data pipelines, well-structured data models, and analytics-ready systems. The ideal candidate has startup experience, enjoys building systems from scratch, and takes ownership end to end.
Roles and Responsibilities
● Design, build, and maintain scalable data pipelines and ETL workflows to support
analytics and product use cases.
● Develop and manage data warehousing solutions using platforms like Snowflake,
Redshift, or ClickHouse.
● Ensure data quality, consistency, and reliability across all data sources and downstream
systems.
● Collaborate closely with product, analytics, and engineering teams to understand data
requirements.
● Build and optimize data models for reporting, dashboards, and analytics.
● Support and enable BI tools such as Power BI, Tableau, or Metabase.
● Monitor pipelines, troubleshoot issues, and continuously improve performance.
● Document data workflows, schemas, and processes for clarity and maintainability.
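The data-quality responsibility above is often enforced with a validation gate at ingestion; a minimal sketch, with illustrative field names rather than any real schema:

```python
def validate_rows(rows, required):
    """Split incoming records into clean and rejected sets based on the
    presence of required, non-null fields: a minimal data-quality gate
    an ETL pipeline can apply before loading to the warehouse."""
    clean, rejected = [], []
    for row in rows:
        ok = all(row.get(field) is not None for field in required)
        (clean if ok else rejected).append(row)
    return clean, rejected
```

Rejected rows would typically be routed to a dead-letter table with a reason code so upstream issues can be fixed, not silently dropped.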
Skills and Qualifications
● Strong proficiency in SQL
● Experience with data warehousing platforms (e.g., Snowflake, Redshift, ClickHouse)
● Hands-on experience with ETL orchestration tools (e.g., Airflow, dbt, Dagster)
● Experience with dashboarding tools (Power BI, Tableau, Metabase)
● Strong programming skills in Python
● AWS experience is preferred
● Self-starter mindset with startup experience
● Strong problem-solving abilities
● Highly organized with a systems-thinking approach
● Ownership-driven and accountable
● Clear and effective communicator
AI Automation Engineer – Intelligent Systems
(AI Generalist – Automation & Intelligent Systems)
📍 Location: Bengaluru (Onsite)
🏢 Company: Learners Point Academy
📊 Reporting To: Head
🕒 Employment Type: Full-Time
🎯 Role Summary
Learners Point Academy is seeking a hands-on AI Automation Engineer to architect, deploy, and scale intelligent automation systems across Sales, Marketing, Academics, Operations, Finance, and Customer Experience.
🧠 What This Role Requires:
- A Systems Thinker
- A Hands-on Builder
- An Automation Architect
- An AI Deployment Specialist
Core Responsibilities
1️⃣ Operational Workflow Automation
- Automate CRM workflows (Bitrix24 / Zoho or similar)
- Build intelligent lead scoring systems
- Auto-generate proposals from structured CRM inputs
- Deploy WhatsApp automation with tiered logic
- Design cross-functional task routing systems
- Implement automated follow-up sequences
- Build cross-department reporting pipelines
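To make the lead-scoring bullet concrete: an additive rule-based scorer is the usual starting point before any ML model. The rules and weights below are entirely made up for illustration, not real business logic:

```python
def score_lead(lead, rules=None):
    """Score a CRM lead with additive rule-based weights,
    the simplest form of 'intelligent lead scoring' and a
    common baseline before training a predictive model."""
    rules = rules or [
        (lambda l: l.get("visited_pricing_page"), 30),
        (lambda l: l.get("company_size", 0) >= 50, 20),
        (lambda l: l.get("replied_to_email"), 25),
    ]
    return sum(weight for predicate, weight in rules if predicate(lead))
```

In a CRM like Bitrix24 or Zoho the same logic would live in a workflow rule or webhook handler, with the score written back to the lead record.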
2️⃣ AI Agents & Intelligence Systems
- Build internal AI Sales Assistant (copilot model)
- Develop Academic AI Assistant (summaries, grading support)
- Create AI-powered reporting dashboards
- Build centralized AI knowledge base
- Develop customer segmentation intelligence
- Implement predictive closure timeline models
3️⃣ LMS & Assessment Automation
- Design AI-powered quiz generation systems
- Implement auto-grading frameworks
- Integrate Zoom attendance with LMS tracking
- Automate certification workflows
- Build student performance dashboards
- Ensure seamless LMS–CRM synchronization
4️⃣ Revenue & Growth Intelligence
- Develop pipeline scoring engines
- Deploy sales copilot (email drafting, objection handling)
- Build AI-driven pricing optimization tools
- Design churn prediction logic
- Automate ad spend tracking systems
- Create performance intelligence dashboards
5️⃣ AI Architecture & Governance
- Define AI usage SOPs
- Maintain structured prompt libraries
- Document system architecture & workflows
- Ensure scalable, secure system design
- Build reusable frameworks — avoid patchwork automation
🔧 Required Technical Skills
Mandatory:
- Workflow Automation: Zapier / Make / n8n
- CRM Automation (Bitrix24 / Zoho / similar)
- LLM API Integration (OpenAI, Claude, etc.)
- REST APIs & Webhook Integrations
- Python or JavaScript scripting
- Google Workspace Automation
- Business Process Automation Design
Good to Have
- LangChain or AI Agent Frameworks
- Vector Databases & RAG Systems
- WhatsApp Business API Integration
- Workflow Orchestration Tools
- BI Tools (Power BI / Looker)
- LMS Integration Experience
🎓 Qualifications
- Bachelor’s / Master’s in Engineering, Computer Science, AI, or related field
- 3–6 years of experience in AI deployment, automation, or systems integration
- Demonstrated experience implementing automation in business environments
- Portfolio of deployed AI systems (production-grade, not academic-only)
📈 Ideal Candidate Profile
You:
- Think in systems, not scripts
- Understand real-world business workflows
- Have deployed AI agents in production
- Can connect CRM + LMS + Communication tools seamlessly
- Can explain technical architecture clearly to leadership
- Prefer measurable business impact over experimental prototypes
🚫 This Role Is NOT For
- Pure ML researchers
- Academic AI model developers
- Candidates without business automation exposure
- Candidates without real deployment experience
Role & Responsibilities
As a Founding Engineer, you'll join the engineering team during an exciting growth phase, contributing to a platform that handles complex financial operations for B2B companies. You'll work on building scalable systems that automate billing, usage metering, revenue recognition, and financial reporting—directly impacting how businesses manage their revenue operations.
This role is ideal for someone who thrives in a dynamic startup environment where requirements evolve quickly and problems require creative solutions. You'll work on diverse technical challenges, from API development to external integrations, while collaborating with senior engineers, product managers, and customer success teams.
Key Responsibilities
- Build core platform features: Develop robust APIs, services, and integrations that power billing automation and revenue recognition capabilities.
- Work across the full stack: Contribute to backend services and frontend interfaces to ensure seamless user experiences.
- Implement critical integrations: Connect the platform with external systems including CRMs, data warehouses, ERPs, and payment processors.
- Optimize for scale: Design systems that handle complex pricing models, high-volume usage data, and real-time financial calculations.
- Drive quality and best practices: Write clean, maintainable code and participate in code reviews and architectural discussions.
- Solve complex problems: Debug issues across the stack and collaborate with cross-functional teams to address evolving client needs.
The Impact You'll Make
- Power business growth: Enable fast-growing B2B companies to scale billing and revenue operations efficiently.
- Build critical financial infrastructure: Contribute to systems handling high-value transactions with accuracy and compliance.
- Shape product direction: Join during a scaling phase where your contributions directly impact product evolution and customer success.
- Accelerate your expertise: Gain deep exposure to financial systems, B2B SaaS operations, and enterprise-grade software development.
- Drive the future of B2B commerce: Help build infrastructure supporting next-generation pricing models, from usage-based to value-based billing.
Ideal Candidate Profile
Experience
- 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems.
- Strong backend development experience using one or more frameworks: FastAPI / Django (Python), Spring (Java), or Express (Node.js).
- Deep understanding of relevant libraries, tools, and best practices within the chosen backend framework.
- Strong experience with databases (SQL & NoSQL), including efficient data modeling and performance optimization.
- Proven experience designing, building, and maintaining APIs, services, and backend systems with solid system design and clean code practices.
Domain
- Experience with financial systems, billing platforms, or fintech applications is highly preferred.
Company Background
- Experience working in product companies or startups (preferably Series A to Series D).
Education
- Candidates from Tier 1 engineering institutes (IITs, BITS, etc.) are highly preferred.
Hiring: Full Stack Developer (Next.js + Python/Node.js) – 4+ Years Experience
We are looking for a skilled Full Stack Developer with 4+ years of experience in building scalable web applications using Next.js and either Python or Node.js on the backend.
🔹 Key Responsibilities:
- Develop and maintain web applications using Next.js (React framework)
- Build and manage RESTful APIs using Node.js (Express/NestJS) or Python (Django/FastAPI)
- Work on end-to-end feature development (frontend + backend)
- Integrate third-party APIs and services
- Optimize applications for performance and scalability
- Collaborate with cross-functional teams in an agile environment
🔹 Required Skills:
- 4+ years of full-stack development experience
- Strong hands-on experience with Next.js
- Backend expertise in Node.js or Python
About CloudThat:-
At CloudThat, we are driven by our mission to empower professionals and businesses to harness the full potential of cloud technologies. As a leader in cloud training and consulting services in India, our core values guide every decision we make and every customer interaction we have.
Role Overview:-
We are looking for a passionate and experienced Technical Trainer to join our expert team and help drive knowledge adoption across our customers, partners, and internal teams.
Key Responsibilities:
• Deliver high-quality, engaging technical training sessions both in-person and virtually to customers, partners, and internal teams.
• Design and develop training content, labs, and assessments based on business and technology requirements.
• Collaborate with internal and external SMEs to draft course proposals aligned with customer needs and current market trends.
• Assist in training and onboarding of other trainers and subject matter experts to ensure quality delivery of training programs.
• Create immersive lab-based sessions using diagrams, real-world scenarios, videos, and interactive exercises.
• Develop instructor guides, certification frameworks, learner assessments, and delivery aids to support end-to-end training delivery.
• Integrate hands-on project-based learning into courses to simulate practical environments and deepen understanding.
• Support the interpersonal and facilitation aspects of training, fostering an inclusive, engaging, and productive learning environment.
Skills & Qualifications:
• Experience developing content for professional certifications or enterprise skilling programs.
• Familiarity with emerging technology areas such as cloud computing, AI/ML, DevOps, or data engineering.
Technical Competencies:
- Expertise in languages like C, C++, Python, Java
- Understanding of algorithms and data structures
- Expertise on SQL
Or apply directly: https://cloudthat.keka.com/careers/jobdetails/95441
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
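The RAG responsibility above boils down to retrieve-then-generate. The retrieval half can be sketched with bag-of-words cosine similarity standing in for real embeddings and a vector database such as Pinecone, FAISS, or Weaviate (all names and documents below are illustrative):

```python
import math
from collections import Counter

def _vec(text):
    # toy "embedding": a bag-of-words term-count vector
    return Counter(text.lower().split())

def _cos(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Rank documents by similarity to the query: the retrieval step of
    retrieval-augmented generation. A production system would swap the
    toy vectors for dense embeddings and an ANN index."""
    scored = sorted(corpus, key=lambda d: _cos(_vec(query), _vec(d)), reverse=True)
    return scored[:k]
```

The top-k passages would then be injected into the LLM prompt, which is what frameworks like LangChain orchestrate.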
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
Python API Connector Developer
Peliqan is a highly scalable and secure cloud solution for data collaboration in the modern data stack. We are on a mission to reinvent how data is shared in companies.
We are looking for a Python Developer to join our team and help us build robust, reliable connectors for existing REST APIs. The ideal candidate has a strong background in consuming REST and GraphQL APIs, using Postman, and general Python development.
In this role, you will be responsible for developing data connectors: Python wrappers that consume existing APIs from various data sources such as SaaS CRM systems, accounting software, ERP systems, eCommerce platforms, etc. You will become an expert in handling APIs to perform ETL data extraction from these sources into the Peliqan data warehouse.
You will also maintain documentation and provide technical support to our internal and external clients. Additionally, you will collaborate with other teams such as Product, Engineering, and Design to ensure the successful implementation of our connectors. If you have an eye for detail and a passion for technology, we want to hear from you!
Your responsibilities
- As a Python API Connector Developer at Peliqan.io, you are responsible for developing high-quality ETL connectors to extract data from SaaS data sources by consuming REST APIs and GraphQL APIs.
- Develop and maintain Connector documentation, including code samples and usage guidelines
- Troubleshoot and debug complex technical problems related to APIs and connectors
- Collaborate with other developers to ensure the quality and performance of our connectors
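Paging is at the heart of most ETL connectors; a minimal cursor-based drain loop in Python, where `fetch_page` is a stand-in for one authenticated HTTP call (this is an illustration, not Peliqan's actual connector API):

```python
def fetch_all(fetch_page):
    """Drain a paginated REST endpoint.

    `fetch_page(cursor)` performs one API call and returns
    (records, next_cursor), with next_cursor=None on the last page:
    the cursor-based paging pattern many SaaS APIs use."""
    records, cursor = [], None
    while True:
        batch, cursor = fetch_page(cursor)
        records.extend(batch)
        if cursor is None:
            return records
```

A real connector wraps the same loop with OAuth2 token refresh, rate-limit backoff on 429 responses, and incremental state so only new records are extracted.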
What makes you a great candidate
- Expert knowledge of technologies such as RESTful APIs, GraphQL, JSON, XML, OAuth2 flows, HTTP, SSL/TLS, Webhooks, API authentication methods, rate limiting, paging strategies in APIs, headers, response codes
- Basic knowledge of web services technologies, including SOAP and WSDL
- Proficiency in database technologies such as MySQL, MongoDB, etc.
- Experience with designing, building, and maintaining public and private APIs
- Excellent understanding of REST APIs (consuming APIs in Postman)
- Coding in Python
- Experienced in working with JSON, JSON parsing in Python and JSON path
- Good understanding of SaaS software, CRM, Marketing Automation, Accounting, and ERP systems (as they will be the main source of data)
- Analytical mindset: you are capable of discussing technical requirements with customers and implementing these in the Peliqan platform
- Customer-driven, great communication skills
- You are motivated, proactive, you have eyes for details
About RapidClaims
RapidClaims is a leader in AI-driven revenue cycle management, transforming medical coding and revenue operations with cutting-edge technology. The company has raised $11 million in total funding from top investors, including Accel and Together Fund.
Join us as we scale a cloud-native platform that runs transformer-based Large Language Models rigorously fine-tuned on millions of clinical notes and claims every month. You’ll engineer autonomous coding pipelines that parse ICD-10-CM, CPT® and HCPCS at lightning speed, deliver reimbursement insights with sub-second latency and >99% accuracy, and tackle the deep-domain challenges that make clinical AI one of the hardest—and most rewarding—problems in tech.
Engineering Manager – Job Overview
We are looking for an Engineering Manager who can lead a team of engineers while staying deeply involved in technical decisions. This role requires a strong mix of people leadership, system design expertise, and execution focus to deliver high-quality product features at speed. You will work closely with Product and Leadership to translate requirements into scalable technical solutions and build a high-performance engineering culture.
Key Responsibilities:
● Lead, mentor, and grow a team of software engineers
● Drive end-to-end ownership of product features from design to deployment
● Work closely with Product to translate requirements into technical solutions
● Define architecture and ensure scalability, reliability, and performance
● Establish engineering best practices, code quality, and review standards
● Improve development velocity, sprint planning, and execution discipline
● Hire strong engineering talent and build a solid team
● Create a culture of accountability, ownership, and problem-solving
● Ensure timely releases without compromising quality
● Stay hands-on with critical technical decisions and reviews
Requirements:
● 5+ years of software engineering experience, with 2+ years in team leadership
● Strong experience in building and scaling backend systems and APIs
● Experience working in a product/startup environment
● Good understanding of system design, architecture, and databases
● Ability to manage engineers while remaining technically credible
● Strong problem-solving and decision-making skills
● Experience working closely with Product teams
● High ownership mindset and bias for action
Good to Have
● Experience in healthcare tech / automation / RPA / AI tools
● Experience building internal tools and workflow systems
● Exposure to cloud infrastructure (AWS/GCP/Azure)
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
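As a sketch of the large-file step above: the pre-migration audit can be as simple as walking the synced workspace for files over GitHub's 100 MB per-file limit and flagging them for Git LFS. The helper below is our own illustration, not part of any migration tool:

```python
import os

GITHUB_LIMIT = 100 * 1024 * 1024  # GitHub rejects individual files over 100 MB

def find_lfs_candidates(root, limit=GITHUB_LIMIT):
    """Walk a working copy and list files that must be moved to
    Git LFS before the history can be pushed to GitHub."""
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                candidates.append(path)
    return sorted(candidates)
```

The resulting paths feed `git lfs track` patterns, which must be in place before the converted history is pushed, since rewriting pushed history later is far more disruptive.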
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Hi,
Greetings from Ampera!
We are looking for a Data Scientist with strong Python and forecasting experience.
Title : Data Scientist – Python & Forecasting
Experience : 4 to 7 Yrs
Location : Chennai/Bengaluru
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Working hours : 09:00 a.m. to 06:00 p.m.
Workdays : Mon - Fri
Job Description:
We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.
Key Responsibilities
- Develop and implement forecasting models (time-series and machine learning based).
- Perform exploratory data analysis (EDA), feature engineering, and model validation.
- Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
- Design, train, validate, and optimize machine learning models for real-world business use cases.
- Apply appropriate ML algorithms based on business problems and data characteristics
- Write clean, modular, and production-ready Python code.
- Work extensively with Python packages and libraries for data processing and modelling.
- Collaborate with Data Engineers and stakeholders to deploy models into production.
- Monitor model performance and improve accuracy through continuous tuning.
- Document methodologies, assumptions, and results clearly for business teams.
Technical Skills Required:
Programming
- Strong proficiency in Python
- Experience with Pandas, NumPy, Scikit-learn
Forecasting & Modelling
- Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
- Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
- Understanding of seasonality, trend decomposition, and statistical modeling
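Before reaching for ARIMA or Prophet, a seasonal-naive baseline (repeat the value from one season ago) is the standard yardstick any candidate model must beat. A dependency-free sketch, assuming a fixed known seasonal period, with MAPE as the accuracy metric:

```python
def seasonal_naive_forecast(history, period, horizon):
    """Forecast by repeating the most recent full seasonal cycle.

    history: observed values, oldest first (len(history) >= period)
    period:  season length, e.g. 7 for daily data with weekly seasonality
    horizon: number of future steps to predict
    """
    if len(history) < period:
        raise ValueError("need at least one full season of history")
    last_season = history[-period:]
    return [last_season[i % period] for i in range(horizon)]


def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast accuracy metric."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(errors) / len(errors)
```

If an ARIMA or gradient-boosted model cannot beat this baseline's MAPE on a held-out window, the added complexity is not paying for itself.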
Data & Deployment
- Experience handling structured and large datasets
- SQL proficiency
- Exposure to model deployment (API-based deployment preferred)
- Knowledge of MLOps concepts is an added advantage
Tools (Preferred)
- TensorFlow / PyTorch (optional)
- Airflow / MLflow
- Cloud platforms (AWS / Azure / GCP)
Educational Qualification
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.
Key Competencies
- Strong analytical and problem-solving skills
- Ability to communicate insights to technical and non-technical stakeholders
- Experience working in agile or fast-paced environments
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Strong Backend Engineer Profiles
Mandatory (Experience 1) – Must have 2+ years of hands-on Backend Engineering experience building production-grade systems in a B2B SaaS or product environment
Mandatory (Experience 2) – Must have strong backend development experience using at least one backend framework such as FastAPI / Django (Python), Spring (Java), or Express (Node.js)
Mandatory (Experience 3) – Must have a solid understanding of backend fundamentals, including API development, service-oriented architecture, data structures, algorithms, and clean coding practices
Mandatory (Experience 4) – Must have strong experience working with databases (SQL and/or NoSQL), including efficient data modeling and query optimization
Mandatory (Experience 5) – Must have experience designing, building, and maintaining APIs and backend services, including integrations with external systems (CRMs, payment gateways, ERPs, data platforms, etc.)
Mandatory (Experience 6) – Must have experience working in cloud-based environments (AWS / GCP / Azure) and be familiar with Git-based collaborative development workflows
Mandatory (Domain) – Experience with financial systems, billing platforms, fintech applications, or SaaS revenue-related workflows is highly preferred (fintech background is a strong plus)
Mandatory (Company) – Must have worked in product companies / startups, preferably Series A to Series D
Preferred
Preferred (Education) – Candidates from Tier-1 engineering institutes (IITs, BITS) are highly preferred
Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
Candidate must be from a product-based organization with a startup mindset.
Must be strong in one core backend language: Node.js, Go, Java, or Python.
Deep understanding of distributed systems, caching, high availability, and microservices architecture.
Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
Strong command over system design, data structures, performance tuning, and scalable architecture
Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
10–14 years of experience in software engineering, with strong emphasis on backend and data architecture for large-scale systems.
Proven experience designing and deploying distributed, event-driven systems and streaming data pipelines.
Expert proficiency in Go/Python, including experience with microservices, APIs, and concurrency models.
Deep understanding of data flows across multi-sensor and multi-modal sources, including ingestion, transformation, and synchronization.
Experience building real-time APIs for interactive web applications and data-heavy workflows.
Familiarity with frontend ecosystems (React, TypeScript) and rendering frameworks leveraging WebGL/WebGPU.
Hands-on experience with CI/CD, Kubernetes, Docker, and Infrastructure as Code (Terraform, Helm).
Job Title: Backend Engineer – Python (AI Backend)
Location: Bangalore, India
Experience: 1–2 Years
Job Description
We are looking for a Backend Engineer with strong Python skills and hands-on exposure to AI-based applications. The candidate will be responsible for developing scalable backend services and supporting AI-powered systems such as LLM integrations, AI agents, and RAG pipelines.
Key Responsibilities
- Develop and maintain backend services using Python (FastAPI preferred)
- Build and manage RESTful APIs for frontend and AI integrations
- Support development of AI-driven features (LLMs, RAG systems, AI agents)
- Design and maintain both monolithic and microservices architectures
- Optimize database performance and backend scalability
- Work with DevOps for Docker-based deployments
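The retrieval half of a RAG pipeline reduces to nearest-neighbour search over embeddings. A dependency-free sketch of that step, assuming embeddings have already been computed (a vector database such as Pinecone or FAISS replaces the linear scan at scale; the document IDs and vectors below are illustrative):

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(query_vec, index, top_k=2):
    """Return the top_k documents most similar to the query embedding.

    index: list of (doc_id, embedding) pairs; in production this lookup
    is served by a vector DB rather than a linear scan.
    """
    scored = [(doc_id, cosine_similarity(query_vec, emb)) for doc_id, emb in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

The retrieved documents are then injected into the LLM prompt as grounding context before generation.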
Required Skills
- Strong experience in Python backend development
- Hands-on experience with FastAPI / Django / Flask
- Knowledge of REST APIs and microservices
- Experience with AI applications (LLM usage, prompt engineering basics)
- Database knowledge: MongoDB, PostgreSQL or MySQL
- Experience with Docker and basic cloud platforms (AWS/GCP/Azure)
- Hands-on experience with Redis for caching and in-memory storage
Good to Have
- Experience integrating payment gateways (Razorpay, Stripe, PayU, etc.)
- Exposure to event-driven architectures using RabbitMQ, Kafka, or Redis Streams
- Kubernetes
- Understanding of model fine-tuning concepts
Job Title: Senior Full Stack Developer
Experience: 5 to 7 years. Minimum 5 years of full-stack development experience is mandatory.
Location: Bangalore (Onsite)
About ProductNova:
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
What We Do
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
● Product discovery and problem definition
● User research and product strategy
● Experience design and rapid prototyping
● AI-enabled engineering, testing, and platform architecture
● Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Senior Full Stack Developer with strong expertise in frontend development using React JS, backend microservices architecture in C#/Python, and hands-on experience with AI-enabled development tools. The ideal candidate should be comfortable working in an onsite environment and collaborating closely with cross-functional teams to deliver scalable, high-quality applications.
Key Responsibilities:
• Develop and maintain responsive, high-performance frontend applications using React JS
• Design, develop, and maintain microservices-based backend systems using C#, Python
• Experienced in building Data Layer and Databases using MS SQL, Cosmos DB, PostgreSQL
• Leverage AI-assisted development tools (Cursor / GitHub Copilot) to improve coding efficiency and quality
• Collaborate with product managers, designers, and backend teams to deliver end-to-end solutions
• Write clean, reusable, and well-documented code following best practices
• Participate in code reviews, debugging, and performance optimization
• Ensure application security, scalability, and reliability
Mandatory Technical Skills:
• Strong hands-on experience in React JS (Frontend Coding) – 3+ yrs
• Solid experience in Microservices Architecture C#, Python – 3+ yrs
• Experience building Data Layer and Databases using MS SQL – 2+ yrs
• Practical exposure to AI-enabled development using Cursor or GitHub Copilot – 1yr
• Good understanding of REST APIs and system integration
• Experience with version control systems (Git), ADO
Good to Have:
• Experience with cloud platforms (Azure)
• Knowledge of containerization tools like Docker and Kubernetes
• Exposure to CI/CD pipelines
• Understanding of Agile/Scrum methodologies
Why Join ProductNova
● Work on real-world, high-impact products used at scale
● Collaborate with experienced product, engineering, and AI leaders
● Solve complex problems with ownership and autonomy
● Build AI-first systems, not experimental prototypes
● Grow rapidly in a culture that values clarity, execution, and learning
If you are passionate about building meaningful products, solving hard problems, and shaping the future of AI-driven software, ProductNova offers the environment and challenges to grow your career.
ROLE: AI/ML Senior Developer
Exp: 5 to 8 Years
Location: Bangalore (Onsite)
Role Overview:
We are seeking an experienced AI / ML Senior Developer with strong hands-on expertise in large language models (LLMs) and AI-driven application development. The ideal candidate will have practical experience working with GPT and Anthropic models, building and training B2B products powered by AI, and leveraging AI-assisted development tools to deliver scalable and intelligent solutions.
Key Responsibilities:
1. Model Analysis & Optimization
Analyze, customize, and optimize GPT and Anthropic-based models to ensure reliability, scalability, and performance for real-world business use cases.
2. AI Product Design & Development
Design and build AI-powered products, including model training, fine-tuning, evaluation, and performance optimization across development lifecycles.
3. Prompt Engineering & Response Quality
Develop and refine prompt engineering strategies to improve model accuracy, consistency, relevance, and contextual understanding.
4. AI Service Integration
Build, integrate, and deploy AI services into applications using modern development practices, APIs, and scalable architectures.
5. AI-Assisted Development Productivity
Leverage AI-enabled coding tools such as Cursor and GitHub Copilot to accelerate development, improve code quality, and enhance efficiency.
6. Cross-Functional Collaboration
Work closely with product, business, and engineering teams to translate business requirements into effective AI-driven solutions.
7. Model Monitoring & Continuous Improvement
Monitor model performance, analyze outputs, and iteratively improve accuracy, safety, and overall system effectiveness.
Qualifications:
1. Hands-on experience analyzing, developing, fine-tuning, and optimizing GPT and Anthropic-based large language models.
2. Strong expertise in prompt design, experimentation, and optimization to enhance response accuracy and reliability.
3. Proven experience building, training, and deploying chatbots or conversational AI systems.
4. Practical experience using AI-assisted coding tools such as Cursor or GitHub Copilot in production environments.
5. Solid programming experience in Python, with strong problem-solving and development fundamentals.
6. Experience working with embeddings, similarity search, and vector databases for retrieval-augmented generation (RAG).
7. Knowledge of MLOps practices, including model deployment, versioning, monitoring, and lifecycle management.
8. Experience with cloud environments such as Azure, AWS for deploying and managing AI solutions.
9. Experience with APIs, microservices architecture, and system integration for scalable AI applications.
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven AI/ML Senior Developer with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
ROLE - TECH LEAD/ARCHITECT with AI Expertise
Experience: 10–15 Years
Location: Bangalore (Onsite)
Company Type: Product-based | AI B2B SaaS
Role Overview
We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.
Key Responsibilities
• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC2, GDPR, ISO, etc.) across the product
• Take ownership of application security, access controls, and compliance requirements
• Actively contribute hands-on through coding, code reviews, complex feature development, and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and system design
• Work closely with Product Management and Leadership to translate business requirements into technical solutions
Qualifications:
• 10–15 years of overall experience in software engineering and product development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and modern frontend frameworks (React, etc.)
• Experience working with AI/ML teams to deploy and tune ML models in production environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
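A minimal Azure Pipelines YAML definition of the kind described above, with separate build and deploy stages (the trigger branch, image, and script contents are illustrative; the `stages`/`jobs`/`steps` keys are the standard Azure DevOps pipeline schema):

```yaml
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: pip install -r requirements.txt
            displayName: Install dependencies
          - script: pytest --junitxml=test-results.xml
            displayName: Run tests
  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - job: DeployApp
        steps:
          - script: ./deploy.sh
            displayName: Deploy to target environment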
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
Job Summary
We are looking for a skilled Python AI Developer with experience in Flask, AI/LLM integrations, and Model Context Protocol (MCP) to build intelligent, scalable, and production-ready AI-powered applications. You will work on AI agents, tool integrations, and backend APIs that interact with modern LLM platforms like OpenAI/Claude.
Key Responsibilities
- Develop and maintain backend services using Python and Flask/FastAPI
- Build and integrate AI/LLM-based features using OpenAI, Claude, or similar models
- Implement Model Context Protocol (MCP) / tool-function calling frameworks
- Design and develop AI agents that interact with external tools, APIs, and databases
- Build RAG pipelines using vector databases for intelligent retrieval
- Manage context, memory, and conversation state in AI workflows
- Create scalable REST APIs for AI-powered applications
- Optimize performance, security, and reliability of AI services
- Collaborate with product, frontend, and data teams
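At its core, MCP-style tool calling is a registry plus a dispatcher: the model emits a structured call (tool name plus JSON arguments) and the backend routes it to a Python function. A minimal sketch of that dispatch layer (the call format and the `get_order_status` tool are illustrative, not the exact MCP wire format):

```python
import json

TOOLS = {}


def tool(fn):
    """Register a function so the model can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def get_order_status(order_id: str) -> str:
    # Illustrative stand-in for a real database or API lookup.
    return f"order {order_id}: shipped"


def dispatch(tool_call_json):
    """Execute a model-emitted call like {"name": ..., "arguments": {...}}."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise KeyError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])
```

The tool's return value is then fed back to the model as the result of its call, closing the agent loop.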
Required Skills
- Strong Python programming
- Experience with Flask or FastAPI
- Hands-on with OpenAI / GPT / Claude / LLM APIs
- Experience in MCP / Tool Calling / Function Calling / AI Agents
- Knowledge of RAG, embeddings, and vector databases (Pinecone, FAISS, Chroma, Weaviate)
- API development, JSON, async programming
- Understanding of prompt engineering and context handling
Good to Have
- LangChain / LlamaIndex
- Docker / Cloud (AWS, GCP, Azure)
- WebSockets / Streaming responses
- Authentication, API security, JWT
- Basic ML / NLP knowledge
- CI/CD and deployment experience
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-7 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Location : Bangalore, Hyderabad, Mumbai, and Gurgaon
Responsibilities:
· Designing, building, and operating scalable on-premises or cloud data architecture
· Analyzing business requirements and translating them into technical specifications
· Design, develop, and implement data engineering solutions using DBT on cloud platforms (Snowflake, Databricks)
· Design, develop, and maintain scalable data pipelines and ETL processes
· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness
· Implement data governance and security best practices to ensure compliance and data integrity
· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
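A dbt model of the kind described above is just a SQL SELECT with Jinja references, which dbt materializes on Snowflake or Databricks (the model, source, and column names below are illustrative; `ref`, `is_incremental()`, and the incremental materialization are standard dbt constructs):

```sql
-- models/marts/fct_daily_orders.sql
{{ config(materialized='incremental', unique_key='order_date') }}

select
    order_date,
    count(*)    as order_count,
    sum(amount) as gross_revenue
from {{ ref('stg_orders') }}
{% if is_incremental() %}
-- On incremental runs, only process days newer than what is already loaded
where order_date > (select max(order_date) from {{ this }})
{% endif %}
group by order_date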
Requirements
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
· Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines
· Experience of working with DBT to implement end-to-end data engineering processes on Snowflake and Databricks
· Comprehensive understanding of the Snowflake and Databricks ecosystem
· Strong programming skills in SQL and Python/PySpark.
· Experience with data modeling, ETL processes, and data warehousing concepts.
· Familiarity with implementing CI/CD processes or other orchestration tools is a plus.
Review Criteria
- Strong Senior Backend Engineer profiles
- Must have 5+ years of hands-on Backend Engineering experience building scalable, production-grade systems
- Must have strong backend development experience using one or more frameworks (FastAPI / Django (Python), Spring (Java), Express (Node.js)).
- Must have deep understanding of relevant libraries, tools, and best practices within the chosen backend framework
- Must have strong experience with databases, including SQL and NoSQL, along with efficient data modeling and performance optimization
- Must have experience designing, building, and maintaining APIs, services, and backend systems, including system design and clean code practices
- Experience with financial systems, billing platforms, or fintech applications is highly preferred (fintech background is a strong plus)
- (Company) – Must have worked in product companies / startups, preferably Series A to Series D
- (Education) – Candidates from top engineering institutes (IITs, BITS, or equivalent Tier-1 colleges) are preferred
Role & Responsibilities
As a Founding Engineer at company, you'll join our engineering team during an exciting growth phase, contributing to a platform that handles complex financial operations for B2B companies. You'll work on building scalable systems that automate billing, usage metering, revenue recognition, and financial reporting—directly impacting how businesses manage their revenue operations.
This role is perfect for someone who thrives in a dynamic startup environment where requirements evolve quickly and problems need creative solutions. You'll work on diverse technical challenges, from API development to external integrations, while collaborating with senior engineers, product managers, and customer success teams.
Key Responsibilities-
- Build core platform features: Develop robust APIs, services, and integrations that power company’s billing automation and revenue recognition capabilities
- Work across the full stack: Contribute to both backend services and frontend interfaces, ensuring seamless user experiences
- Implement critical integrations: Connect company with external systems including CRMs, data warehouses, ERPs, and payment processors
- Optimize for scale: Build systems that handle complex pricing models, high-volume usage data, and real-time financial calculations
- Drive quality and best practices: Write clean, maintainable code while participating in code reviews and architectural discussions
- Solve complex problems: Debug issues across the stack and work closely with teams to address evolving client needs
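The usage-metering and complex-pricing work described above can be sketched as a graduated (tiered) billing calculation; the tiers and prices below are invented for illustration and are not any company's actual pricing model:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    """One pricing tier: cumulative units up to `limit`, billed at `unit_price`."""
    limit: int
    unit_price: float

# Hypothetical graduated pricing: first 1,000 units at $0.10,
# the next 9,000 at $0.08, everything beyond at $0.05.
TIERS = [Tier(1_000, 0.10), Tier(10_000, 0.08), Tier(10**12, 0.05)]

def graduated_charge(usage: int, tiers: list[Tier]) -> float:
    """Compute the charge for `usage` units under graduated tiers."""
    charge, prev_limit = 0.0, 0
    for tier in tiers:
        units_in_tier = min(usage, tier.limit) - prev_limit
        if units_in_tier <= 0:
            break
        charge += units_in_tier * tier.unit_price
        prev_limit = tier.limit
    return round(charge, 2)

print(graduated_charge(2_500, TIERS))  # 220.0
```

Real billing systems add proration, currency handling, and decimal arithmetic on top of this core loop; the sketch only shows the tier-walking logic.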
The Impact You'll Make-
- Power business growth: Your code will directly enable billing and revenue operations for fast-growing B2B companies, helping them scale without operational bottlenecks
- Build critical financial infrastructure: Contribute to systems handling millions in transactions while ensuring accurate, compliant revenue recognition
- Shape product direction: Join during our scaling phase where your contributions immediately impact product evolution and customer success
- Accelerate your expertise: Gain deep knowledge in financial systems, B2B SaaS operations, and enterprise software while working with industry veterans
- Drive the future of B2B commerce: Help create infrastructure powering next-generation pricing models from usage-based to value-based billing.
About the Role
Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.
As a Senior Full Stack Developer, you’ll design, build, and scale systems that bridge complex financial data, decision logic, and investor interfaces. You’ll work across the stack — from backend APIs and data systems to high-performance frontends for credit operations and analytics.
You’ll join a small, high-ownership founding team where every engineer contributes directly to product direction, architecture, and delivery. This is a hands-on, in-office role for builders who thrive in fast-paced startup environments and want to shape how institutional credit infrastructure is built from the ground up.
What You’ll Do
- Lead design and development of scalable backend systems and APIs using Python / TypeScript for underwriting, repayment, and investor workflows
- Build data-rich, interactive interfaces with React, Next.js, and React Query, powering analytics and workflow automation
- Implement form-driven, validation-heavy systems using Zod and Formik for onboarding, KYC, and investor flows
- Architect API-first frontends that integrate seamlessly with credit models, smart contracts, and data pipelines
- Write clean, maintainable, and well-tested code, and set technical standards for the team
- Collaborate with product, design, and data teams to translate complex financial logic into intuitive user experiences
- Own features end-to-end — from planning and architecture to deployment and monitoring
- Mentor and guide junior developers as we expand the engineering team
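The validation-heavy onboarding and investor flows mentioned above can be sketched with a standard-library stand-in for schema validation (the fail-fast style that Zod or Pydantic provide); the field names and rules are hypothetical:

```python
from dataclasses import dataclass
import re

class ValidationError(ValueError):
    pass

@dataclass(frozen=True)
class InvestorOnboarding:
    """Schema-style record: invalid input fails at construction time,
    mimicking how a Zod/Pydantic schema rejects bad payloads up front."""
    email: str
    country: str     # ISO 3166-1 alpha-2 code, e.g. "IN"
    commitment: int  # committed amount in whole currency units

    def __post_init__(self):
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", self.email):
            raise ValidationError(f"invalid email: {self.email!r}")
        if not re.fullmatch(r"[A-Z]{2}", self.country):
            raise ValidationError(f"invalid country code: {self.country!r}")
        if self.commitment <= 0:
            raise ValidationError("commitment must be positive")

ok = InvestorOnboarding("lp@fund.example", "IN", 50_000)
print(ok.country)  # IN

try:
    InvestorOnboarding("not-an-email", "IN", 50_000)
except ValidationError as e:
    print("rejected:", e)
```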
Who You Are
- 5+ years of professional full-stack experience, with 2+ years in senior or lead roles
- Strong proficiency in Python, TypeScript, and React, with experience in both backend and frontend architecture
- Skilled in RESTful API design, state management, and component-driven UI development
- Hands-on experience building data-driven or fintech products
- Familiar with validation, CI/CD, testing, and monitoring best practices
- Excellent communicator with a strong product sense and ownership mindset
Bonus Points
- Experience with AWS (Lambda, API Gateway, DynamoDB, RDS) and serverless architectures
- Familiarity with FastAPI, Pydantic, PostgreSQL, or similar frameworks
- Exposure to underwriting, lending, or credit analytics systems
- Prior experience in early-stage startups or 0→1 product environments
Why Join Qiro
- Be part of the founding engineering team shaping the future of credit infrastructure
- Work directly with founders, designers, and data scientists to ship impactful products
- Work from office only — collaboration, iteration, and speed are core to how we build
- Flat hierarchy and high autonomy — your work directly impacts direction and outcomes
- Competitive salary and meaningful equity
- Location: Bangalore
Our Culture
We believe in first-principles thinking, craftsmanship, and fast execution.
Everyone at Qiro is part of the core — we design for transparency, build for scale, and execute with ownership.
If you’re a builder who thrives in hands-on, high-ownership environments, and loves turning financial logic into elegant systems — we’d love to meet you.
Job Title : Python Developer (Crawlers / APIs / Async Programming)
Experience : 3 to 6 Years
Notice Period : Immediate to 15 Days
Job Location : Bangalore
Interview Process : 1 Internal Round + 2 Client Rounds
Mandatory Skill :
Strong Python experience with crawlers, REST APIs, async/multithreading, and PostgreSQL/MySQL in a cloud environment.
Role Overview :
We are looking for a highly skilled Python Developer with strong hands-on experience in building web crawlers, REST APIs, and advanced Python applications. The ideal candidate should be proficient in writing clean, efficient, and scalable code, and comfortable working with asynchronous programming, multithreading, and cloud-native environments.
Key Responsibilities :
- Build and ship new features and fixes in a fast-paced environment.
- Design, develop, test, and deploy scalable Python applications.
- Develop robust web crawlers and RESTful APIs.
- Write clean, secure, and maintainable code following SOLID principles.
- Design and document features, components, and systems.
- Collaborate closely with cross-functional teams.
- Support, monitor, and maintain existing products.
- Continuously learn and improve technical expertise.
Mandatory Skills :
- 3 to 5 years of strong hands-on experience in Python
- Experience in building web crawlers and REST APIs
- Strong knowledge of multithreading and async I/O in Python
- Experience with PostgreSQL or MySQL
- Strong understanding of OOP/Functional Programming and clean coding practices
- Experience with Docker / Containers
- Exposure to Cloud platforms (AWS or GCP)
- Excellent written and verbal communication skills
- High ownership mindset and accountability
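The async I/O and crawling skills listed above come down to bounding concurrency while fanning out requests. A minimal sketch with `asyncio` follows; the URLs are hypothetical and the network fetch is stubbed with `asyncio.sleep` so the example runs anywhere, where a real crawler would call an HTTP client instead:

```python
import asyncio

# Hypothetical seed URLs for the sketch.
SEED_URLS = [f"https://example.com/page/{i}" for i in range(5)]

async def fetch(url: str, sem: asyncio.Semaphore) -> tuple[str, int]:
    """Fetch one URL with bounded concurrency; returns (url, fake_status)."""
    async with sem:                # limit simultaneous in-flight requests
        await asyncio.sleep(0.01)  # stand-in for network I/O
        return url, 200

async def crawl(urls: list[str], max_concurrency: int = 2) -> list[tuple[str, int]]:
    """Crawl all URLs concurrently, at most `max_concurrency` at a time."""
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

results = asyncio.run(crawl(SEED_URLS))
print(len(results), results[0])  # 5 ('https://example.com/page/0', 200)
```

The semaphore is the piece interviewers usually probe: without it, `gather` would open every connection at once.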
Good to Have :
- Experience with Kubernetes, RabbitMQ, Redis
- Contributions to Open Source Projects
- Experience working with AI APIs and tools
What the Interview Will Focus On :
- Problem-solving and coding skills
- Hands-on Python programming
- Low-Level Design
- Database Design concepts
Your job:
• Implement customer-specific applications with focus on business logic and algorithmic development using high-level languages
• Define and develop clear software architectures and interfaces connecting hardware control and user interfaces
• Collaborate closely with global development teams, especially at headquarters in Germany
• Maintain and enhance implemented applications, ensuring quality documentation and issue tracking in ALM tools
• Investigate, evaluate, and resolve problems through solution-oriented proposals across diverse customer environments
• Support the implementation of libraries and components with emphasis on scalable software design
Your qualification:
• Completed Bachelor’s or Master’s degree in Computer Science, Information Technology, Automation, or a comparable field
• Minimum of 5 years of professional experience with high-level programming languages such as Python or Node.js and UI frameworks like HTML, CSS, TypeScript, Angular, or C# / WPF
• Very good technical understanding and sound knowledge of electric automation and mechatronic or robotic/end-of-arm systems is desirable
• Knowledge of PLC-programming and industrial fieldbus protocols is advantageous
• Excellent written and spoken English communication skills
• Proficiency in modern development environments, interface design, and application lifecycle management (ALM) tools
• Analytical, customer-focused, collaborative, and proactive in problem-solving and continuous learning
• Open to occasional travel and short-term international assignments for customer or HQ collaboration
Job Description:
We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!
Location - Pune, Baner.
Interview Rounds - In Office
A solid understanding of cybersecurity is a must.
Key Responsibilities:
Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang
Develop and maintain clean and scalable code following best practices
Apply Object-Oriented Programming (OOP) concepts in real-world development
Collaborate with front-end developers, QA, and other team members to deliver high-quality features
Debug, optimize, and improve existing systems and codebase
Participate in code reviews and team discussions
Work in an Agile/Scrum development environment
Required Skills:
Strong experience in Python or Golang (working knowledge of both is a plus)
Good understanding of OOP principles
Familiarity with RESTful APIs and back-end frameworks
Experience with databases (SQL or NoSQL)
Excellent problem-solving and debugging skills
Strong communication and teamwork abilities
Good to Have:
Prior experience in the security industry
Familiarity with cloud platforms like AWS, Azure, or GCP
Knowledge of Docker, Kubernetes, or CI/CD tools
Job Description:
Test Design & Execution
Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.
Automation Development
Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.
Defect Management
Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.
API & Backend Testing
Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.
Collaboration
Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.
CI/CD Integration
Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.
Required Skills & Experience
- Minimum 2+ years of experience in Software Quality Assurance or Automation Testing.
- Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
- Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
- Strong experience in functional, regression, integration, and UI testing.
- Solid understanding of SQL for data validation and backend testing.
- Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.
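The test design and automation skills listed above can be illustrated with a small, standard-library `unittest` suite structured the way a regression suite usually is (happy path, boundary, error case); the function under test is a hypothetical example, not a Selenium/Cypress script, which would additionally need a browser:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Function under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Happy path, boundary value, and negative case in one suite."""

    def test_happy_path(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_boundary_zero_percent(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

# Run the suite programmatically (what a CI/CD pipeline step would do).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```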
Desirable Skills
- Experience in mobile application testing (Android/iOS).
- Exposure to performance testing tools such as JMeter.
- Experience working with cloud platforms like AWS or Azure.
We're looking for an experienced Zoho Developer (2-4 years) to join our team! You'll work with internal teams to understand business requirements, configure and customize Zoho apps, and deliver end-to-end solutions. You'll also provide support, troubleshoot issues, and guide junior team members. Required skills include hands-on experience with Zoho Creator, CRM, Flow, Books, Analytics, and more, plus strong problem-solving and communication skills. Experience in client-facing roles and managing multiple projects is a plus.
Required Skills & Qualifications
● Strong hands-on experience with LLM frameworks and models, including LangChain, OpenAI (GPT-4), and LLaMA
● Proven experience in LLM orchestration, workflow management, and multi-agent system design using frameworks such as LangGraph
● Strong problem-solving skills with the ability to propose end-to-end solutions and contribute at an architectural/system design level
● Experience building scalable AI-backed backend services using FastAPI and asynchronous programming patterns
● Solid experience with cloud infrastructure on AWS, including EC2, S3, and Load Balancers
● Hands-on experience with Docker and containerization for deploying and managing AI/ML applications
● Good understanding of Transformer-based architectures and how modern LLMs work internally
● Strong skills in data processing and analysis using NumPy and Pandas
● Experience with data visualization tools such as Matplotlib and Seaborn for analysis and insights
● Hands-on experience with Retrieval-Augmented Generation (RAG), including document ingestion, embeddings, and vector search pipelines
● Experience in model optimization and training techniques, including fine-tuning, LoRA, and QLoRA
Nice to Have / Preferred
● Experience designing and operating production-grade AI systems
● Familiarity with cost optimization, observability, and performance tuning for LLM-based applications
● Exposure to multi-cloud or large-scale AI platforms
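The RAG retrieval step named above reduces to nearest-neighbor search over embeddings. A toy sketch follows, with hand-made 3-dimensional "embeddings" standing in for what an embedding model and a vector database would provide in production:

```python
import math

# Toy corpus; vectors are invented for illustration only.
CORPUS = {
    "invoice due dates":  [0.9, 0.1, 0.0],
    "refund policy":      [0.1, 0.9, 0.1],
    "gpu cluster sizing": [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(CORPUS, key=lambda doc: cosine(query_vec, CORPUS[doc]), reverse=True)
    return ranked[:k]

# A query vector close to the "billing" region of the toy space.
print(retrieve([0.8, 0.2, 0.0]))  # ['invoice due dates']
```

In a full RAG pipeline the retrieved documents would then be injected into the LLM prompt as context; this sketch covers only the vector-search leg.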