Python Jobs in Bangalore (Bengaluru)

50+ Python Jobs in Bangalore (Bengaluru) | Python Job openings in Bangalore (Bengaluru)

Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Impacto Digifin Technologies
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Best in industry
Test Automation (QA)
Software Testing (QA)
Cypress
Python
JavaScript
+4 more

Job Description

We are looking for a hands-on QA Automation Analyst to design, build, and maintain end-to-end automation frameworks for high-quality banking and financial applications. You will be responsible for ensuring robust test coverage, validating business workflows, and integrating testing within CI/CD pipelines. You’ll collaborate closely with product, engineering, and DevOps teams to uphold compliance, audit readiness, and rapid delivery in an agile environment.


Domain: Banking / Financial Services


Work Schedule: Monday to Saturday, with alternate Saturdays off.


Key Responsibilities

  • Design, develop, and maintain end-to-end automation frameworks from scratch using modern tools and best practices.
  • Develop and execute test plans, test cases, and automation scripts for functional, regression, integration, and API testing.
  • Build automation using Selenium, PyTest, Robot Framework, Playwright, or Cypress (a minimal Playwright sketch follows this list).
  • Perform API testing for REST services using Postman, Swagger, or Rest Assured; validate responses, contracts, and data consistency.
  • Integrate automation frameworks with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD, or similar).
  • Participate in requirement analysis, impact assessment, sprint ceremonies, and cross-functional discussions.
  • Validate data using SQL and support User Acceptance Testing (UAT); generate reports and release sign-offs.
  • Log, track, and close defects using standard bug-tracking tools; perform root-cause analysis for recurring issues.
  • Maintain QA artifacts for audit and compliance purposes.
  • Mentor junior QA team members and contribute to process improvements.
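As a minimal, hedged illustration of the end-to-end automation work described above (not this employer's actual suite), the sketch below shows one browser check written with pytest and Playwright's sync API; the URL, selectors, and credentials are placeholders, not details from the posting.

```python
# Minimal end-to-end sketch: pytest + Playwright sync API.
# BASE_URL, selectors, and credentials are hypothetical placeholders.
import pytest
from playwright.sync_api import sync_playwright

BASE_URL = "https://staging.example-bank.test"  # assumed test environment

@pytest.fixture(scope="module")
def page():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        yield page
        browser.close()

def test_login_flow(page):
    page.goto(f"{BASE_URL}/login")
    page.fill("#username", "qa_user")          # placeholder selector and credential
    page.fill("#password", "qa_password")
    page.click("button[type=submit]")
    page.wait_for_url(f"{BASE_URL}/dashboard")  # wait for the post-login redirect
    assert page.locator("text=Account Summary").is_visible()
```
A test like this would normally run headlessly inside the CI/CD pipeline mentioned above.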


Qualifications & Skills

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 2+ years of hands-on experience in QA automation for enterprise applications, preferably in the banking/financial domain.
  • Strong understanding of SDLC, STLC, QA methodologies, tools, and best practices.
  • Experience designing end-to-end automation frameworks from scratch.
  • Hands-on with manual and automation testing (Selenium, PyTest, Robot Framework, Playwright, Cypress).
  • Experience in API testing and validating RESTful services; knowledge of Rest Assured is a plus.
  • Proficient with databases and SQL (PostgreSQL, MySQL, Oracle).
  • Solid experience in Agile/Scrum environments and tools like Jira, TestLink, or equivalent.
  • Strong understanding of CI/CD pipelines and deployment automation using Jenkins or similar tools.
  • Knowledge of version control tools (Git) and collaborative workflows.
  • Excellent analytical, problem-solving, documentation, and communication skills.

Nice to Have / Bonus

  • Exposure to performance testing tools like JMeter or Gatling.
  • Programming experience in Java, Python, or JavaScript for automation scripting.
  • ISTQB or equivalent QA certification.

Why Join Us

  • Opportunity to work on mission-critical banking applications.
  • Hands-on exposure to modern automation tools and frameworks.
  • Work in a collaborative, agile, and fast-paced environment.
  • Contribute to cutting-edge CI/CD automation and testing strategies.
Blurgs AI
Posted by Nikita Sinha
Hyderabad, Bengaluru (Bangalore)
1 - 3 yrs
Upto ₹16L / yr (Varies)
Python
Apache Kafka
MongoDB
Java

We are seeking a Senior Data Engineer to design, build, and maintain a robust, scalable on-premise data infrastructure. The role focuses on real-time and batch data processing using technologies such as Apache Pulsar, Apache Flink, MongoDB, ClickHouse, Docker, and Kubernetes.

Ideal candidates have strong systems knowledge, deep backend data experience, and a passion for building efficient, low-latency data pipelines in a non-cloud, on-prem environment.


Key Responsibilities

1. Data Pipeline & Streaming Development

  • Design and implement real-time data pipelines using Apache Pulsar and Apache Flink to support mission-critical systems (a minimal ingestion sketch follows this list).
  • Build high-throughput, low-latency data ingestion and processing workflows across streaming and batch workloads.
  • Integrate internal systems and external data sources into a unified on-premise data platform.
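As a minimal sketch of the kind of ingestion workflow described above (not the company's actual pipeline), the example below consumes JSON events from an Apache Pulsar topic and persists them to MongoDB; the broker URL, topic, subscription, and collection names are assumptions for the example.

```python
# Minimal Pulsar -> MongoDB ingestion sketch (assumed local broker and database).
import json
import pulsar
from pymongo import MongoClient

client = pulsar.Client("pulsar://localhost:6650")               # assumed broker URL
consumer = client.subscribe("sensor-events", subscription_name="ingest-worker")
mongo = MongoClient("mongodb://localhost:27017")
events = mongo["platform"]["events"]                            # illustrative collection

for _ in range(100):                                            # bounded loop for the sketch
    msg = consumer.receive()                                    # blocks until a message arrives
    record = json.loads(msg.data())
    events.insert_one(record)                                   # persist for downstream jobs
    consumer.acknowledge(msg)                                   # ack only after a successful write

client.close()
mongo.close()
```
A production pipeline would add batching, schema validation, and dead-letter handling around this loop.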

2. Data Storage & Modelling

  • Design efficient data models for MongoDB, ClickHouse, and other on-prem databases to support analytical and operational use cases.
  • Optimize storage formats, indexing strategies, and partitioning schemes for performance and scalability.

3. Infrastructure & Containerization

  • Deploy, manage, and monitor containerized data services using Docker and Kubernetes in on-prem environments.

4. Performance, Monitoring & Reliability

  • Monitor and fine-tune the performance of streaming jobs and database queries.
  • Implement robust logging, metrics, and alerting frameworks to ensure high availability and operational stability.
  • Identify pipeline bottlenecks and implement proactive optimizations.

Required Skills & Experience

  • Strong experience in data engineering with a focus on on-premise environments.
  • Expertise in streaming technologies such as Apache Pulsar, Apache Flink, or similar platforms.
  • Deep hands-on experience with MongoDB, ClickHouse, or other NoSQL/columnar databases.
  • Proficient in Python for data processing and backend development.
  • Practical experience deploying and managing systems using Docker and Kubernetes.
  • Strong understanding of Linux systems, performance tuning, and resource monitoring.

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields (or equivalent experience).

Additional Responsibilities for Senior-Level Hires

Leadership & Mentorship

  • Guide, mentor, and support junior engineers; establish best practices and code quality standards.

System Architecture

  • Lead the design and optimization of complex real-time and batch data pipelines for scalability and performance.

Sensor Data Expertise

  • Build and optimize sensor-driven data pipelines and stateful stream processing systems for mission-critical domains such as maritime and defense.

End-to-End Ownership

  • Take full responsibility for the performance, reliability, and optimization of on-premise data systems.


Cspar Enterprises Private Limited
Bengaluru (Bangalore)
7 - 14 yrs
₹9L - ₹12L / yr
Django
Python
.NET
PHP
React.js
+16 more

Job Description - Technical Project Manager

Job Title: Technical Project Manager

Location: Bhopal / Bangalore (On-site)

Experience Required: 7+ Years

Industry: Fintech / SaaS / Software Development

Role Overview

We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.

Key Responsibilities

Project & Team Management

  • Manage daily tasks for Android, Frontend, and Backend developers
  • Conduct daily stand-ups, weekly planning, and reviews
  • Track progress, identify blockers, and ensure timely delivery
  • Maintain sprint boards, task estimations, and timelines

Technical Requirement Translation

  • Convert business requirements into technical tasks
  • Communicate requirements clearly to developers
  • Create user stories, flow diagrams, and PRDs
  • Ensure requirements are understood and implemented correctly

Quality & Build Review

  • Validate build quality, UI/UX flow, functionality
  • Check API integrations, errors, performance issues
  • Ensure coding practices and architecture guidelines are followed
  • Perform preliminary QA before handover to testing or clients

Issue Resolution

  • Identify development issues early
  • Coordinate with developers to fix bugs
  • Escalate major issues to founders with clear insights

Reporting & Documentation

  • Daily/weekly reports to management
  • Sprint documentation, release notes
  • Maintain project documentation & version control processes

Cross-Team Communication

  • Act as the single point of contact for management
  • Align multiple tech teams with business goals
  • Coordinate with HR and operations for resource planning

Required Skills

  • Strong understanding of Android, Web (Frontend/React), Backend development flows
  • Knowledge of APIs, Git, CI/CD, basic testing
  • Experience with Agile/Scrum methodologies
  • Ability to review builds and suggest improvements
  • Strong documentation skills (Jira, Notion, Trello, Asana)
  • Excellent communication & leadership
  • Ability to handle pressure and multiple projects

Good to Have

  • Prior experience in Fintech projects
  • Basic knowledge of UI/UX
  • Experience in preparing FSD/BRD/PRD
  • QA experience or understanding of test cases

Salary Range: 9 to 12 LPA

Codemonk
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7yrs+
Upto ₹35L / yr (Varies)
NodeJS (Node.js)
Python
Google Cloud Platform (GCP)
RESTful APIs
SQL
+4 more

Like us, you'll be deeply committed to delivering impactful outcomes for customers.

  • 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
  • Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
  • Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
  • Experience writing batch/cron jobs using Python and Shell scripting.
  • Experience in web application development using JavaScript and JavaScript libraries.
  • Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
  • Experience/familiarity with RDBMS and NoSQL database technologies like MySQL, MongoDB, Redis, Elasticsearch, and other similar databases.
  • Understanding of code versioning tools such as Git.
  • Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
  • Experience with JS-based build/package tools like Grunt, Gulp, Bower, and Webpack.
Appiness

Agency job
via appiness Interactive by Tejashwini B
Bengaluru (Bangalore)
5 - 7 yrs
₹12L - ₹16L / yr
React.js
Python
FastAPI
Pydantic
SQL
+1 more

Position: Senior Full Stack Engineer

Engagement Type: On-site

Location: Bangalore


Position Overview:

We are seeking an experienced Senior Full Stack Engineer with strong React and Python expertise to design, develop, and maintain full-stack features across both frontend and backend components.


Key Responsibilities:

• Build full-stack features owning both frontend and backend components

• Develop frontend modules using React, vite, bun, chakra-ui, react-query, and react-table

• Implement backend logic, APIs, and data models using Python, fastapi, pydantic, and sqlmodel (see the sketch after this list)

• Design clean API contracts between frontend and backend components focusing on performance and clarity

• Build robust data flows across multi-architecture compute workflows including job management, orchestration views, and resource interactions

• Collaborate with SRE, runtime, and platform teams to ensure backend services integrate smoothly

• Troubleshoot issues across the full stack from UI inconsistencies to backend failures

• Participate in architectural and design discussions for both client and server components

• Ensure production readiness, reliability, and scalability of features
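As a hedged illustration of the backend stack named above (not this employer's actual service), the sketch below exposes a small FastAPI API backed by SQLModel and SQLite; the Job model, database URL, and endpoint paths are assumptions for the example.

```python
# Minimal FastAPI + SQLModel sketch: one create and one list endpoint.
from typing import Optional
from fastapi import FastAPI
from sqlmodel import Field, Session, SQLModel, create_engine, select

class Job(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    status: str = "queued"

engine = create_engine("sqlite:///jobs.db")   # illustrative local database
SQLModel.metadata.create_all(engine)
app = FastAPI()

@app.post("/jobs", response_model=Job)
def create_job(job: Job):
    with Session(engine) as session:
        session.add(job)
        session.commit()
        session.refresh(job)                  # populate the generated primary key
        return job

@app.get("/jobs", response_model=list[Job])
def list_jobs():
    with Session(engine) as session:
        return session.exec(select(Job)).all()
```
On the frontend side, a react-query hook would typically fetch the `/jobs` endpoint and keep the table in sync.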


Required Skills:

• Strong hands-on development experience with React and Python

• Experience with fastapi, pydantic, sqlmodel, and modern async backend development

• Deep understanding of vite, bun, and frontend build/performance optimization

• Ability to manage frontend async workflows using react-query

• Experience working with complex data tables using react-table

• Strong API design skills

• Experience building modular services and UI-driven systems with high data throughput

• Ability to operate independently in a high-speed environment

• Strong communication and cross-functional collaboration skills


Preferred (Bonus) Skills:

• Experience working with cloud APIs, compute runtimes, schedulers, or developer tooling

• Knowledge of SRE concepts including Kubernetes, CI/CD, and observability

• Familiarity with licensing work



Meraki Labs
Bengaluru (Bangalore)
3 - 4 yrs
₹30L - ₹50L / yr
Python
NodeJS (Node.js)
React.js
NextJs (Next.js)
RESTful APIs
+4 more

Job Title: Full Stack Developer

Location: Bangalore, India


About Us:


Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.


Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.


Role Overview:


As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.

Your Core Impact

  • You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
  • You’ll translate learning and AI requirements into tangible, performant product features.
  • Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.


Key Responsibilities:


Platform Architecture & Backend Development

  • Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
  • Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
  • Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
  • Ensure security, uptime, and performance across all services.

Front-End Development & User Experience

  • Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
  • Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
  • Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.

AI Integration & Support

  • Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
  • Build APIs that pass context, queries, and results between learners, models, and the backend in real time (a minimal sketch follows this list).
  • Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.
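As a hedged illustration of the real-time integration described above (not the platform's actual implementation), the sketch below relays learner questions to a placeholder tutor function over a FastAPI WebSocket; the route name and the generate_tutor_reply helper are hypothetical.

```python
# Minimal real-time relay sketch with FastAPI WebSockets.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def generate_tutor_reply(question: str) -> str:
    # Hypothetical stand-in for the AI Tutor / LLM call the real platform would make.
    return f"Let's think about '{question}' step by step..."

@app.websocket("/ws/tutor")
async def tutor_session(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            question = await websocket.receive_text()   # message from the learner UI
            reply = await generate_tutor_reply(question)
            await websocket.send_text(reply)            # push the answer back immediately
    except WebSocketDisconnect:
        pass  # learner closed the session
```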

Data, Analytics & Reporting

  • Build dashboards and data views for educators and product teams to derive insights from learner behavior.
  • Implement secure data storage and export pipelines for progress analytics.

Collaboration & Engineering Culture

  • Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
  • Participate in code reviews, architectural discussions, and system design decisions.
  • Help define engineering best practices that balance innovation, maintainability, and performance.


Required Qualifications & Skills

  • 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
  • Strong proficiency in Python or Node.js for backend services.
  • Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
  • Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
  • Experience with real-time data systems (WebSockets or event-driven architectures).
  • Exposure to AI/ML integrations or data-intensive backends.
  • Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
  • Strong problem-solving mindset and attention to detail.
Meraki Labs
Agency job
via ENTER by Rajkishor Mishra
Bengaluru (Bangalore)
3 - 4 yrs
₹30L - ₹50L / yr
Python
FastAPI
Flask
LangChain
Generative AI
+3 more

Job Title: AI Engineer

Location: Bangalore, India


About Us:


Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.


Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.



Role Overview:


As an AI Engineer on the Capacity team, you will design, build, and deploy the intelligent systems that power our AI Tutor and Simulation Lab.

You’ll collaborate closely with prompt engineers, product managers, and full-stack developers to build scalable AI features that connect language, reasoning, and real-world learning. This is not a traditional MLOps role; it’s an opportunity to engineer how intelligence flows across the product, from tutoring interactions to real-time physics reasoning.


Your Core Impact

  • Build the AI backbone that drives real-time tutoring, contextual reasoning, and simulation feedback.
  • Translate learning logic and educational goals into deployable, scalable AI systems.
  • Enable the AI Tutor to think, reason, and respond based on structured academic material and live learner inputs.


Key Responsibilities:

1. AI System Architecture & Development

  • Design and develop scalable AI systems that enable chat-based tutoring, concept explainability, and interactive problem solving.
  • Implement and maintain model-serving APIs, vector databases, and context pipelines to connect content, learners, and the tutor interface.
  • Contribute to the design of the AI reasoning layer that interprets simulation outputs and translates them into learner-friendly explanations.

2. Simulation Lab Intelligence

  • Work with the ML team to integrate LLMs with the Simulation Lab, enabling the system to read experiment variables, predict outcomes, and explain results dynamically.
  • Create evaluation loops that compare student actions against expected results and generate personalized feedback through the tutor.
  • Support the underlying ML logic for physics-based prediction and real-time data flow between lab modules and the tutor layer.

3. Model Integration & Optimization

  • Fine-tune, evaluate, and deploy LLMs or smaller domain models that serve specific platform functions.
  • Design retrieval and grounding workflows so that all model outputs reference the correct textbook or course material (a minimal retrieval sketch follows this list).
  • Optimize performance, latency, and scalability for high-traffic, interactive learning environments.
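As a minimal, hedged illustration of a retrieval/grounding workflow (not this team's actual pipeline), the sketch below indexes two placeholder passages in a Chroma collection and retrieves the most relevant one for a learner question; the collection name and documents are assumptions.

```python
# Minimal retrieval sketch with Chroma (local, default embedding function).
import chromadb

client = chromadb.Client()
collection = client.create_collection("physics_notes")   # illustrative collection

collection.add(
    documents=[
        "Newton's second law states that force equals mass times acceleration.",
        "Kinetic energy is one half of mass times velocity squared.",
    ],
    ids=["newton_2", "kinetic_energy"],
)

# Retrieve the passage most relevant to a learner's question; in a full RAG
# pipeline these passages would be passed to the LLM as grounding context.
results = collection.query(
    query_texts=["Why does a heavier cart need more force to accelerate?"],
    n_results=1,
)
print(results["documents"][0])
```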

4. Collaboration & Research

  • Partner with Prompt Engineers to ensure reasoning consistency across tutoring and simulations.
  • Work with Product and Education teams to define use cases that align AI behavior with learning goals.
  • Stay updated with new model capabilities and research advancements in RAG, tool use, and multi-modal learning systems.

5. Data & Infrastructure

  • Maintain robust data pipelines for model inputs (textbooks, transcripts, lab data) and evaluation sets.
  • Ensure privacy-safe data handling and continuous model performance tracking.
  • Deploy and monitor AI workloads using cloud platforms (AWS, GCP, or Azure).

Soft Skills:

  • Strong problem-solving and analytical abilities.
  • Eagerness to learn, innovate and deliver impactful results.

Required Qualifications & Skills

  • 3–4 years of experience in AI engineering, ML integration, or backend systems for AI-driven products.
  • Strong proficiency in Python, with experience in frameworks like FastAPI, Flask, or LangChain.
  • Familiarity with LLMs, embeddings, RAG systems, and vector databases (Pinecone, FAISS, Chroma, etc.).
  • Experience building APIs and integrating with frontend components.
  • Working knowledge of cloud platforms (AWS, GCP, Azure) and model deployment environments.
  • Understanding of data structures, algorithms, and OOP principles.
Hashone Careers
Posted by Madhavan I
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹30L / yr
Python
Machine Learning (ML)
Scikit-Learn
XGBoost
PyTorch
+1 more

Job Description: Applied Scientist

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.

About the Role

We are seeking a highly motivated Applied Scientist to join our Data Science team. This individual will play a key role in enhancing and scaling our existing forecasting and pricing systems and developing new capabilities that support our intelligent decision-making platform.

We are looking for team members who:

  • Are deeply curious and passionate about applying machine learning to real-world problems.
  • Demonstrate strong ownership and the ability to work independently.
  • Excel in both technical execution and collaborative teamwork.
  • Have a track record of shipping products in complex environments.

What You’ll Do

  • Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization (a minimal sketch follows this list).
  • Work with large-scale, noisy, and temporally complex datasets.
  • Collaborate cross-functionally with engineering and product teams to move models from research to production.
  • Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
  • Contribute to the development of an AI-first platform that redefines hospitality revenue management.
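As a rough illustration of the forecasting work described above (not LodgIQ's actual models), here is a minimal sketch that predicts next-day demand from lagged values with scikit-learn; the synthetic series and feature choices are assumptions for the example.

```python
# Minimal lag-feature forecasting sketch with scikit-learn on a synthetic series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
demand = 100 + 20 * np.sin(np.arange(400) * 2 * np.pi / 7) + rng.normal(0, 5, 400)

def make_lag_features(series, n_lags=7):
    """Build a supervised dataset where each row holds the previous n_lags values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lag_features(demand)
X_train, X_test = X[:-30], X[-30:]
y_train, y_test = y[:-30], y[-30:]

model = GradientBoostingRegressor().fit(X_train, y_train)
mae = np.mean(np.abs(model.predict(X_test) - y_test))
print(f"Mean absolute error over the last 30 days: {mae:.2f}")
```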

Required Qualifications

  • Bachelor's, Master’s, or PhD in Computer Science, Operations Research, Industrial/Systems Engineering, Applied Mathematics, or a related field (Master’s or PhD preferred).
  • 3-5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.




  • Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
  • Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
  • Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
  • Understanding of machine learning and deep learning foundations.
  • Ability to translate research into commercial solutions.
  • Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
  • Ability to work independently and manage projects end-to-end.

Preferred Experience

  • Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
  • Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
  • Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
  • Proven experience in collaborative product development environments, working closely with engineering and product teams.

Why LodgIQ?

  • Join a fast-growing, mission-driven company transforming the future of hospitality.
  • Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
  • Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
  • Competitive salary and performance bonuses.
  • For more information, visit https://www.lodgiq.com

Deqode
Posted by Samiksha Agrawal
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Bengaluru (Bangalore), Hyderabad, Jaipur, Bhopal
5 - 8 yrs
₹5L - ₹13L / yr
Python
Azure
Artificial Intelligence (AI)
FastAPI
Flask
+3 more

Job Description: Python-Azure AI Developer

Experience: 5+ years

Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal

Mandatory Skills:

  • Python: Expert-level proficiency with FastAPI/Flask
  • Azure Services: Hands-on experience integrating Azure cloud services
  • Databases: PostgreSQL, Redis
  • AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding

Good to Have:

  • Workflow automation tools (n8n or similar)
  • Experience with LangChain, AutoGen, or other AI agent frameworks
  • Azure OpenAI Service knowledge

Key Responsibilities:

  • Develop AI-powered applications using Python and Azure
  • Build RESTful APIs with FastAPI/Flask
  • Integrate Azure services for AI/ML workloads
  • Implement agentic AI solutions
  • Database optimization and management
  • Workflow automation implementation


Meraki Labs
Agency job
via ENTER by Rajkishor Mishra
Bengaluru (Bangalore)
8 - 12 yrs
₹60L - ₹70L / yr
Machine Learning (ML)
Generative AI
Python
Artificial Intelligence (AI)
Large Language Models (LLM) tuning
+9 more

Job Overview:


As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered EdTech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use-case development, and product creation, along with strong expertise in cloud-based architectures.


Key Responsibilities:


AI Tutor & Simulation Intelligence

  • Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
  • Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
  • Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
  • Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.


Platform & System Architecture

  • Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
  • Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
  • Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.


Reliability, Security & Analytics

  • Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
  • Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
  • Set up real-time learning analytics to measure comprehension and identify concept gaps.


Leadership & Collaboration

  • Mentor and elevate engineers across backend, ML, and front-end teams.
  • Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
  • Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.


Qualifications & Skills:


  • 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
  • Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
  • Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
  • Experience designing microservices and API ecosystems for high-concurrency platforms.
  • Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
  • Demonstrated ability to work with educational data, content pipelines, and real-time systems.


Bonus Skills (Nice to Have):

  • Experience with multi-modal AI models (text, image, audio, video).
  • Knowledge of AI safety, ethical AI, and explainability techniques.
  • Prior work in AI-powered automation tools or AI-driven SaaS products.
Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
Databricks
Python
SQL
PySpark
Amazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage retrieval and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns, frameworks, and automation practices.


 

Additional Comments:

# of Resources: 22 | Role(s): Technical Role | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

  • Design, build, and maintain scalable data pipelines using Databricks and PySpark (a minimal sketch follows this list).
  • Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
  • Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
  • Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
  • Ensure data quality, performance, and reliability across data workflows.
  • Participate in code reviews, data architecture discussions, and performance optimization initiatives.
  • Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
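As a minimal, hedged sketch of the pipeline work described above (not the actual project code), the example below reads raw JSON events with PySpark, cleans them, and writes a partitioned Delta table; the S3 paths and column names are assumptions for the example.

```python
# Minimal Databricks-style PySpark ETL sketch (illustrative paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")      # hypothetical source
clean = (
    raw.filter(F.col("event_ts").isNotNull())                 # drop records missing a timestamp
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)
(
    clean.write.mode("overwrite")
         .partitionBy("event_date")
         .format("delta")                                     # Delta Lake table, as on Databricks
         .save("s3://example-bucket/curated/events/")
)
```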


Key Skills:

  • Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
  • Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
  • Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
  • Experience with data modeling, schema design, and performance optimization.
  • Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
  • Excellent problem-solving, communication, and collaboration skills.

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)

Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Mantle Solutions - A Lulu Group Company
Posted by Nikita Sinha
Bangalore (Whitefield)
2 - 4 yrs
Upto ₹20L / yr (Varies)
Python
SQL
Machine Learning (ML)
Data Analytics

We are seeking a hands-on eCommerce Analytics & Insights Lead to help establish and scale our newly launched eCommerce business. The ideal candidate is highly data-savvy, understands eCommerce deeply, and can lead KPI definition, performance tracking, insights generation, and data-driven decision-making.

You will work closely with cross-functional teams—Buying, Marketing, Operations, and Technology—to build dashboards, uncover growth opportunities, and guide the evolution of our online channel.


Key Responsibilities

Define & Monitor eCommerce KPIs

  • Set up and track KPIs across the customer journey: traffic, conversion, retention, AOV/basket size, repeat rate, etc.
  • Build KPI frameworks aligned with business goals.

Data Tracking & Infrastructure

  • Partner with marketing, merchandising, operations, and tech teams to define data tracking requirements.
  • Collaborate with eCommerce and data engineering teams to ensure data quality, completeness, and availability.

Dashboards & Reporting

  • Build dashboards and automated reports to track:
    • Overall site performance
    • Category & product performance
    • Marketing ROI and acquisition effectiveness

Insights & Performance Diagnosis

Identify trends, opportunities, and root causes of underperformance in areas such as:

  • Product availability & stock health
  • Pricing & promotions
  • Checkout funnel drop-offs
  • Customer retention & cohort behavior
  • Channel acquisition performance

Conduct:

  • Cohort analysis (a minimal sketch follows this list)
  • Funnel analytics
  • Customer segmentation
  • Basket analysis
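As a minimal illustration of the cohort analysis mentioned above (not a production pipeline), the sketch below builds a monthly retention matrix with pandas from a small synthetic orders table; the data and column names are placeholders.

```python
# Minimal monthly cohort-retention sketch with pandas on synthetic orders.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-02-10", "2024-01-20",
        "2024-03-02", "2024-02-14", "2024-03-01", "2024-04-11",
    ]),
})

orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")
orders["months_since_first"] = (
    (orders["order_month"].dt.year - orders["cohort"].dt.year) * 12
    + (orders["order_month"].dt.month - orders["cohort"].dt.month)
)

retention = (
    orders.groupby(["cohort", "months_since_first"])["customer_id"]
          .nunique()
          .unstack(fill_value=0)
)
print(retention)  # rows: acquisition cohort, columns: months since first order
```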

Data-Driven Growth Initiatives

  • Propose and evaluate experiments, optimization ideas, and quick wins.
  • Help business teams interpret KPIs and take informed decisions.

Required Skills & Experience

  • 2–5 years experience in eCommerce analytics (grocery retail experience preferred).
  • Strong understanding of eCommerce metrics and analytics frameworks (Traffic → Conversion → Repeat → LTV).
  • Proficiency with tools such as:
    • Google Analytics / GA4
    • Excel
    • SQL
    • Power BI or Tableau
  • Experience working with:
    • Digital marketing data
    • CRM and customer data
    • Product/category performance data
  • Ability to convert business questions into analytical tasks and produce clear, actionable insights.
  • Familiarity with:
    • Customer journey mapping
    • Funnel analysis
    • Basket and behavioral analysis
  • Comfortable working in fast-paced, ambiguous, and build-from-scratch environments.
  • Strong communication and stakeholder management skills.
  • Strong technical capability in at least one programming language: SQL or PySpark.

Good to Have

  • Experience with eCommerce platforms (Shopify, Magento, Salesforce Commerce, etc.).
  • Exposure to A/B testing, recommendation engines, or personalization analytics.
  • Knowledge of Python/R for deeper analytics (optional).
  • Experience with tracking setup (GTM, event tagging, pixel/event instrumentation).
Technology, Information and Internet Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture (a small caching sketch follows this list).
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
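As a small, hedged illustration of the caching knowledge called out above (not this company's architecture), the sketch below implements a cache-aside read path with redis-py; the Redis host and the fetch_profile_from_db helper are placeholders.

```python
# Minimal cache-aside sketch with redis-py (assumed local Redis instance).
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_profile_from_db(user_id: int) -> dict:
    # Hypothetical stand-in for a slow database or downstream service call.
    return {"user_id": user_id, "name": f"user-{user_id}"}

def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: skip the database
    profile = fetch_profile_from_db(user_id)
    cache.setex(key, 300, json.dumps(profile))  # cache miss: store for 5 minutes
    return profile
```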


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why This Company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights (a minimal sketch follows this list).
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.
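As a minimal illustration of the EDA step noted above (not a client deliverable), the sketch below profiles a small synthetic extract with pandas; in practice the data would come from SQL or a warehouse table rather than an inline DataFrame.

```python
# Minimal exploratory data analysis sketch with pandas on synthetic data.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "South", "North", "East", "South"],
    "revenue": [1200, 950, 1430, 700, 1100],
    "orders": [30, 25, 35, 18, 27],
})

print(df.describe())                          # distribution of numeric columns
print(df.isna().sum())                        # quick data-quality check
summary = (
    df.groupby("region")
      .agg(total_revenue=("revenue", "sum"), avg_revenue=("revenue", "mean"))
      .sort_values("total_revenue", ascending=False)
)
print(summary)                                # candidate insight for the BRD or dashboard
```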

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
    • Computer Science / IT
    • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 

******

Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST

Albert Invent
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 6 yrs
Upto ₹30L / yr (Varies)
Python
AWS Lambda
Amazon Redshift
Snowflake schema
SQL

To design, build, and optimize scalable data infrastructure and pipelines that enable efficient data collection, transformation, and analysis across the organization. The Senior Data Engineer will play a key role in driving data architecture decisions, ensuring data quality and availability, and empowering analytics, product, and engineering teams with reliable, well-structured data to support business growth and strategic decision-making.


Responsibilities:

  • Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability.
  • Collaborate with the API team and Data Science team to build robust data pipelines and automations.
  • Work closely with stakeholders to understand database requirements and provide technical solutions.
  • Optimize database queries and performance tuning to enhance overall system efficiency.
  • Implement and maintain data security measures, including access controls and encryption.
  • Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service.
  • Develop and enforce data quality standards and processes to maintain data integrity.
  • Create and maintain documentation for database architecture, processes, and procedures.
  • Stay updated with the latest database technologies and best practices to drive continuous improvement.
  • Expertise in SQL queries and stored procedures, with the ability to optimize and fine-tune complex queries for performance and efficiency.
  • Experience with monitoring and visualization tools such as Grafana to monitor database performance and health.


Requirements:

  • 4+ years of experience in data engineering, with a focus on large-scale data systems.
  • Proven experience designing data models and access patterns across SQL and NoSQL ecosystems.
  • Hands-on experience with technologies like PostgreSQL, DynamoDB, S3, GraphQL, or vector databases.
  • Proficient in SQL stored procedures with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience in building and maintaining data warehouses.
  • Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code.
  • Familiarity with distributed systems, data partitioning, and consistency models.
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks.
  • Deep understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization.
  • Prior experience building multi-tenant systems with strict performance and isolation guarantees.
  • Excellent communication and collaboration skills to influence cross-functional technical decisions.

Wissen Technology
Posted by Swet Patel
Bengaluru (Bangalore)
5 - 13 yrs
Best in industry
Databricks
Python
SQL
PySpark
Spark

Overview

We are seeking an experienced Data Engineer with a strong background in Databricks, Python, Spark/PySpark and SQL to design, develop, and optimize large-scale data processing applications. The ideal candidate will build scalable, high-performance data engineering solutions and ensure seamless data flow across cloud and on-premise platforms.

Key Responsibilities:

  • Design, develop, and maintain scalable data processing applications using Databricks, Python, and PySpark/Spark.
  • Write and optimize complex SQL queries for data extraction, transformation, and analysis.
  • Collaborate with data engineers, data scientists, and other stakeholders to understand business requirements and deliver high-quality solutions.
  • Ensure data integrity, performance, and reliability across all data processing pipelines.
  • Perform data analysis and implement data validation to ensure high data quality.
  • Implement and manage CI/CD pipelines for automated testing, integration, and deployment.
  • Contribute to continuous improvement of data engineering processes and tools.

Required Skills & Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Proven experience as a Databricks developer with strong expertise in Python, SQL, and Spark/PySpark.
  • Strong proficiency in SQL, including working with relational databases and writing optimized queries.
  • Solid programming experience in Python, including data processing and automation.


Deltek
Posted by Harsha Mehrotra
Bengaluru (Bangalore)
7 - 15 yrs
Best in industry
Python
SQL server
Integration

Position Responsibilities:


  • Design & Develop integration and automation solutions based on technical specifications.
  • Support in testing activities, including integration testing, end-to-end (business process) testing and UAT
  • Be aware of CI/CD, engineering best practices, and the SDLC process
  • Should have an excellent understanding of all existing integration and automation.
  • Understand the product integration requirement and solve it right, which is scalable, performant & resilient.
  • Develop using TDD methodology, apply appropriate design methodologies & coding standards
  • Able to conduct code reviews, quick at debugging
  • Able to deconstruct a complex issue & resolve it
  • Support in testing activities, including integration testing, end-to-end (business process) testing, and UAT
  • Able to work with the stakeholders/customers, able to synthesise the business requirements, and suggest the best integration approaches – Process analyst
  • Able to suggest, own & adapt to new technical frameworks/solutions & implement continuous process improvements for better delivery


Qualifications:


  • A minimum of 7-9 years of experience in developing integration/automation solutions or related experience
  • 3-4 years of experience in a technical architect or lead role
  • Strong working experience in Python is preferred
  • Good understanding of integration concepts, methodologies, and technologies
  • Good communication and presentation skills; strong interpersonal skills with the ability to convey and relate ideas to others and work collaboratively to get things done.
Capace Software Private Limited
Bhopal, Bengaluru (Bangalore)
7 - 13 yrs
₹9L - ₹12L / yr
Android
Android Development
Frontend
Backend testing
fintech
+16 more

Job Description - Technical Project Manager

Job Title: Technical Project Manager

Location: Bhopal / Bangalore (On-site)

Experience Required: 7+ Years

Industry: Fintech / SaaS / Software Development

Role Overview

We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.

Key Responsibilities

Project & Team Management

  • Manage daily tasks for Android, Frontend, and Backend developers
  • Conduct daily stand-ups, weekly planning, and reviews
  • Track progress, identify blockers, and ensure timely delivery
  • Maintain sprint boards, task estimations, and timelines

Technical Requirement Translation

  • Convert business requirements into technical tasks
  • Communicate requirements clearly to developers
  • Create user stories, flow diagrams, and PRDs
  • Ensure requirements are understood and implemented correctly

Quality & Build Review

  • Validate build quality, UI/UX flow, functionality
  • Check API integrations, errors, performance issues
  • Ensure coding practices and architecture guidelines are followed
  • Perform preliminary QA before handover to testing or clients

Issue Resolution

  • Identify development issues early
  • Coordinate with developers to fix bugs
  • Escalate major issues to founders with clear insights

Reporting & Documentation

  • Daily/weekly reports to management
  • Sprint documentation, release notes
  • Maintain project documentation & version control processes

Cross-Team Communication

  • Act as the single point of contact for management
  • Align multiple tech teams with business goals
  • Coordinate with HR and operations for resource planning

Required Skills

  • Strong understanding of Android, Web (Frontend/React), Backend development flows
  • Knowledge of APIs, Git, CI/CD, basic testing
  • Experience with Agile/Scrum methodologies
  • Ability to review builds and suggest improvements
  • Strong documentation skills (Jira, Notion, Trello, Asana)
  • Excellent communication & leadership
  • Ability to handle pressure and multiple projects

Good to Have

  • Prior experience in Fintech projects
  • Basic knowledge of UI/UX
  • Experience in preparing FSD/BRD/PRD
  • QA experience or understanding of test cases

Salary Range: 9 to 12 LPA

Read more
Bizita Technologies
Bengaluru (Bangalore), Bhopal
7 - 10 yrs
₹9L - ₹12L / yr
Android
Frontend
Backend testing
Fullstack Developer
Fintech
+8 more

Job Description -Technical Project Manager

Job Title: Technical Project Manager

Location: Bhopal / Bangalore (On-site)

Experience Required: 7+ Years

Industry: Fintech / SaaS / Software Development

Role Overview

We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.

Key Responsibilities

Project & Team Management

  • Manage daily tasks for Android, Frontend, and Backend developers
  • Conduct daily stand-ups, weekly planning, and reviews
  • Track progress, identify blockers, and ensure timely delivery
  • Maintain sprint boards, task estimations, and timelines

Technical Requirement Translation

  • Convert business requirements into technical tasks
  • Communicate requirements clearly to developers
  • Create user stories, flow diagrams, and PRDs
  • Ensure requirements are understood and implemented correctly

Quality & Build Review

  • Validate build quality, UI/UX flow, functionality
  • Check API integrations, errors, performance issues
  • Ensure coding practices and architecture guidelines are followed
  • Perform preliminary QA before handover to testing or clients

Issue Resolution

  • Identify development issues early
  • Coordinate with developers to fix bugs
  • Escalate major issues to founders with clear insights

Reporting & Documentation

  • Daily/weekly reports to management
  • Sprint documentation, release notes
  • Maintain project documentation & version control processes

Cross-Team Communication

  • Act as the single point of contact for management
  • Align multiple tech teams with business goals
  • Coordinate with HR and operations for resource planning

Required Skills

  • Strong understanding of Android, Web (Frontend/React), Backend development flows
  • Knowledge of APIs, Git, CI/CD, basic testing
  • Experience with Agile/Scrum methodologies
  • Ability to review builds and suggest improvements
  • Strong documentation skills (Jira, Notion, Trello, Asana)
  • Excellent communication & leadership
  • Ability to handle pressure and multiple projects

Good to Have

  • Prior experience in Fintech projects
  • Basic knowledge of UI/UX
  • Experience in preparing FSD/BRD/PRD
  • QA experience or understanding of test cases

Salary Range: 9 to 12 LPA

Read more
Bengaluru (Bangalore)
2 - 5 yrs
₹7L - ₹8L / yr
Playwright
Selenium
Selenium Web driver
skill iconJenkins
skill iconPython
+8 more



Job Title: QA Automation Analyst – End-to-End Framework Development (Playwright)

Location: Brookefield

Experience: 2+ years

Domain: Banking / Financial Services

Job Description

We are looking for a hands-on QA Automation Analyst to design, build, and maintain end-to-end automation frameworks for high-quality banking and financial applications. You will be responsible for ensuring robust test coverage, validating business workflows, and integrating testing within CI/CD pipelines. You’ll collaborate closely with product, engineering, and DevOps teams to uphold compliance, audit readiness, and rapid delivery in an agile environment.

Key Responsibilities

  • Design, develop, and maintain end-to-end automation frameworks from scratch using modern tools and best practices.
  • Develop and execute test plans, test cases, and automation scripts for functional, regression, integration, and API testing.
  • Build automation using Selenium, PyTest, Robot Framework, Playwright (preferred for this role), or Cypress; a minimal Playwright example is sketched after this list.
  • Perform API testing for REST services using Postman, Swagger, or Rest Assured; validate responses, contracts, and data consistency.
  • Integrate automation frameworks with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD, or similar).
  • Participate in requirement analysis, impact assessment, sprint ceremonies, and cross-functional discussions.
  • Validate data using SQL and support User Acceptance Testing (UAT); generate reports and release sign-offs.
  • Log, track, and close defects using standard bug-tracking tools; perform root-cause analysis for recurring issues.
  • Maintain QA artifacts for audit and compliance purposes.
  • Mentor junior QA team members and contribute to process improvements.
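
As a rough illustration of the framework-building and Playwright work described above, here is a minimal Playwright (Python, sync API) check; the URL, selectors, and credentials are hypothetical placeholders rather than details from this posting.

    from playwright.sync_api import sync_playwright

    # Minimal end-to-end check: log in and assert the dashboard renders.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://bank.example.com/login")   # placeholder URL
        page.fill("#username", "qa_user")             # placeholder selectors/credentials
        page.fill("#password", "secret")
        page.click("button[type=submit]")
        page.wait_for_selector("text=Dashboard")      # raises if the login flow breaks
        browser.close()

In a real framework this flow would sit behind page objects and run from PyTest, so the same scenario can be reused across regression suites and CI.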

Qualifications & Skills

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 2+ years of hands-on experience in QA automation for enterprise applications, preferably in the banking/financial domain.
  • Strong understanding of SDLC, STLC, QA methodologies, tools, and best practices.
  • Experience designing end-to-end automation frameworks from scratch.
  • Hands-on with manual and automation testing (Selenium, PyTest, Robot Framework, Playwright, Cypress).
  • Experience in API testing and validating RESTful services; knowledge of Rest Assured is a plus.
  • Proficient with databases and SQL (PostgreSQL, MySQL, Oracle).
  • Solid experience in Agile/Scrum environments and tools like Jira, TestLink, or equivalent.
  • Strong understanding of CI/CD pipelines and deployment automation using Jenkins or similar tools.
  • Knowledge of version control tools (Git) and collaborative workflows.
  • Excellent analytical, problem-solving, documentation, and communication skills.

Nice to Have / Bonus

  • Exposure to performance testing tools like JMeter or Gatling.
  • Programming experience in Java, Python, or JavaScript for automation scripting.
  • ISTQB or equivalent QA certification.

Why Join Us

  • Opportunity to work on mission-critical banking applications.
  • Hands-on exposure to modern automation tools and frameworks.
  • Work in a collaborative, agile, and fast-paced environment.
  • Contribute to cutting-edge CI/CD automation and testing strategies.




Read more
Sagri
Bengaluru (Bangalore)
5 - 8 yrs
₹14L - ₹15L / yr
skill iconReact.js
skill iconPython
skill iconNextJs (Next.js)
skill iconAmazon Web Services (AWS)
TypeScript
+3 more
  • 5+ years full-stack development
  • Proficiency in AWS cloud-native development
  • Experience with microservices & async architectures
  • Strong TypeScript proficiency
  • Strong Python proficiency
  • React.js expertise
  • Next.js expertise
  • PostgreSQL + PostGIS experience
  • GraphQL development experience
  • Prisma ORM experience
  • Experience in B2C product development (Retail/E-commerce)
  • Looking for candidates based out of Bangalore only


CTC: up to 40 LPA


If interested kindly share your updated resume at 82008 31681


Read more
AryuPay Technologies
Bhavana Chaudhari
Posted by Bhavana Chaudhari
Bengaluru (Bangalore), Bhopal
7 - 10 yrs
₹9L - ₹12L / yr
skill iconAndroid Development
Frontend
Backend testing
Fullstack Developer
skill iconPython
+8 more

Job Description -Technical Project Manager

Job Title: Technical Project Manager

Location: Bhopal / Bangalore (On-site)

Experience Required: 7+ Years

Industry: Fintech / SaaS / Software Development

Role Overview

We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.

Key Responsibilities

Project & Team Management

  • Manage daily tasks for Android, Frontend, and Backend developers
  • Conduct daily stand-ups, weekly planning, and reviews
  • Track progress, identify blockers, and ensure timely delivery
  • Maintain sprint boards, task estimations, and timelines

Technical Requirement Translation

  • Convert business requirements into technical tasks
  • Communicate requirements clearly to developers
  • Create user stories, flow diagrams, and PRDs
  • Ensure requirements are understood and implemented correctly

Quality & Build Review

  • Validate build quality, UI/UX flow, functionality
  • Check API integrations, errors, performance issues
  • Ensure coding practices and architecture guidelines are followed
  • Perform preliminary QA before handover to testing or clients

Issue Resolution

  • Identify development issues early
  • Coordinate with developers to fix bugs
  • Escalate major issues to founders with clear insights

Reporting & Documentation

  • Daily/weekly reports to management
  • Sprint documentation, release notes
  • Maintain project documentation & version control processes

Cross-Team Communication

  • Act as the single point of contact for management
  • Align multiple tech teams with business goals
  • Coordinate with HR and operations for resource planning

Required Skills

  • Strong understanding of Android, Web (Frontend/React), Backend development flows
  • Knowledge of APIs, Git, CI/CD, basic testing
  • Experience with Agile/Scrum methodologies
  • Ability to review builds and suggest improvements
  • Strong documentation skills (Jira, Notion, Trello, Asana)
  • Excellent communication & leadership
  • Ability to handle pressure and multiple projects

Good to Have

  • Prior experience in Fintech projects
  • Basic knowledge of UI/UX
  • Experience in preparing FSD/BRD/PRD
  • QA experience or understanding of test cases

Salary Range: 9 to 12 LPA

Read more
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹5L / yr
skill iconPython
skill iconRust
PyTorch
model context protocol
Generative AI
+3 more

ML Intern

Hyperworks Imaging is a cutting-edge technology company based out of Bengaluru, India since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real world problems. We are rapidly growing, working with multiple companies around the world.

JOB OVERVIEW

We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future. 

The ideal candidate will work on the complete ML pipeline, from extraction, transformation, and analysis of data to developing novel ML algorithms. The candidate will implement the latest research papers and work closely with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.

RESPONSIBILITIES:

  • Create AI agents using the Model Context Protocol (MCP), Claude Code, DSPy, etc.
  • Develop custom evals for AI agents.
  • Build and maintain ML pipelines
  • Optimize and evaluate ML models to ensure accuracy and performance (a minimal training/evaluation sketch follows this list).
  • Define system requirements and integrate ML algorithms into cloud based workflows.
  • Write clean, well-documented, and maintainable code following best practices
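
As a small, self-contained illustration of the model training and evaluation work above, the sketch below fits a toy PyTorch classifier on random data; the architecture, data, and metric are placeholders, not a Hyperworks pipeline.

    import torch
    from torch import nn

    # Toy model and data standing in for a real architecture/dataset.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(64, 16)           # toy features
    y = torch.randint(0, 2, (64,))    # toy labels

    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
    print(f"train accuracy: {accuracy:.2f}")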


REQUIREMENTS:

  • 1-3+ years of experience in data science, machine learning, or a similar role.
  • Demonstrated expertise with Python, PyTorch, and TensorFlow.
  • Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Physics
  • Has done coursework in Linear Algebra, Probability, Image Processing, Deep Learning and Machine Learning.
  • Has demonstrated experience with the Model Context Protocol (MCP), DSPy, AI agents, MLOps, etc.


WHO CAN APPLY:

Only those candidates will be considered who,

  • have relevant skills and interests
  • can commit full time
  • can show prior work and deployed projects
  • can start immediately

Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.

SALARY DETAILS: Commensurate with experience.

JOINING DATE: Immediate

JOB TYPE: Full-time




Read more
NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹30L / yr
skill iconPython
skill iconReact.js
skill iconPostgreSQL
TypeScript
skill iconNextJs (Next.js)
+11 more


Job Summary

We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using Typescript & Next.js and developing robust backend services using Python (FastAPI/Django).

This role is crucial in shaping product experiences and driving innovation at scale.


Mandatory Candidate Background

  • Experience working in product-based companies only
  • Strong academic background
  • Stable work history
  • Excellent coding skills and hands-on development experience
  • Strong foundation in Data Structures & Algorithms (DSA)
  • Strong problem-solving mindset
  • Understanding of clean architecture and code quality best practices


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications
  • Build responsive, performant, user-friendly UIs using Typescript & Next.js
  • Develop APIs and backend services using Python (FastAPI/Django)
  • Collaborate with product, design, and business teams to translate requirements into technical solutions
  • Ensure code quality, security, and performance across the stack
  • Own features end-to-end: architecture, development, deployment, and monitoring
  • Contribute to system design, best practices, and the overall technical roadmap


Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience
  • Strong expertise in Typescript / Next.js OR Python (FastAPI, Django) — must be familiar with both areas
  • Experience building RESTful APIs and microservices
  • Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
  • Strong debugging, optimization, and problem-solving abilities
  • Comfortable working in fast-paced startup environments


Good-to-Have:

  • Experience with containerization (Docker/Kubernetes)
  • Exposure to message queues or event-driven architectures
  • Familiarity with modern DevOps and observability tooling


Read more
Banking Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mangalore, Pune, Mumbai
3 - 5 yrs
₹8L - ₹11L / yr
skill iconData Analytics
SQL
Relational Database (RDBMS)
skill iconJava
skill iconPython
+1 more

Required Skills: Strong SQL Expertise, Data Reporting & Analytics, Database Development, Stakeholder & Client Communication, Independent Problem-Solving & Automation Skills

 

Review Criteria

  • Must have strong SQL skills (queries, optimization, procedures, triggers)
  • Must have advanced Excel skills
  • Should have 3+ years of relevant experience
  • Should have reporting and dashboard creation experience
  • Should have database development & maintenance experience
  • Must have strong communication for client interactions
  • Should have the ability to work independently
  • Willingness to work from client locations

 

Description

Who is an ideal fit for us?

We seek professionals who are analytical, demonstrate self-motivation, exhibit a proactive mindset, and possess a strong sense of responsibility and ownership in their work.

 

What will you get to work on?

As a member of the Implementation & Analytics team, you will:

  • Design, develop, and optimize complex SQL queries to extract, transform, and analyze data
  • Create advanced reports and dashboards using SQL, stored procedures, and other reporting tools
  • Develop and maintain database structures, stored procedures, functions, and triggers
  • Optimize database performance by tuning SQL queries and indexing to handle large datasets efficiently
  • Collaborate with business stakeholders and analysts to understand analytics requirements
  • Automate data extraction, transformation, and reporting processes to improve efficiency


What do we expect from you?

For the SQL/Oracle Developer role, we are seeking candidates with the following skills and expertise:

  • Proficiency in SQL (window functions, stored procedures) and advanced MS Excel
  • More than 3 years of relevant experience
  • Java/Python experience is a plus but not mandatory
  • Strong communication skills to interact with customers and understand their requirements
  • Capable of working independently with minimal guidance, showcasing self-reliance and initiative
  • Previous experience in automation projects is preferred
  • Work from office: Bangalore/Navi Mumbai/Pune/client locations

 

Read more
ClanX

at ClanX

2 candid answers
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
5 - 7 yrs
Upto ₹45L / yr (Varies)
skill iconPython
Natural Language Processing (NLP)
skill iconMachine Learning (ML)
OCR
Large Language Models (LLM)
+1 more

This opportunity through ClanX is for Parspec (direct payroll with Parspec)


Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.


Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.


Company Details:

Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.


Requirements:

  • Bachelor’s or Master’s degree in Science or Engineering.
  • 5-7 years of experience in ML and data science.
  • Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
  • Strong understanding of foundational models and transformers.
  • Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
  • Recent experience working with large datasets.
  • Python experience with ML libraries like NumPy, pandas, scikit-learn, Matplotlib, NLTK, and others.
  • Experience with frameworks like Hugging Face, spaCy, BERT, TensorFlow, PyTorch, OpenRouter, or Modal (a minimal Hugging Face example is sketched below).
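
As a minimal example of the kind of Hugging Face usage referenced above, the sketch below runs a default sentiment pipeline; the model it downloads and the input sentence are illustrative only.

    from transformers import pipeline

    # Uses the library's default sentiment model; swap in another task/model as needed.
    classifier = pipeline("sentiment-analysis")
    result = classifier("The new quoting workflow feels noticeably faster.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]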


Requirements:

Must haves

  • Design, develop, and deploy NLP, CV, and recommendation systems
  • Train and implement deep learning models
  • Research and explore novel ML architectures
  • Build and maintain end-to-end ML pipelines
  • Collaborate across product, design, and engineering teams
  • Work closely with business stakeholders to shape product features
  • Ensure high scalability and performance of AI solutions
  • Uphold best practices in engineering and contribute to a culture of excellence
  • Actively participate in R&D and innovation within the team

Good to haves

  • Experience building scalable AI pipelines for extracting structured data from unstructured sources.
  • Experience with cloud platforms, containerization, and managed AI services.
  • Knowledge of MLOps practices, CI/CD, monitoring, and governance.
  • Experience with AWS or Django.
  • Familiarity with databases and web application architecture.
  • Experience with OCR or PDF tools.


Responsibilities:

  • Design, develop, and deploy NLP, CV, and recommendation systems
  • Train and implement deep learning models
  • Research and explore novel ML architectures
  • Build and maintain end-to-end ML pipelines
  • Collaborate across product, design, and engineering teams
  • Work closely with business stakeholders to shape product features
  • Ensure high scalability and performance of AI solutions
  • Uphold best practices in engineering and contribute to a culture of excellence
  • Actively participate in R&D and innovation within the team


Interview Process

  1. Technical interview (coding, ML concepts, project walkthrough)
  2. System design and architecture round
  3. Culture fit and leadership interaction
  4. Final offer discussion
Read more
ClanX

at ClanX

2 candid answers
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
3 - 4.5 yrs
Upto ₹25L / yr (Varies)
skill iconMachine Learning (ML)
skill iconPython
Computer Vision
Natural Language Processing (NLP)
TensorFlow

This opportunity through ClanX is for Parspec (direct payroll with Parspec)


Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.


Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.


Company Details:

Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.


Requirements:

  • 3 to 4 years of relevant experience in ML and AI roles
  • Strong grasp of ML, deep learning, and model deployment
  • Proficient in Python and libraries like numpy, pandas, sklearn, etc.
  • Experience with TensorFlow/Keras or PyTorch
  • Familiar with AWS/GCP platforms
  • Strong coding skills and ability to ship production-ready solutions
  • Bachelor's/Master's in Engineering or related field
  • Curious, self-driven, and a fast learner
  • Passionate about NLP, LLMs, and state-of-the-art AI technologies
  • Comfortable with collaboration across globally distributed teams

Preferred (Not Mandatory):

  • Experience with Django, databases, and full-stack environments
  • Familiarity with OCR and PDF processing
  • Competitive programming or Kaggle participation
  • Prior work with distributed teams across time zones


Responsibilities:

  • Design, develop, and deploy NLP, CV, and recommendation systems
  • Train and implement deep learning models
  • Research and explore novel ML architectures
  • Build and maintain end-to-end ML pipelines
  • Collaborate across product, design, and engineering teams
  • Work closely with business stakeholders to shape product features
  • Ensure high scalability and performance of AI solutions
  • Uphold best practices in engineering and contribute to a culture of excellence
  • Actively participate in R&D and innovation within the team


Interview Process

  1. Technical interview (coding, ML concepts, project walkthrough)
  2. System design and architecture round
  3. Culture fit and leadership interaction
  4. Final offer discussion
Read more
Binocs Labs Pvt Ltd
Bengaluru (Bangalore)
3 - 6 yrs
₹25L - ₹40L / yr
skill iconPython
skill iconNodeJS (Node.js)

Apply- https://lnkd.in/gVHVTMG6

About Us

Binocs.co empowers institutional lenders with next-generation loan management software, streamlining the entire loan lifecycle and facilitating seamless interaction among stakeholders.

Team: Binocs.co is led by a passionate team with extensive experience in financial technology, lending, AI, and software development.

Investors: Our journey is backed by renowned investors who share our vision for transforming the loan management landscape: Beenext, Arkam Ventures, Accel, Saison Capital, Blume Ventures, Premji Invest, and Better Capital.

What we're looking for

We seek a motivated, talented, and intelligent individual who shares our vision of being a changemaker. We value individuals who are dissatisfied with the status quo, strive to make improvements, and envision making significant contributions. We look for those who embrace challenges and dedicate themselves to solutions. We seek individuals who push for data-driven decisions, are unconstrained by titles, and value collaboration. We are building a fast-paced team to shape various business and technology aspects.

Responsibilities

  • Be a part of the initial team to define and set up a best-in-class digital platform for the Private Credit industry, and take full ownership of the components of the digital platform
  • You will build robust and scalable web-based applications and need to think of platforms & reuse
  • Drive and actively contribute to High-Level Designs (HLDs) and Low-Level Designs (LLDs).
  • Collaborate with frontend developers, product managers, and other stakeholders to understand requirements and translate them into technical specifications.
  • Mentor team members in adopting effective coding practices. Conduct comprehensive code reviews, focusing on both functional and non-functional aspects.
  • Ensure the security, performance, and reliability of backend systems through proper testing, monitoring, and optimization.
  • Participate in code reviews, sprint planning, and agile development processes to maintain code quality and project timelines.
  • Simply, be an owner of the platform and do whatever it takes to get the required output for customers
  • Be curious about product problems and have an open mind to dive into new domains, e.g., gen-AI.
  • Stay up-to-date with the latest development trends, tools, and technologies.

Qualifications

  • 3-5 years of experience in backend development, with a strong track record of successfully architecting and implementing scalable and high-performance backend systems.
  • Proficiency in at least one backend programming language (e.g., Python, Golang, Node.js, Java) and its tech stack to write maintainable, scalable, unit-tested code.
  • Good understanding of databases (e.g. MySQL, PostgreSQL) and NoSQL (e.g. MongoDB, Elasticsearch, etc)
  • Solid understanding of RESTful API design principles and best practices.
  • Experience with multi-threading and concurrency programming
  • Extensive object-oriented design experience, knowledge of design patterns, and a strong ability to design intuitive module- and class-level interfaces.
  • Experience with cloud computing platforms and services (e.g., AWS, Azure, Google Cloud Platform)
  • Knowledge of Test Driven Development

Good to have

  • Experience with microservices architecture
  • Knowledge of serverless computing and event-driven architectures (e.g., AWS Lambda, Azure Functions)
  • Understanding of DevOps practices and tools for continuous integration and deployment (CI/CD).
  • Contributions to open-source projects or active participation in developer communities.
  • Experience working with LLMs and AI technologies

Benefits

By joining Binocs, you’ll become part of a vibrant and dynamic team dedicated to disrupting the fintech space with cutting-edge solutions. We offer a stimulating work environment where innovation is at the heart of everything we do. Our competitive compensation package, inclusive of equity, is designed to reward your contributions to our success.






Read more
Srijan Technologies
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 11 yrs
₹18L - ₹30L / yr
skill iconPython
skill iconDjango
FastAPI
skill iconReact.js
skill iconMongoDB

About US:-

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.

We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners.

 

Experience Range: 5-10 Years

Role: Full Stack Developer


Key Responsibilities:

  • Develop and maintain scalable web applications using React for the frontend and Python (FastAPI/Flask/Django) for the backend; a minimal FastAPI sketch follows this list.
  • Work with SQL databases such as Postgres, and with MongoDB, to design and manage robust data structures.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Ensure the performance, quality, and responsiveness of applications.
  • Identify and fix bottlenecks and bugs.

  • Others: AWS, Snowflake, Azure, JIRA, CI/CD pipelines
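
As a rough sketch of the Python backend work listed above, here is a minimal FastAPI service that a React frontend could call; the Item model and routes are illustrative assumptions, not the actual product API.

    from typing import List

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        name: str
        price: float

    # In-memory store standing in for Postgres/MongoDB persistence.
    items: List[Item] = []

    @app.post("/items", response_model=Item)
    def create_item(item: Item) -> Item:
        items.append(item)
        return item

    @app.get("/items", response_model=List[Item])
    def list_items() -> List[Item]:
        return items

Run locally with uvicorn main:app --reload (assuming the file is named main.py); a React client can then consume the /items endpoints.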


Key Requirements:

  • React: Extensive experience in building complex frontend applications.
  • Must Have: Experience with Python (FastAPI/Flask/Django).
  • Required cloud experience: AWS or Azure.
  • Experience with databases like Postgres (SQL) and MongoDB.
  • Basic understanding of Data Fabric – Good to have
  • Ability to work independently and as part of a team.
  • Excellent problem-solving skills and attention to detail.


Frontend Technology Stack (Developers)

  • Framework: React 18+ with TypeScript, Next.js
  • State Management: Redux Toolkit with RTK Query for efficient data fetching
  • UI Components: Material-UI v5 with custom enterprise design system
  • Visualization: D3.js, Chart.js, and Plotly for advanced analytics dashboards
  • Accessibility: WCAG 2.1 AA compliance with automated testing


Backend Technology Stack (Developers)

  • Microservices: Python, Node.js (latest) with Express.js and TypeScript
  • API Gateway: API Gateway with rate limiting and authentication
  • Message Queuing: Apache Kafka for real-time data streaming
  • Caching: Redis Cluster for high-performance caching layer
  • Search Engine: Elasticsearch for full-text search and analytics
  • Authentication: OAuth 2.0 / OpenID Connect with enterprise SSO integration


 What We Offer 

  •  Professional Development and Mentorship.
  •  Hybrid work mode with remote friendly workplace. (6 times in a row Great Place To Work Certified).
  •  Health and Family Insurance.
  •  40+ Leaves per year along with maternity & paternity leaves.
  •  Wellness, meditation and Counselling sessions.


Read more
Suventure Services Private Limited
Husnara Begum Shaik
Posted by Husnara Begum Shaik
Bengaluru (Bangalore)
5 - 7 yrs
₹10L - ₹15L / yr
skill iconPython
skill iconDjango
Data Structures

Hello Candidate,

Greetings from Suventure!

PLease find the Job description below.


Job Title: Senior Python Developer (with or without Rust API experience)

Location: Bangalore, India

Company: Suventure Services Pvt. Ltd.

Work Mode: Work From Office (WFO)

Experience: 5+ Years


About Suventure Services Pvt. Ltd.

Suventure is a technology-driven organization delivering end-to-end solutions across Product Development, Cloud, AI, Analytics, and Mobility. We work with global clients to build innovative, scalable, and secure applications that power digital transformation and business growth.

Job Summary

We are seeking a highly skilled and motivated Python Developer with over 5 years of hands-on experience in backend development, API design, and scalable application architecture. Candidates with exposure to Rust API development will have an added advantage, though it is not mandatory. You’ll collaborate closely with cross-functional teams to deliver high-quality, performant, and maintainable code.

Key Responsibilities

  • Design, develop, and maintain scalable backend applications using Python (Flask / FastAPI / Django).
  • Develop RESTful or GraphQL APIs and ensure seamless data integration between systems.
  • Work on microservices architecture and implement clean, modular, and testable code.
  • Optimize application performance, ensuring high availability and responsiveness.
  • Collaborate with DevOps, frontend, and product teams for feature development and release cycles.
  • Participate in code reviews, troubleshooting, debugging, and system enhancements.
  • (Optional) Integrate and maintain APIs written in Rust or other high-performance languages.
  • Write automated unit tests and follow best practices for CI/CD and version control (Git).

Required Skills & Experience

  • Minimum 5 years of professional experience in Python development.
  • Strong understanding of OOP, design patterns, data structures, and software engineering principles.
  • Hands-on experience with frameworks such as Flask, Django, or FastAPI.
  • Experience with RESTful APIs, microservices, and asynchronous programming (a minimal async sketch follows this list).
  • Good understanding of SQL/NoSQL databases (MySQL, PostgreSQL, MongoDB, Redis).
  • Knowledge of Docker, Kubernetes, or AWS cloud services is a plus.
  • Familiarity with Rust and its API ecosystem is an added advantage.
  • Excellent problem-solving skills and ability to work in a fast-paced environment.
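
As a small sketch of the asynchronous programming mentioned above, the example below fans out concurrent HTTP calls with asyncio and httpx; the URLs are placeholders for downstream microservices.

    import asyncio

    import httpx

    URLS = [
        "https://api.example.com/accounts",   # placeholder services
        "https://api.example.com/orders",
    ]

    async def fetch_status(client: httpx.AsyncClient, url: str) -> int:
        resp = await client.get(url, timeout=5.0)
        return resp.status_code

    async def main() -> None:
        async with httpx.AsyncClient() as client:
            statuses = await asyncio.gather(*(fetch_status(client, u) for u in URLS))
            print(dict(zip(URLS, statuses)))

    if __name__ == "__main__":
        asyncio.run(main())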


Read more
Newpage Solutions

at Newpage Solutions

2 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Upto ₹45L / yr (Varies)
skill iconPython
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Generative AI
skill iconDjango
+7 more

About Newpage Solutions

Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organizations—including pharmaceutical, biotech, and healthcare leaders—to build transformative AI and data-driven technologies addressing real-world health challenges.

From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.

We are proud to be Great Place to Work® certified for three consecutive years, hold a top Glassdoor rating, and were named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.

We foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.

Newpage looks for candidates who are invested in long-term impact. Applications with a pattern of frequent job changes may not align with the values we prioritize.


Your Mission

We’re seeking a highly experienced, technically exceptional AI Development Lead to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation—from model selection and orchestration design to scalable backend implementation—while collaborating with cross-functional teams to transform AI research into production-ready solutions.

This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.


What You’ll Do

  • Architect, build, and optimize production-grade Generative AI applications using modern frameworks such as LangChain, LlamaIndex, Semantic Kernel, or custom orchestration layers.
  • Lead the design of Agentic AI frameworks (Agno, AutoGen, CrewAI, etc.), enabling intelligent, goal-driven workflows with memory, reasoning, and contextual awareness.
  • Develop and deploy Retrieval-Augmented Generation (RAG) systems integrating LLMs, vector databases, and real-time data pipelines.
  • Design robust prompt engineering and refinement frameworks to improve reasoning quality, adaptability, and user relevance.
  • Deliver high-performance backend systems using Python (FastAPI, Flask, or similar) aligned with SOLID principles, OOP, and clean architecture.
  • Own the complete SDLC, including design, implementation, code reviews, testing, CI/CD, observability, and post-deployment monitoring.
  • Use AI-assisted environments (e.g., Cursor, GitHub Copilot, Claude Code) to accelerate development while maintaining code quality and maintainability.
  • Collaborate closely with MLOps engineers to containerize, scale, and deploy models using Docker, Kubernetes, and modern CI/CD pipelines.
  • Integrate APIs from OpenAI, Anthropic, Cohere, Mistral, or open-source LLMs (Llama 3, Mixtral, etc.).
  • Leverage vector databases such as FAISS, Pinecone, Weaviate, or Chroma for semantic search, RAG, and context retrieval; a minimal FAISS retrieval sketch follows this list.
  • Develop custom tools, libraries, and frameworks that improve development velocity and reliability across AI teams.
  • Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
  • Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
  • Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.
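
As a minimal illustration of the vector-database retrieval step in a RAG pipeline, the sketch below indexes toy embeddings with FAISS and pulls the nearest documents; a real system would replace the random vectors with embeddings from an embedding model or LLM API.

    import faiss
    import numpy as np

    dim = 64
    rng = np.random.default_rng(0)
    doc_vectors = rng.random((1000, dim)).astype("float32")   # stand-in document embeddings

    index = faiss.IndexFlatL2(dim)      # exact L2 search over document vectors
    index.add(doc_vectors)

    query = rng.random((1, dim)).astype("float32")            # stand-in query embedding
    distances, ids = index.search(query, 5)                   # top-5 nearest documents
    print(ids[0])  # ids of documents whose text would be added to the LLM prompt as context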

What You Bring

  • 8+ years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
  • Python experience with strong grasp of OOP, SOLID, and scalable microservice architecture.
  • Proven track record developing and deploying GenAI/LLM-based systems in production.
  • Hands-on work with LangChain, LlamaIndex, or custom orchestration frameworks.
  • Deep familiarity with OpenAI, Anthropic, Hugging Face, or open-source LLM APIs.
  • Advanced understanding of prompt construction, optimization, and evaluation techniques.
  • End-to-end implementation experience using vector databases and retrieval pipelines.
  • Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
  • Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
  • Are obsessed with clean code, system scalability, and performance optimization.
  • Can balance rapid prototyping with long-term maintainability.
  • Excel at working independently while collaborating effectively across teams.
  • Stay ahead of the curve on new AI models, frameworks, and best practices.
  • Have a founder’s mindset and love solving ambiguous, high-impact technical challenges.
  • Bachelor’s or Master’s in Computer Science, Machine Learning, or a related technical discipline.


What We Offer

At Newpage, we’re building a company that works smart and grows with agility—where driven individuals come together to do work that matters. We offer:

  • A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
  • Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
  • Balance that lasts – We respect your time and support a healthy integration of work and life.
  • Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
  • Meaningful rewards – Competitive compensation that recognizes both contribution and potential.
Read more
Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
skill iconPython
cicd
skill iconKubernetes
skill iconDocker
+1 more

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid (3 Days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
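
As an illustration of the Databricks job automation described above, the minimal sketch below triggers an existing job run through the Databricks Jobs REST API (2.1); the host, token handling, and job id are placeholders and should be adapted to the workspace (e.g., secrets from Azure Key Vault or pipeline variables).

    import os

    import requests

    host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<workspace-id>.azuredatabricks.net
    token = os.environ["DATABRICKS_TOKEN"]  # personal access or AAD token from a secret store

    resp = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {token}"},
        json={"job_id": 123},               # hypothetical job id
        timeout=30,
    )
    resp.raise_for_status()
    print("run_id:", resp.json()["run_id"])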


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
skill iconPython
RestAPI
FastAPI
RabbitMQ
Apache Kafka
+3 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.

• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.

• You measure success by your own growth, not external validation.

• You thrive on challenges, not on perks or financial rewards.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and maintain scalable backend systems using Python.

• Design and implement REST APIs and socket-based communication.

• Optimize code for speed, performance, and reliability.

• Collaborate with frontend teams to integrate server-side logic.

• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.

• Build fault-tolerant, multi-producer/consumer systems (a minimal consumer sketch follows this list).
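
As a minimal sketch of the message-queue side of such systems, here is a RabbitMQ consumer using pika; the queue name, host, and handler are placeholders, not Tradelab's actual topology.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="orders", durable=True)       # placeholder queue

    def handle(ch, method, properties, body):
        print("received:", body.decode())
        ch.basic_ack(delivery_tag=method.delivery_tag)        # ack only after processing

    channel.basic_qos(prefetch_count=10)                      # fair dispatch across consumers
    channel.basic_consume(queue="orders", on_message_callback=handle)
    channel.start_consuming()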


Must-Have Skills:

• 3–4 years of experience in Python and backend development.

• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).

• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.

• Bachelor’s degree in Computer Science or related field.


Nice-to-Have Skills:

• Past experience in fintech, trading systems, or algorithmic trading.

• Experience with GoLang, C/C++, Erlang, or Elixir.

• Exposure to trading, fintech, or low-latency systems.

• Familiarity with microservices and CI/CD pipelines.



Read more
HealthAsyst

at HealthAsyst

1 product
1 recruiter
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
12 - 14 yrs
Upto ₹45L / yr (Varies)
Automation
skill iconJava
skill iconPython
Selenium

Experience:

  • 12-14 years in software testing, with 5+ years in automation testing.
  • Minimum 3+ years of experience managing QA teams.
  • Experience in automating a complex, highly configurable product with multiple external integrations
  • Experience in BDD automation framework
  • Experience in developing/enhancing and managing automation framework
  • Strong expertise in test automation tools (Selenium, Playwright, Appium, JMeter).
  • Preferably experience with US healthcare standards (HIPAA, HL7, FHIR).
  • Ability to get hands-on and find technical solutions for complex scenarios

Technical Skills:

  • Proficiency in programming languages like Java, JavaScript.
  • Solid understanding of DevOps practices, CI/CD pipelines, and tools like Jenkins or Azure DevOps.
  • Expertise in API testing using tools like Postman or RestAssured.

Soft Skills:

  • Strong problem-solving and analytical abilities.
  • Excellent communication and collaboration skills.
  • Ability to prioritize and manage multiple projects in a fast-paced environment.


Employee Benefits: 

HealthAsyst provides the following health, and wellness benefits to cover a range of physical and mental well-being for its employees. 

  • Bi-Annual Salary Reviews 
  • GMC (Group Mediclaim): Provides Insurance coverage of Rs. 3 lakhs + a corporate buffer of 2 Lakhs per family. This is a family floater policy, and the company covers all the employees, spouse, and up to two children 
  • Employee Wellness Program- HealthAsyst offers unlimited online doctor consultations for self and family from a range of 31 specialties for no cost to employees. And OPD consultations with GP Doctors are available in person for No Cost to employees 
  • GPA (Group Personal Accident): Provides insurance coverage of Rs. 20 lakhs to the employee against the risk of death/injury during the policy period sustained due to an accident 
  • GTL (Group Term Life): Provides life term insurance protection to employees in case of death. The coverage is one time of the employee’s CTC 
  • Employee Assistance Program: HealthAsyst offers complete confidential counselling services to employees & family members for mental wellbeing 
  • Sponsored upskills program for certifications/higher education up to 1 lakh 
  • Flexible Benefits Plan – covering a range of components like 
  • National Pension System. 
  • Internet/Mobile Reimbursements. 
  • Fuel Reimbursements. 
  • Professional Education Reimbursements. 
  • Flexible working hours 
  • 3 Day Hybrid Model 
Read more
IndArka Energy Pvt Ltd

at IndArka Energy Pvt Ltd

3 recruiters
Mita Hemant
Posted by Mita Hemant
Bengaluru (Bangalore)
3 - 4 yrs
₹18L - ₹20L / yr
skill iconPython
skill iconDjango
Data Structures
Algorithms

About us

Arka Energy is focused on changing the paradigm in energy, creating innovative renewable energy solutions for residential customers. With custom product design and an innovative approach to marketing the product solution, Arka aims to be a leading provider of energy solutions in the residential solar segment. Arka designs and develops end-to-end renewable energy solutions with teams in Bangalore and the Bay Area.

The product is 3D simulation software that replicates rooftops/commercial sites, places solar panels, and estimates the solar energy generated.

What are we looking for?

  • As a backend developer, you will be responsible for developing solutions that enable Arka solutions to be easily adopted by customers.
  • Attention to detail and willingness to learn are a big part of this position.
  • Commitment to problem solving and innovative design approaches are important.

Role and responsibilities

  • Develop cloud-based Python Django software products
  • Work closely with UX and front-end developers
  • Participate in architectural, design, and product discussions
  • Design and create RESTful APIs for internal and partner consumption
  • Work in an agile environment with an excellent team of engineers
  • Own/maintain code, everything from development to fixing bugs/issues
  • Deliver clean, reusable, high-quality code
  • Facilitate problem diagnosis and resolution for issues reported by customers
  • Deliver to schedule and timelines based on an Agile/Scrum-based approach
  • Develop new features and ideas to make the product better and more user-centric
  • Independently write code and test major features, and work jointly with other team members to deliver complex changes
  • Create algorithms from scratch and implement them in the software (a tiny example follows this section)
  • Code review and end-to-end unit testing
  • Guide and mentor junior engineers
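
As a tiny, self-contained example of the algorithm work above, here is the standard rooftop energy estimate (energy = area * efficiency * irradiance * days * performance ratio); the default values are generic assumptions for illustration, not Arka's actual model.

    def estimate_annual_kwh(panel_area_m2: float, efficiency: float,
                            daily_irradiance_kwh_m2: float = 4.5,
                            performance_ratio: float = 0.8) -> float:
        """Rough annual yield of a rooftop array using the common E = A * r * H * PR formula."""
        return panel_area_m2 * efficiency * daily_irradiance_kwh_m2 * 365 * performance_ratio

    # 20 m2 of 21%-efficient panels under these toy assumptions -> about 5,519 kWh per year.
    print(round(estimate_annual_kwh(panel_area_m2=20, efficiency=0.21), 1))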



SKILL REQUIREMENTS

  • Solid database skills in a relational database (e.g., PostgreSQL, MySQL)
  • Knowledge of how to build and use RESTful APIs
  • Strong knowledge of version control (e.g., Git, SVN)
  • Experience deploying Python applications into production
  • Azure or Google Cloud infrastructure knowledge is a plus
  • Strong drive to learn new technologies, and the ability to learn them quickly
  • Continuous lookout for new and creative solutions to implement new features or improve old ones
  • Data structures, algorithms, Django, and Python

 

 

 

Good to have

  • Knowledge of GenAI applications

 

 

Key Benefits

  • Competitive development environment
  • Engagement in full-scale systems development
  • Competitive salary
  • Flexible working environment
  • Equity in an early-stage start-up
  • Patent filing bonuses
  • Health insurance for employee + family

 

Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹20L / yr
yaml
Artificial Intelligence (AI)
Azure devops
Large Language Models (LLM) tuning
skill iconJava
+3 more

AI/LLM Test Automation Engineer (SDET)

Location: Bangalore (Hybrid preferred)

Experience: 5-8 Years


Job Summary


We are seeking a Senior Test Automation Engineer (SDET) with expertise in AI/LLM testing and Azure DevOps CI/CD to build robust automation frameworks for cutting-edge AI applications. The role combines deep programming skills (Java/Python), modern DevOps practices, and specialized LLM testing to ensure high-quality AI product delivery.


Key Responsibilities

  • Design, develop, and maintain automation frameworks using Java/Python for web, mobile, API, and backend testing.
  • Create and manage YAML-based CI/CD pipelines in Azure DevOps for end-to-end testing workflows.
  • Perform AI/LLM testing including prompt validation, content generation evaluation, model behavior analysis, and bias detection (a minimal prompt-validation test is sketched after this list).
  • Write and maintain BDD Cucumber feature files integrated with automation suites.
  • Execute manual + automated testing across diverse application layers with focus on edge cases.
  • Implement Git branching strategies, code reviews, and repository best practices.
  • Track defects and manage test lifecycle using ServiceNow or similar tools.
  • Conduct root-cause analysis, troubleshoot complex issues, and drive continuous quality improvements.
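
As a rough illustration of the prompt-validation testing described above, the sketch below asserts simple properties of a generated summary; generate_release_summary is a hypothetical stand-in for a real LLM call (it returns a canned string so the test structure can run offline), and the checks are illustrative evaluation criteria.

    import re

    def generate_release_summary(prompt: str) -> str:
        # Placeholder for a real LLM call (OpenAI, Anthropic, or an internal gateway).
        return "The login module adds passwordless sign-in and faster session refresh."

    def test_summary_mentions_feature_and_stays_short():
        prompt = "Summarize the release notes for the login module in under 50 words."
        output = generate_release_summary(prompt)

        assert "login" in output.lower()               # relevance: the requested feature is covered
        assert len(output.split()) <= 50               # respects the length constraint in the prompt
        assert not re.search(r"(?i)as an ai", output)  # guards against boilerplate phrasing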


Mandatory Skills & Experience

✅ 5+ years SDET/Automation experience

✅ Java/Python scripting for test frameworks (Selenium, REST Assured, Playwright)

✅ Azure DevOps YAML pipelines (CI/CD end-to-end)

✅ AI/LLM testing (prompt engineering, model validation, RAG testing)

✅ Cucumber BDD (Gherkin feature files + step definitions)

✅ Git (branching, PRs, GitFlow)

✅ ServiceNow/Jira defect tracking

✅ Manual + Automation testing (web/mobile/API/backend)


Technical Stack


Programming: Java, Python, JavaScript

CI/CD: Azure DevOps, YAML Pipelines

Testing: Selenium, Playwright, REST Assured, Postman

BDD: Cucumber (Gherkin), JBehave

AI/ML: Prompt validation, LLM APIs (OpenAI, LangChain)

Version Control: Git, GitHub/GitLab

Defect Tracking: ServiceNow, Jira, Azure Boards


Preferred Qualifications

  • Exposure to AI testing frameworks (LangSmith, Promptfoo)
  • Experience with containerization (Docker) and Kubernetes
  • Knowledge of performance testing for AI workloads
  • AWS/GCP cloud testing experience
  • ISTQB or relevant QA certifications


What We Offer

  • Work on next-gen AI/LLM products with global impact
  • Modern tech stack with Azure-native DevOps
  • Flexible hybrid/remote work model
  • Continuous learning opportunities in AI testing
Read more
Clink

at Clink

2 candid answers
1 product
Hari Krishna
Posted by Hari Krishna
Hyderabad, Bengaluru (Bangalore)
0 - 2 yrs
₹4L - ₹8L / yr
Artificial Intelligence (AI)
Large Language Models (LLM)
skill iconPython
skill iconMachine Learning (ML)
FastAPI
+2 more

Role Overview

Join our core tech team to build the intelligence layer of Clink's platform. You'll architect AI agents, design prompts, build ML models, and create systems powering personalized offers for thousands of restaurants. High-growth opportunity working directly with founders, owning critical features from day one.


Why Clink?

Clink revolutionizes restaurant loyalty using AI-powered offer generation and customer analytics:

  • ML-driven customer behavior analysis (Pattern detection)
  • Personalized offers via LLMs and custom AI agents
  • ROI prediction and forecasting models
  • Instagram marketing rewards integration


Tech Stack:

  • Python,
  • FastAPI,
  • PostgreSQL,
  • Redis,
  • Docker,
  • LLMs


You Will Work On:

AI Agents: Design and optimize AI agents

ML Models: Build redemption prediction, customer segmentation, and ROI forecasting models (a small segmentation sketch follows this section)

Data & Analytics: Analyze data, build behavior pattern pipelines, create product bundling matrices

System Design: Architect scalable async AI pipelines, design feedback loops, implement A/B testing

Experimentation: Test different LLM approaches, explore hybrid LLM+ML architectures, prototype new capabilities
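
As a small, self-contained sketch of the customer-segmentation work mentioned above, the example below clusters toy customer features with scikit-learn; the feature names and distributions are invented for illustration, not Clink's real data.

    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    customers = pd.DataFrame({
        "visits_per_month": rng.poisson(4, size=500),
        "avg_spend": rng.gamma(shape=2.0, scale=300.0, size=500),
        "offers_redeemed": rng.poisson(1, size=500),
    })

    features = StandardScaler().fit_transform(customers)
    customers["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

    print(customers.groupby("segment").mean().round(1))  # profile each segment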


Must-Have Skills

Technical: 0-2 years AI/ML experience (projects/internships count), strong Python, LLM API knowledge, ML fundamentals (supervised learning, clustering), Pandas/NumPy proficiency

Mindset: Extreme curiosity, logical problem-solving, builder mentality (side projects/hackathons), ownership mindset

Nice to Have: Pydantic, FastAPI, statistical forecasting, PostgreSQL/SQL, scikit-learn, food-tech/loyalty domain interest

Read more
Inferigence Quotient

at Inferigence Quotient

1 recruiter
Neeta Trivedi
Posted by Neeta Trivedi
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
skill iconPython
skill iconNodeJS (Node.js)
FastAPI
skill iconDocker
skill iconJavascript
+16 more

3-5 years of experience as a full-stack developer, with hands-on experience in the following technologies: FastAPI, JavaScript, React.js/Redux, Node.js, Next.js, MongoDB, Python, microservices, Docker, and MLOps.


Experience in cloud architecture using Kubernetes (K8s), Google Kubernetes Engine, authentication and authorisation tools, DevOps tools, and scalable, secure cloud hosting is a significant plus.


Ability to manage a hosting environment and scale applications to handle load changes; knowledge of accessibility and security compliance.

 

Testing of API endpoints.
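
A minimal sketch of API endpoint testing with FastAPI's TestClient; the /health endpoint and app here are placeholders rather than the actual product code.

    from fastapi import FastAPI
    from fastapi.testclient import TestClient

    app = FastAPI()

    @app.get("/health")
    def health():
        return {"status": "ok"}

    client = TestClient(app)

    def test_health_endpoint():
        response = client.get("/health")
        assert response.status_code == 200
        assert response.json() == {"status": "ok"}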

 

Ability to code and create functional web applications and optimise them for improved response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.

 

Expert knowledge of Python and its frameworks with their best practices; expert knowledge of relational and NoSQL databases.


Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.

 

Must be conversant with Agile software development methodology, able to write technical documents and coordinate with test teams, and proficient with Git version control.

Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 8 yrs
₹12L - ₹34L / yr
skill iconPython
skill iconReact.js
skill iconDjango
FastAPI
TypeScript
+7 more

Please note that salary will be based on experience.


Job Title: Full Stack Engineer

Location: Bengaluru (Indiranagar) – Work From Office (5 Days)

Job Summary

We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.

Responsibilities

  • Design, develop, and maintain scalable full-stack applications.
  • Build responsive, high-performance UIs using TypeScript & Next.js.
  • Develop backend services and APIs using Python (FastAPI/Django).
  • Work closely with product, design, and business teams to translate requirements into intuitive solutions.
  • Contribute to architecture discussions and drive technical best practices.
  • Own features end-to-end — design, development, testing, deployment, and monitoring.
  • Ensure robust security, code quality, and performance optimization.

Tech Stack

Frontend: TypeScript, Next.js, React, Tailwind CSS

Backend: Python, FastAPI, Django

Databases: PostgreSQL, MongoDB, Redis

Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD

Other Tools: Git, GitHub, Elasticsearch, Observability tools
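As a rough sketch of the backend half of this stack, here is a minimal FastAPI service with a Pydantic model; the /items route, fields, and in-memory store are illustrative only, not part of the actual product:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# In-memory store standing in for PostgreSQL/MongoDB in this sketch
items: list[Item] = []

@app.post("/items", status_code=201)
def create_item(item: Item) -> Item:
    items.append(item)
    return item

@app.get("/items")
def list_items() -> list[Item]:
    return items
```

Run locally with uvicorn main:app --reload; in the real system a service like this would sit behind the CI/CD and Kubernetes setup listed above.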

Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience.
  • Strong expertise in either frontend (TypeScript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
  • Experience building RESTful services and microservices.
  • Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
  • Strong debugging, problem-solving, and optimization skills.
  • Ability to thrive in fast-paced, high-ownership startup environments.

Good-to-Have:

  • Exposure to Docker, Kubernetes, and observability tools.
  • Experience with message queues or event-driven architecture.


Perks & Benefits

  • Upskilling support – courses, tools & learning resources.
  • Fun team outings, hackathons, demos & engagement initiatives.
  • Flexible Work-from-Home: 12 WFH days every 6 months.
  • Menstrual WFH: up to 3 days per month.
  • Mobility benefits: relocation support & travel allowance.
  • Parental support: maternity, paternity & adoption leave.
Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹35L / yr
skill iconPython
FastAPI
skill iconDjango
TypeScript
skill iconNextJs (Next.js)
+11 more

Job Title: Full Stack Engineer (Python + React.js/Next.js)

Experience: 1 to 6+ Years

Location: Bengaluru (Indiranagar)

Employment: Full-Time

Working Days: 5 Days WFO

Notice Period: Immediate to 30 Days


Role Overview:

We are seeking Full Stack Engineers to build scalable, high-performance fintech products.

You will work on both frontend (TypeScript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.


Main Tech Stack:

Frontend: TypeScript, Next.js, React

Backend: Python, FastAPI, Django

Database: PostgreSQL, MongoDB, Redis

Cloud: AWS/GCP, Docker, Kubernetes

Tools: Git, GitHub, CI/CD, Elasticsearch


Key Responsibilities:

  • Develop full-stack applications with clean, scalable code.
  • Build fast, responsive UIs using TypeScript, Next.js, React.
  • Develop backend APIs using Python, FastAPI, Django.
  • Collaborate with product/design to implement solutions.
  • Own development lifecycle: design → build → deploy → monitor.
  • Ensure performance, reliability, and security.


Requirements:

Must-Have:

  • 1–6+ years of full-stack experience.
  • Product-based company background.
  • Strong DSA + problem-solving skills.
  • Proficiency in either frontend or backend with familiarity in both.
  • Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
  • Strong communication & ownership mindset.

Good-to-Have:

  • Experience with containers, system design, observability tools.

Interview Process:

  1. Coding Round: DSA + problem solving
  2. System Design: LLD + HLD, scalability, microservices
  3. CTO Round: Technical deep dive + cultural fit
Read more
Upsurge Labs

at Upsurge Labs

5 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
2yrs+
Up to ₹80L / yr (varies)
skill iconPython
skill iconNodeJS (Node.js)
skill iconGo Programming (Golang)
FastAPI
skill iconDjango
+3 more

Job Description: Senior Backend / DevOps Engineer

Location: Bangalore (Onsite)


Skills Required:

  • Deep expertise in backend architecture using Python (FastAPI, Django), Node.js (NestJS, Express), or GoLang.
  • Strong experience with cloud infrastructure - AWS, GCP, Azure, and containerization (Docker, Kubernetes).
  • Proficiency in infrastructure-as-code (Terraform, Pulumi, Ansible).
  • Mastery in CI/CD pipelines, GitOps workflows, and deployment automation (GitHub Actions, Jenkins, ArgoCD, Flux).
  • Experience building high-performance distributed systems, APIs, and microservices architectures.
  • Understanding of event-driven systems, message queues, and streaming platforms (Kafka, RabbitMQ, Redis Streams); a minimal Redis Streams sketch follows this list.
  • Familiarity with database design and scaling - PostgreSQL, MongoDB, DynamoDB, TimescaleDB.
  • Deep understanding of system observability, tracing, and performance tuning (Prometheus, Grafana, OpenTelemetry).
  • Familiarity with AI integration stacks - deploying and scaling LLMs, vector databases (Pinecone, Weaviate, Milvus), and inference APIs (vLLM, Ollama, TensorRT).
  • Awareness of DevSecOps practices, zero-trust architecture, and cloud cost optimization.
  • Bonus: Hands-on with Rust, WebAssembly, or edge computing platforms (Fly.io, Cloudflare Workers, AWS Greengrass).
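A minimal sketch of the Redis Streams pattern referenced above, assuming a locally running Redis server and the redis-py client; the stream and field names are invented for the example:

```python
import redis

# Assumes a Redis server on localhost; decode_responses returns plain strings
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Producer: append an event to the "orders" stream
r.xadd("orders", {"order_id": "123", "status": "created"})

# Consumer: read up to 10 entries from the start of the stream, blocking up to 1s
for stream_name, messages in r.xread({"orders": "0-0"}, count=10, block=1000):
    for message_id, fields in messages:
        print(stream_name, message_id, fields)
```

A production consumer would typically use consumer groups (XGROUP / XREADGROUP) for load-balanced, at-least-once processing.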


Who We Are Looking For:

Upsurge Labs builds across robotics, biotech, AI, and creative tech, each product running on the backbone of precision-engineered software.

We are looking for a Senior Backend / DevOps Engineer who can architect scalable, resilient systems that power machines, minds, and media.

You should be someone who:

  • Is disciplined and detail-oriented, thriving in complex systems without compromising reliability.
  • Is organized enough to manage chaos and gritty enough to debug at 3 a.m. if that’s what the mission demands.
  • Is obsessed with clean code, system resilience, and real-world impact.
  • Finds satisfaction in building infrastructure where reliability, scalability, and performance are central.
  • Is comfortable working at the intersection of AI, automation, and distributed systems.
  • Understands that this work is challenging and fast-paced, but rewarding for those who push boundaries.


At Upsurge Labs, only the best minds build the foundations for the future. If you’ve ever dreamed of engineering systems that enable breakthroughs in AI and robotics, this is your arena.

Read more
A leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
5 - 8 yrs
₹12L - ₹22L / yr
skill iconPython
FastAPI
Artificial Intelligence (AI)
ETL
skill iconReact.js
+1 more

Skills - Python (Pandas, NumPy) and backend frameworks (FastAPI, Flask), ETL processes, LLM, React.js or Angular, JavaScript or TypeScript.



• Strong proficiency in Python, with experience in data manipulation libraries (e.g., Pandas, NumPy) and backend frameworks (e.g., FastAPI, Flask).

• Hands-on experience with data engineering and analytics, including data pipelines, ETL processes, and working with structured/unstructured data (a minimal ETL sketch follows this list).

• Understanding of React.js/Angular and JavaScript/TypeScript for building responsive user interfaces.

• Familiarity with AI/ML concepts and eagerness to grow into a deeper AI-focused role.

• Ability to work in cross-functional teams and adapt quickly to evolving technologies.
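A minimal sketch of the kind of Pandas-based ETL step implied above; the input file and column names are hypothetical:

```python
import pandas as pd

# Extract: read raw data (file and columns are invented for the example)
raw = pd.read_csv("orders.csv")

# Transform: drop incomplete rows and aggregate revenue per region
clean = raw.dropna(subset=["amount", "region"])
summary = (
    clean.groupby("region", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_revenue"})
)

# Load: write the result for downstream analytics
summary.to_csv("revenue_by_region.csv", index=False)
```

A real pipeline would add validation, incremental loads, and a proper warehouse target rather than a flat file.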

Read more
Planview

at Planview

3 candid answers
3 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
6yrs+
Up to ₹45L / yr (varies)
skill iconPython
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
Generative AI
MCP
+3 more

Planview is seeking a passionate Sr Software Engineer I to lead the development of internal AI tools and connectors, enabling seamless integration with internal and third-party data sources. This role will drive internal AI enablement and productivity across engineering and customer teams by consulting with business stakeholders, setting technical direction, and delivering scalable solutions.


Responsibilities:

  • Work with business stakeholders to enable successful AI adoption.
  • Develop connectors leveraging MCP or third-party APIs to enable new integrations.
  • Prioritize and execute integrations with internal and external data platforms.
  • Collaborate with other engineers to expand AI capabilities.
  • Establish and monitor uptime metrics, set up alerts, and follow a proactive maintenance schedule.
  • Gain exposure to operations, including Docker-based and serverless deployments and troubleshooting.
  • Work with DevOps engineers to manage and deploy new tools as required.

Required Qualifications:

  • Bachelor’s degree in Computer Science, Data Science, or a related field.
  • 4+ years of experience in infrastructure engineering, data integration, or AI operations.
  • Strong Python coding skills.
  • Experience configuring and scaling infrastructure for large user bases.
  • Proficiency with monitoring tools, alerting systems, and maintenance best practices.
  • Hands-on experience with containerized and serverless deployments.
  • Ability to code connectors using MCP or third-party APIs.
  • Strong troubleshooting and support skills.

Preferred Qualifications:

  • Experience with building RAG knowledge bases, MCP Servers, and API integration patterns.
  • Experience leveraging AI (LLMs) to boost productivity and streamline workflows.
  • Exposure to working with business stakeholders to drive AI adoption and feature expansion.
  • Familiarity with MCP server support and resilient feature design.
  • Skilled at working as part of a global, diverse workforce.
  • AWS Certification is a plus.
Read more
Ekloud INC
Kratika Agarwal
Posted by Kratika Agarwal
Bengaluru (Bangalore)
4 - 6 yrs
₹5L - ₹14L / yr
skill iconMachine Learning (ML)
skill iconPython
gpu framework
TensorFlow
Keras
+2 more

We are looking for enthusiastic engineers passionate about building and maintaining solutioning platform components on cloud and Kubernetes infrastructure. The ideal candidate will go beyond traditional SRE responsibilities by collaborating with stakeholders, understanding the applications hosted on the platform, and designing automation solutions that enhance platform efficiency, reliability, and value.

Technology and Sub-technology

• ML Engineering / Modelling

• Python Programming

• GPU frameworks: TensorFlow, Keras, PyTorch, etc.

• Cloud-based ML development and deployment on AWS or Azure


Qualifications

• Bachelor’s Degree in Computer Science, Computer Engineering, or an equivalent technical degree

• Proficient programming knowledge in Python or Java and the ability to read and explain open-source codebases

• Good foundation in operating systems, networking, and security principles

• Exposure to DevOps tools, with experience integrating platform components into SageMaker/ECR and AWS Cloud environments

• 4-6 years of relevant experience working on AI/ML projects


Primary Skills:

• Excellent analytical and problem-solving skills

• Exposure to Machine Learning and GenAI technologies

• Understanding and hands-on experience with AI/ML modeling, libraries, frameworks, and tools (TensorFlow, Keras, PyTorch, etc.); a minimal Keras sketch follows this list

• Strong knowledge of Python and SQL/NoSQL

• Cloud-based ML development and deployment on AWS or Azure
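A minimal tf.keras sketch of the kind of model work listed above, trained on random data purely to show the API shape; shapes and hyperparameters are arbitrary:

```python
import numpy as np
import tensorflow as tf

# Random stand-in data: 100 samples, 10 features, binary labels
X = np.random.rand(100, 10).astype("float32")
y = np.random.randint(0, 2, size=(100,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A couple of epochs is enough to demonstrate the training loop
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.evaluate(X, y, verbose=0))
```

The same workflow scales to real features and, on AWS, would typically run inside SageMaker as noted in the qualifications.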

Read more
AI Industry

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
skill iconData Science
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Generative AI
skill iconDeep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines; a minimal retrieval sketch follows this list
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
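A toy sketch of the retrieval half of a RAG pipeline as mentioned above, using cosine similarity over pre-computed vectors; embed() is a placeholder for a real embedding model and the documents are invented:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real pipeline would call an embedding model here
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

documents = [
    "Refund policy: refunds are processed within 5 business days.",
    "Shipping: standard delivery takes 3 to 7 days.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query vector
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

context = "\n".join(retrieve("How long do refunds take?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long do refunds take?"
print(prompt)
```

The assembled prompt is then sent to the LLM; a production system would swap the toy embedding and brute-force search for a vector store such as Weaviate or PGVector.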


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Read more
IAI solution
Anajli Kanojiya
Posted by Anajli Kanojiya
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹20L / yr
skill iconNextJs (Next.js)
skill iconPython
skill iconReact.js
skill iconDocker
skill iconMongoDB

Job Title: Full-Stack Developer

Experience: 5 to 8+ Years

Start Date: Immediate (ASAP)


Key Responsibilities

Develop and maintain end-to-end web applications, including frontend interfaces and backend services.

Build responsive and scalable UIs using React.js and Next.js.

Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.

Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.

Manage DevOps tasks, including containerization with Docker, orchestration with Kubernetes, and infrastructure as code with Terraform.

Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.

Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.

Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.

Troubleshoot, debug, and improve application performance and security.

Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.


Must-Have Qualifications

Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, Flask, or Node.js.

Proficiency in frontend development using React.js and Next.js

Experience in building and consuming RESTful APIs

Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis

Practical experience with cloud platforms, preferably Azure, or AWS

Familiarity with containerization and orchestration tools like Docker and Kubernetes

Working knowledge of Infrastructure as Code (IaC) using Terraform

Experience with CI/CD pipelines using GitHub Actions or Azure DevOps

Ability to work in an agile development environment with cross-functional teams

Strong problem-solving, debugging, and communication skills

Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.


Technical Stack

Frontend: React.js, Next.js

Backend: Python, FastAPI, Django, Spring Boot, Node.js

DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform

CI/CD: GitHub Actions, Azure DevOps

Databases: PostgreSQL, MongoDB, Redis


Read more
IAI solution
Anajli Kanojiya
Posted by Anajli Kanojiya
Bengaluru (Bangalore)
2 - 3 yrs
₹7L - ₹8L / yr
skill iconPython
skill iconReact.js
skill iconNextJs (Next.js)

Job Title: Full-Stack Developer

Experience: 5 to 8+ Years

Start Date: Immediate (ASAP)


Key Responsibilities

Develop and maintain end-to-end web applications, including frontend interfaces and backend services.

Build responsive and scalable UIs using React.js and Next.js.

Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.

Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.

Manage DevOps tasks, including containerization with Docker, orchestration with Kubernetes, and infrastructure as code with Terraform.

Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.

Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.

Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.

Troubleshoot, debug, and improve application performance and security.

Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.


Must-Have Qualifications

Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, Flask, or Node.js.

Proficiency in frontend development using React.js and Next.js

Experience in building and consuming RESTful APIs

Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis

Practical experience with cloud platforms, preferably Azure, or AWS

Familiarity with containerization and orchestration tools like Docker and Kubernetes

Working knowledge of Infrastructure as Code (IaC) using Terraform

Experience with CI/CD pipelines using GitHub Actions or Azure DevOps

Ability to work in an agile development environment with cross-functional teams

Strong problem-solving, debugging, and communication skills

Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.


Technical Stack

Frontend: React.js, Next.js

Backend: Python, FastAPI, Django, Spring Boot, Node.js

DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform

CI/CD: GitHub Actions, Azure DevOps

Databases: PostgreSQL, MongoDB, Redis



Read more
Appknox

at Appknox

1 video
6 recruiters
Vasudha Srivastav
Posted by Vasudha Srivastav
Bengaluru (Bangalore)
3 - 5 yrs
Best in industry
skill iconPython
LangChain
LLMs
Retrieval Augmented Generation (RAG)
Prompt engineering

A BIT ABOUT US


Appknox is one of the top Mobile Application security companies recognized by Gartner and G2. A profitable B2B SaaS startup headquartered in Singapore & working from Bengaluru.

The primary goal of Appknox is to help businesses and mobile developers secure their mobile applications with a focus on delivery speed and high-quality security audits.


Appknox has helped secure mobile apps at Fortune 500 companies with major brands spread across regions like India, South-East Asia, Middle-East, US, and expanding rapidly. We have secured 300+ Enterprises globally.


We are a team of 60+ incredibly passionate people working to make an impact and helping some of the biggest companies globally. We work in a highly collaborative, very fast-paced environment. If you have what it takes to be part of the team, we'd love to speak further.


The Opportunity


Appknox AI is building next-generation AI-powered security analysis tools for mobile applications. We use multi-agent systems and large language models to automate complex security workflows that traditionally require manual expert analysis.


We're looking for an AI/ML Engineer who will focus on improving our AI system quality, optimizing prompts, and building evaluation frameworks. You'll work with our engineering team to make our AI systems more accurate, efficient, and reliable.


This is NOT a data scientist role. We need someone who builds production AI systems with LLMs and agent frameworks.


 

Key Focus:


Primary Focus: AI System Quality 


  • Prompt Engineering: Design and optimize prompts for complex reasoning tasks
  • Quality Improvement: Reduce false positives and improve accuracy of AI-generated outputs
  • Evaluation Frameworks: Build systems to measure and monitor AI quality metrics
  • Tool Development: Create utilities and tools that enhance AI capabilities


  Secondary Focus: Performance & Optimization 


  • Cost Optimization: Implement strategies to reduce LLM API costs (caching, batching, model selection); a minimal caching sketch follows this list
  • Metrics & Monitoring: Track system performance, latency, accuracy, and cost
  • Research & Experimentation: Evaluate new models and approaches
  • Documentation: Create best practices and guidelines for the team
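One possible illustration of the caching idea above: a minimal in-process cache keyed on a hash of the prompt, where llm_call() is a stand-in for the real model client:

```python
import hashlib

_cache: dict[str, str] = {}

def llm_call(prompt: str) -> str:
    # Stand-in for a real LLM client call (e.g., a Gemini request); returns a dummy string
    return f"model output for: {prompt[:30]}..."

def cached_completion(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = llm_call(prompt)  # only pay for the API call on a cache miss
    return _cache[key]

print(cached_completion("Summarise the findings for app build 42."))
print(cached_completion("Summarise the findings for app build 42."))  # served from cache
```

In practice the cache would live in Redis or a database and prompts would be normalised before hashing; batching and model selection reduce cost along different axes.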


 

Requirements:


  • 2-4 years of professional software engineering experience with Python as primary language
  • 1+ years working with LangChain, LangGraph, or similar agent frameworks (AutoGPT, CrewAI, etc.)
  • Production LLM experience: You've shipped products using OpenAI, Anthropic, Google Gemini, or similar APIs
  • Prompt engineering skills: You understand how to structure prompts for complex multi-step reasoning
  • Strong Python: Async/await, type hints, Pydantic, modern Python practices
  • Problem-solving mindset: You debug systematically and iterate based on data


Good to Have skill-set:


  •   Experience with vector search (LanceDB, Pinecone, Weaviate, Qdrant)
  •   Knowledge of retrieval-augmented generation (RAG) patterns
  •   Background in security or mobile application development
  •   Understanding of static/dynamic analysis tools


 What We're NOT Looking For:


  •   Only academic/tutorial LLM experience (we need production systems)
  •   Pure ML research focus (we're not training foundation models)
  •   Data analyst/BI background without engineering depth
  •   No experience with LLM APIs or agent frameworks


  

  Our Tech Stack:


  AI/ML Infrastructure:

  •   Agent Frameworks: LangChain, LangGraph
  •   LLMs: Google Gemini (primary), with multi-model support
  •   Observability: Langfuse, DeepEval
  •   Vector Search: LanceDB, Tantivy
  •   Embeddings: Hybrid approach (local + cloud APIs)


  Platform & Infrastructure:

  •   Orchestration: Prefect 3.x, Docker, Kubernetes
  •   Storage: S3-compatible object storage, PostgreSQL
  •   Languages: Python 3.11+
  •   Testing: pytest with parallel execution support


Work Expectations & Success Metrics:


Within 1 Month (Onboarding)

- Understand AI system architecture and workflows

- Review existing prompts and evaluation methods 

- Run analyses and identify improvement areas 

- Collaborate on initial optimizations


Within 3 Months (Initial Contributions)

- Own prompt engineering for specific components 

- Build evaluation datasets and quality metrics 

- Implement tools that extend AI capabilities 

- Contribute to performance optimization experiments


Within 6 Months (Independent Ownership)

- Lead quality metrics implementation and monitoring 

- Drive prompt optimization initiatives 

- Improve evaluation frameworks

- Research and prototype new capabilities


Within 1 Year (Expanded Scope)

- Mentor team members on best practices 

- Lead optimization projects (caching, batching, cost reduction)

- Influence architectural decisions 

- Build reusable libraries and internal frameworks


 Interview Process:

  • Round 0 Interview - Profile Evaluation (15 min)
  • Round 1 Interview - Take-Home Assignment
  • Round 2 Interview - Technical Deep-Dive (90 min)
  • Round 3 Interview - Team Fit (45 min)
  • Round 4 Interview - HR Round (30 min)


Why Join Appknox AI?


Impact & Growth

Work on cutting-edge AI agent systems that power real-world enterprise security. You’ll collaborate with experienced engineers across AI, security, and infrastructure while gaining deep expertise in LangGraph, agent systems, and prompt engineering. As we scale, you’ll have clear opportunities to grow into senior and staff-level roles.


Team & Culture

Join a small, focused product-engineering team that values code quality, collaboration, and knowledge sharing. We’re in a hybrid setup - based out of Bangalore, flexible, and committed to a sustainable pace - no crunch, no chaos.


Technology

Built with a modern Python stack (3.11+, async, type hints, Pydantic) and the latest AI/ML tools including LangChain, LangGraph, DeepEval, and Langfuse. Ship production-grade features that make a real impact for customers.


Compensation & Benefits:

Competitive Package

We offer strong compensation designed to reward impact.


Flexibility & Lifestyle

Hybrid work setup with generous time off. You’ll get top-tier hardware and the tools you need to do your best work.


Learning & Development

Access a substantial learning budget, attend major AI/ML conferences, explore new approaches during dedicated research time, and share your knowledge with the team.


Health & Wellness

Comprehensive health coverage, fitness subscription, and family-friendly policies.


Early-Stage Advantages

Help shape the culture, influence product direction, and work directly with founders. Move fast, ship quickly, and see your impact immediately.

Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
skill iconJenkins
gitlab
ArgoCD
skill iconAmazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations, such as SEBI requirements
  • Conduct vulnerability assessments and maintain logging and audit controls


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Read more
Metron Security Private Limited
Prathamesh Shinde
Posted by Prathamesh Shinde
Pune, Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹10L / yr
skill iconPython

Job Description:


We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!


Location - Pune, Baner.

Interview Rounds - In Office


Key Responsibilities:

Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang

Develop and maintain clean and scalable code following best practices

Apply Object-Oriented Programming (OOP) concepts in real-world development

Collaborate with front-end developers, QA, and other team members to deliver high-quality features

Debug, optimize, and improve existing systems and codebase

Participate in code reviews and team discussions

Work in an Agile/Scrum development environment


Required Skills:

Strong experience in Python or Golang (working knowledge of both is a plus)


Good understanding of OOP principles

Familiarity with RESTful APIs and back-end frameworks

Experience with databases (SQL or NoSQL)

Excellent problem-solving and debugging skills

Strong communication and teamwork abilities


Good to Have:

Prior experience in the security industry

Familiarity with cloud platforms like AWS, Azure, or GCP

Knowledge of Docker, Kubernetes, or CI/CD tools

Read more