AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru)


Apply to 50+ AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (a Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity (see the sketch after this list).
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
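
The steps above can be scripted end to end. Below is a minimal, illustrative Python sketch of the clone, Git LFS conversion, and push flow using the stock git-p4 tool; the depot path, file globs, and GitHub remote are hypothetical placeholders, and a P4-Fusion-based flow would differ mainly in the clone step.

```python
"""
Illustrative sketch only: scripts a git-p4 clone -> Git LFS -> GitHub push flow.
The depot path, file globs, and remote URL are hypothetical placeholders,
not values taken from this posting.
"""
import subprocess

DEPOT_PATH = "//depot/project/main@all"        # hypothetical Perforce depot path (@all = full history)
TARGET_DIR = "project-git"                     # local Git repository created by git-p4
LARGE_FILE_GLOBS = ["*.zip", "*.bin"]          # patterns likely to exceed GitHub's 100 MB limit
GITHUB_REMOTE = "git@github.com:example-org/project-git.git"  # placeholder remote

def run(cmd, cwd=None):
    """Run a command and fail fast so partial migrations are visible."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# 1. Clone the Perforce depot (with history) into a new Git repository via git-p4.
run(["git", "p4", "clone", DEPOT_PATH, "--destination", TARGET_DIR])

# 2. Track large binaries with Git LFS for future commits, then rewrite the
#    imported history so no blob exceeds GitHub's per-file size limit.
for pattern in LARGE_FILE_GLOBS:
    run(["git", "lfs", "track", pattern], cwd=TARGET_DIR)
run(["git", "lfs", "migrate", "import",
     "--include=" + ",".join(LARGE_FILE_GLOBS), "--everything"], cwd=TARGET_DIR)

# 3. Point the repository at GitHub and push all refs.
run(["git", "remote", "add", "origin", GITHUB_REMOTE], cwd=TARGET_DIR)
run(["git", "push", "--mirror", "origin"], cwd=TARGET_DIR)
```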

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce, Perforce (Helix Core), DevOps Tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Auxo AI
Posted by Kusuma Gullamajji
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
3 - 5 yrs
₹10L - ₹25L / yr
Python
PySpark
Amazon Web Services (AWS)
AWS Glue

AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-5 years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.

Experience: 3-5 years

Notice: Immediate to 15 days

Responsibilities:

Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift (a minimal Glue job sketch follows these responsibilities).

Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.

Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.

Implement data governance and security best practices to ensure compliance and data integrity.

Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.

Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
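
As an illustration of the pipeline work described above, here is a minimal sketch of an AWS Glue PySpark job that reads raw JSON from S3, filters it, and writes partitioned Parquet back to S3. The bucket names, the status field, and the partition key are hypothetical placeholders, not details of AuxoAI's stack.

```python
# Illustrative sketch of a Glue-style PySpark ETL job (S3 -> transform -> S3).
# Bucket names, the "status" field, and the partition key are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw JSON events from S3 (placeholder bucket/prefix).
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/events/"]},
    format="json",
)

# Transform: keep only completed events (assumes each record carries a "status" field).
clean = raw.filter(lambda row: row["status"] == "completed")

# Load: write partitioned Parquet back to S3 for downstream Redshift Spectrum / Athena queries.
glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/events/",
        "partitionKeys": ["event_date"],
    },
    format="parquet",
)
job.commit()
```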

Qualifications:

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

3 - 5 years of prior experience in data engineering, with a focus on designing and building data pipelines.

Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.

Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.

CAW.Tech
Posted by Ranjana Singh
Bengaluru (Bangalore)
2 - 3 yrs
Best in industry
Apache Airflow
Azkaban
Amazon Web Services (AWS)
Python
Pipeline management
+7 more

Responsibilities:

  • Design, develop, and maintain efficient and reliable data pipelines (a minimal orchestration sketch follows this list).
  • Identify and implement process improvements, automating manual tasks and optimizing data delivery.
  • Build and maintain the infrastructure for data extraction, transformation, and loading (ETL) from diverse sources using SQL and AWS cloud technologies.
  • Develop data tools and solutions to empower our analytics and data science teams, contributing to product innovation.
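
A minimal Airflow DAG sketch of the extract-transform-load structure such pipelines typically take; the DAG id, schedule, and task bodies are placeholders rather than anything specific to this role.

```python
# Minimal, illustrative Airflow DAG: a daily extract -> transform -> load pipeline.
# The DAG id, schedule, and task bodies are placeholders (Airflow 2.4+ naming).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw records from a source system (stubbed)."""
    print("extracting...")


def transform():
    """Clean and reshape the extracted records (stubbed)."""
    print("transforming...")


def load():
    """Write the transformed records to the warehouse (stubbed)."""
    print("loading...")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```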


Qualifications:

Must Have:

  • 2-3 years of experience in a Data Engineering role.
  • Familiarity with data pipeline and workflow management tools (e.g., Airflow, Luigi, Azkaban).
  • Experience with AWS cloud services.
  • Working knowledge of object-oriented/functional scripting in Python
  • Experience building and optimizing data pipelines and datasets.
  • Strong analytical skills and experience working with structured and unstructured data.
  • Understanding of data transformation, data structures, dimensional modeling, metadata management, schema evolution, and workload management.
  • A passion for building high-quality, scalable data solutions.


Good to have:

  • Experience with stream-processing systems (e.g., Spark Streaming, Flink).
  • Working knowledge of message queuing, stream processing, and scalable data stores.
  • Proficiency in SQL and experience with NoSQL databases like Elasticsearch and Cassandra/MongoDB.


  • Experience with big data tools such as HDFS/S3, Spark/Flink, Hive, HBase, Kafka/Kinesis.

Deqode
Posted by Purvisha Bhavsar
Pune, Delhi, Kolkata, Bengaluru (Bangalore), Kochi (Cochin), Hosur, Trivandrum
7 - 9 yrs
₹5.5L - ₹20L / yr
.NET
Amazon Web Services (AWS)
C#
React.js
SQL

Job Description -

Profile: .Net Full Stack Lead

Experience Required: 7–12 Years

Location: Pune, Bangalore, Chennai, Coimbatore, Delhi, Hosur, Hyderabad, Kochi, Kolkata, Trivandrum

Work Mode: Hybrid

Shift: Normal Shift

Key Responsibilities:

  • Design, develop, and deploy scalable microservices using .NET Core and C#
  • Build and maintain serverless applications using AWS services (Lambda, SQS, SNS)
  • Develop RESTful APIs and integrate them with front-end applications
  • Work with both SQL and NoSQL databases to optimize data storage and retrieval
  • Implement Entity Framework for efficient database operations and ORM
  • Lead technical discussions and provide architectural guidance to the team
  • Write clean, maintainable, and testable code following best practices
  • Collaborate with cross-functional teams to deliver high-quality solutions
  • Participate in code reviews and mentor junior developers
  • Troubleshoot and resolve production issues in a timely manner

Required Skills & Qualifications:

  • 7–12 years of hands-on experience in .NET development
  • Strong proficiency in .NET Framework, .NET Core, and C#
  • Proven expertise with AWS services (Lambda, SQS, SNS)
  • Solid understanding of SQL and NoSQL databases (SQL Server, MongoDB, DynamoDB, etc.)
  • Experience building and deploying Microservices architecture
  • Proficiency in Entity Framework or EF Core
  • Strong knowledge of RESTful API design and development
  • Experience with React or Angular is good to have
  • Understanding of CI/CD pipelines and DevOps practices
  • Strong debugging, performance optimization, and problem-solving skills
  • Experience with design patterns, SOLID principles, and best coding practices
  • Excellent communication and team leadership skills


Truetech Solutions

Agency job
via TrueTech Solutions by Meimozhi balu
Bengaluru (Bangalore), Kochi (Cochin)
4 - 15 yrs
₹10L - ₹25L / yr
.NET
ASP.NET
Amazon Web Services (AWS)
Amazon EC2
AWS Lambda
+2 more

• Minimum 4+ years of experience

• Experience in designing, developing, and maintaining backend services using C# 12 and .NET 8 or .NET 9

• Experience in building and operating cloud native and serverless applications on AWS

• Experience in developing and integrating services using AWS Lambda, API Gateway, DynamoDB, EventBridge, CloudWatch, SQS, SNS, Kinesis, Secrets Manager, S3 storage, serverless architectural models, etc.

• Experience in integrating services using the AWS SDK

• Should be cognizant of the OMS paradigms including Inventory Management, Inventory publish, supply feed processing, control mechanisms, ATP publish, Order Orchestration, workflow set up and customizations, integrations with tax, AVS, payment engines, sourcing algorithms and managing reservations with back orders, schedule mechanisms, flash sales management etc.

• Should have a decent end-to-end knowledge of various Commerce subsystems, which include Storefront, Core Commerce back end, Post Purchase processing, OMS, Store / Warehouse Management processes, Supply Chain and Logistics processes. This is to ascertain the candidate's know-how of the overall Retail landscape of any customer.

• Strong knowledge of querying in Oracle DB and SQL Server

• Able to read, write and manage PL/SQL procedures in Oracle

• Strong debugging, performance tuning and problem-solving skills

• Experience with event-driven and microservices architectures

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹38L - ₹50L / yr
Java
Spring Boot
CI/CD
Spring
Microservices
+16 more

Job Details

- Job Title: SDE-3

Industry: Technology

Domain - Information technology (IT)

Experience Required: 5-8 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Role & Responsibilities

As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.

 

Key Responsibilities:

Technical Leadership-

  • Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
  • Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
  • Review code and ensure adherence to best practices, coding standards, and security guidelines.

System Architecture and Design-

  • Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
  • Own the architecture of core modules and contribute to overall platform scalability and reliability.
  • Advocate for and implement microservices architecture, ensuring modularity and reusability.

Problem Solving and Optimization-

  • Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
  • Optimize database queries and design scalable data storage solutions.
  • Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.

Innovation and Continuous Improvement-

  • Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
  • Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
  • Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.

Collaboration and Communication-

  • Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
  • Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.

 

Ideal Candidate

  • Strong Java Backend Engineer.
  • Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
  • Must have been SDE-2 for at least 2.5 years
  • Hands-on experience with RESTful APIs and microservices architecture
  • Strong understanding of distributed systems, multithreading, and async programming
  • Experience with relational and NoSQL databases
  • Exposure to Kafka/RabbitMQ and Redis/Memcached
  • Experience with AWS / GCP / Azure, Docker, and Kubernetes
  • Familiar with CI/CD pipelines and modern DevOps practices
  • Product companies (B2B SaaS preferred)
  • Must have stayed for at least 2 years with each of the previous companies
  • Education: B.Tech in Computer Science from Tier 1 / Tier 2 colleges


Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
9 - 12 yrs
₹50L - ₹70L / yr
Java
Microservices
CI/CD
MySQL
MySQL DBA
+9 more

Job Details

- Job Title: Staff Engineer

Industry: Technology

Domain - Information technology (IT)

Experience Required: 9-12 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Role & Responsibilities

As a Staff Engineer at company, you will play a critical role in defining and driving our backend architecture as we scale globally. You’ll own key systems that handle high volumes of data and transactions, ensuring performance, reliability, and maintainability across distributed environments.

 

Key Responsibilities-

  • Own one or more core applications end-to-end, ensuring reliability, performance, and scalability.
  • Lead the design, architecture, and development of complex, distributed systems, frameworks, and libraries aligned with company’s technical strategy.
  • Drive engineering operational excellence by defining robust roadmaps for system reliability, observability, and performance improvements.
  • Analyze and optimize existing systems for latency, throughput, and efficiency, ensuring they perform at scale.
  • Collaborate cross-functionally with Product, Data, and Infrastructure teams to translate business requirements into technical deliverables.
  • Mentor and guide engineers, fostering a culture of technical excellence, ownership, and continuous learning.
  • Establish and uphold coding standards, conduct design and code reviews, and promote best practices across teams.
  • Stay ahead of the curve on emerging technologies, frameworks, and patterns to strengthen company’s technology foundation.
  • Contribute to hiring by identifying and attracting top-tier engineering talent.

 

Ideal Candidate

  • Strong staff engineer profile
  • Must have 9+ years in backend engineering with Java, Spring/Spring Boot, and microservices, building large and scalable systems
  • Must have been SDE-3 / Tech Lead / Lead SE for at least 2.5 years
  • Strong in DSA, system design, design patterns, and problem-solving
  • Proven experience building scalable, reliable, high-performance distributed systems
  • Hands-on with SQL/NoSQL databases, REST/gRPC APIs, concurrency & async processing
  • Experience in AWS/GCP, CI/CD pipelines, and observability/monitoring
  • Excellent ability to explain complex technical concepts to varied stakeholders
  • Product companies (B2B SAAS preferred)
  • Must have stayed for at least 2 years with each of the previous companies
  • (Education): B.Tech in computer science from Tier 1, Tier 2 colleges


BlogVault
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 6 yrs
Upto ₹35L / yr (varies)
Ruby
NodeJS (Node.js)
Go Programming (Golang)
React.js
Angular (2+)
+3 more

We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.


What You’ll Do

  • Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
  • Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
  • Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
  • Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
  • Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
  • Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.


What Makes You a Strong Fit

  • You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
  • You think in systems and second-order effects—not just in ticket-by-ticket outputs.
  • You prefer well-reasoned defaults over overengineering.
  • You take ownership—not just of code, but of the outcomes it enables.
  • You work cleanly, write clear code, and make life easier for those who come after you.
  • You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.


Bonus if You Have Experience With

  • Building tools or workflows that accelerate other developers.
  • Working with AI coding tools and integrating them meaningfully into your workflow.
  • Building for SaaS products, especially those with large user bases or self-serve motions.
  • Working in small, fast-moving product teams with a high bar for ownership.


Why Join Us

  • A small team that values craftsmanship, curiosity, and momentum.
  • A product-driven culture where engineering decisions are informed by customer outcomes.
  • A chance to work on multiple zero-to-one opportunities with strong PMF.
  • No vanity perks—just meaningful work with people who care.
MaterialPlus
Posted by Pratibha Adhikari
Bengaluru (Bangalore), Gurugram
5 - 9 yrs
₹20L - ₹30L / yr
Go Programming (Golang)
React.js
Amazon Web Services (AWS)
Azure
CI/CD

Role: Sr Software Developer (Fullstack)

Location: Gurgaon / Bangalore

Mode: Hybrid


Job Description: Sr Software Developer (Golang expertise) – 5+ years 


Role Summary:-

We are seeking an experienced Senior Engineer with strong expertise in Golang and cloud-based web applications. The ideal candidate will work across multiple backend services, contribute to architectural decisions, ensure quality through best practices, and collaborate effectively with cross-functional teams. 


Key Responsibilities:- 

• Design, develop, and maintain backend services using Golang. 

• Work across multiple Golang applications and microservices. 

• Have a good understanding of internal enterprise services such as SSO, authorization, and user management.

• Collaborate with cloud engineering teams to build and manage applications on AWS or Azure. 

• Follow and enforce backend development best practices including CI/CD, coding guidelines, and code reviews. 

• Use tools like SonarQube for static and dynamic code analysis. 

• Write high‑quality unit tests and maintain test coverage. 

• Document system designs, APIs, and technical workflows. 

• Mentor junior team members and contribute to overall team maturity.

 

Required Skills:-

• Strong, hands‑on experience with Golang across multiple real‑world projects.

• Good experience with Dell Cloud

• Good understanding of cloud services (AWS or Azure) for web application development. 

• Knowledge of SSO, authorization services, and internal service integrations. 

• Excellent communication and collaboration skills. 

• Experience with CI/CD pipelines, coding standards, and automated testing. 

• Familiarity with code quality tools such as SonarQube. 

• Strong documentation skills. 

 

Good-to-Have Skills:-

• Knowledge of Python or JavaScript. 

• Understanding of frontend technologies (React.js).

• Experience mentoring or guiding team members.


Note: We are looking for immediate joiners.

ProductNova
Posted by Vidhya Vijay
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹18L / yr
Large Language Models (LLM) tuning
Prompt engineering
Chatbot
Artificial Intelligence (AI)
Python
+6 more

ROLE: AI/ML Senior Developer

Exp: 5 to 8 Years

Location: Bangalore (Onsite)

About ProductNova

ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.

Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.

 

Product Development

We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.

Our end-to-end product development approach covers the full lifecycle:

 

1. Product discovery and problem definition

2. User research and product strategy

3. Experience design and rapid prototyping

4. AI-enabled engineering, testing, and platform architecture

5. Product launch, adoption, and continuous improvement

 

From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.

 

Growth & Scale

For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.

For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.

 

 

 

 

Role Overview:

We are seeking an experienced AI / ML Senior Developer with strong hands-on expertise in large language models (LLMs) and AI-driven application development. The ideal candidate will have practical experience working with GPT and Anthropic models, building and training B2B products powered by AI, and leveraging AI-assisted development tools to deliver scalable and intelligent solutions.

 

Key Responsibilities:

1. Model Analysis & Optimization

Analyze, customize, and optimize GPT and Anthropic-based models to ensure reliability, scalability, and performance for real-world business use cases.

2. AI Product Design & Development

Design and build AI-powered products, including model training, fine-tuning, evaluation, and performance optimization across development lifecycles.

3. Prompt Engineering & Response Quality

Develop and refine prompt engineering strategies to improve model accuracy, consistency, relevance, and contextual understanding (a minimal call sketch follows these responsibilities).

4. AI Service Integration

Build, integrate, and deploy AI services into applications using modern development practices, APIs, and scalable architectures.

5. AI-Assisted Development Productivity

Leverage AI-enabled coding tools such as Cursor and GitHub Copilot to accelerate development, improve code quality, and enhance efficiency.

6. Cross-Functional Collaboration

Work closely with product, business, and engineering teams to translate business requirements into effective AI-driven solutions.

7. Model Monitoring & Continuous Improvement

Monitor model performance, analyze outputs, and iteratively improve accuracy, safety, and overall system effectiveness.
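
As a concrete illustration of responsibilities 3 and 4, the sketch below wraps an OpenAI-style chat completion call behind a versioned system prompt. The model name, prompt wording, and parameters are placeholder assumptions; an Anthropic client could be slotted in the same way.

```python
# Illustrative sketch: a thin wrapper around an OpenAI-style chat completion call,
# with the system prompt kept as a versioned constant so it can be iterated on.
# Model name, prompt wording, and parameters are placeholders.
from openai import OpenAI

SYSTEM_PROMPT_V2 = (
    "You are a support assistant for a B2B SaaS product. "
    "Answer only from the provided context and say 'I don't know' otherwise."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer(question: str, context: str, model: str = "gpt-4o-mini") -> str:
    """Return a grounded answer for `question`, given retrieved `context`."""
    response = client.chat.completions.create(
        model=model,
        temperature=0.2,  # low temperature for consistent, repeatable answers
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT_V2},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer("How do I reset my password?", context="(retrieved docs go here)"))
```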

 

Qualifications:

1. Hands-on experience analyzing, developing, fine-tuning, and optimizing GPT and Anthropic-based large language models.

2. Strong expertise in prompt design, experimentation, and optimization to enhance response accuracy and reliability.

3. Proven experience building, training, and deploying chatbots or conversational AI systems.

4. Practical experience using AI-assisted coding tools such as Cursor or GitHub Copilot in production environments.

5. Solid programming experience in Python, with strong problem-solving and development fundamentals.

6. Experience working with embeddings, similarity search, and vector databases for retrieval-augmented generation (RAG) (see the retrieval sketch after this list).

7. Knowledge of MLOps practices, including model deployment, versioning, monitoring, and lifecycle management.

8. Experience with cloud environments such as Azure, AWS for deploying and managing AI solutions.

9. Experience with APIs, microservices architecture, and system integration for scalable AI applications.
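
To make qualification 6 concrete, here is a minimal retrieval sketch for a RAG pipeline: documents are embedded once, the query is embedded at request time, and the top-k most similar chunks are returned for prompting. The corpus and embedding model are placeholders; in production a vector database would replace the in-memory index.

```python
# Minimal RAG retrieval sketch: embed documents, embed the query, return top-k
# chunks by cosine similarity. Corpus and model name are placeholders; a vector
# database would replace the in-memory NumPy search in production.
import numpy as np
from sentence_transformers import SentenceTransformer

DOCS = [
    "Invoices can be exported as CSV from the billing page.",
    "Password resets are sent to the account owner's email.",
    "The API rate limit is 100 requests per minute per key.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vectors = model.encode(DOCS, normalize_embeddings=True)  # unit vectors: dot product = cosine


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k document chunks most similar to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector
    top = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in top]


if __name__ == "__main__":
    # The retrieved chunks would be placed into the LLM prompt as context.
    print(retrieve("How do I reset my password?"))
```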

 

Why Join Us

• Build cutting-edge AI-powered B2B SaaS products

• Own architecture and technology decisions end-to-end

• Work with highly skilled ML and Full Stack teams

• Be part of a fast-growing, innovation-driven product organization

 

If you are a results-driven AI/ML Senior Developer with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.

 

 

 

ProductNova
Posted by Vidhya Vijay
Bengaluru (Bangalore)
10 - 12 yrs
₹28L - ₹32L / yr
Amazon Web Services (AWS)
Windows Azure
Python
NodeJS (Node.js)
.NET
+9 more

ROLE - TECH LEAD/ARCHITECT with AI Expertise

 

Experience: 10–15 Years

Location: Bangalore (Onsite)

Company Type: Product-based | AI B2B SaaS

 


Role Overview

We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.

 

Key Responsibilities

• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC2, GDPR, ISO, etc.) across the product
• Take ownership of application security, access controls, and compliance requirements
• Actively contribute hands-on through coding, code reviews, complex feature development and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and system design
• Work closely with Product Management and Leadership to translate business requirements into technical solutions

 

Qualifications:

• 10–15 years of overall experience in software engineering and product development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and modern frontend frameworks (React, JS, etc.)
• Experience working with AI/ML teams in deploying and tuning ML models in production environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups

 

Why Join Us

• Build cutting-edge AI-powered B2B SaaS products

• Own architecture and technology decisions end-to-end

• Work with highly skilled ML and Full Stack teams

• Be part of a fast-growing, innovation-driven product organization

 

If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.

 

Auxo AI
Posted by Kritika Dhingra
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
2 - 8 yrs
₹10L - ₹30L / yr
Amazon Web Services (AWS)
Data Transformation Tool (DBT)
SQL
Python
Spark
+1 more

AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-7 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.


Location : Bangalore, Hyderabad, Mumbai, and Gurgaon


Responsibilities:

· Designing, building, and operating scalable on-premises or cloud data architecture

· Analyzing business requirements and translating them into technical specifications

· Design, develop, and implement data engineering solutions using DBT on cloud platforms (Snowflake, Databricks)

· Design, develop, and maintain scalable data pipelines and ETL processes

· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.

· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness

· Implement data governance and security best practices to ensure compliance and data integrity

· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring

· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.


Requirements


· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

· Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines

· Experience of working with DBT to implement end-to-end data engineering processes on Snowflake and Databricks

· Comprehensive understanding of the Snowflake and Databricks ecosystem

· Strong programming skills in languages like SQL and Python or PySpark.

· Experience with data modeling, ETL processes, and data warehousing concepts.

· Familiarity with implementing CI/CD processes or other orchestration tools is a plus.


A real-time Customer Data Platform and cross-channel marketing automation company that delivers superior experiences resulting in increased revenue for some of the largest enterprises in the world.

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
1 - 2 yrs
₹7L - ₹14L / yr
Java
Spring Boot
Apache Kafka
SQL
PostgreSQL
+3 more

Key Responsibilities:

  • Design and develop backend components and sub-systems for large-scale platforms under guidance from senior engineers.
  • Contribute to building and evolving the next-generation customer data platform.
  • Write clean, efficient, and well-tested code with a focus on scalability and performance.
  • Explore and experiment with modern technologies—especially open-source frameworks—and build small prototypes or proof-of-concepts.
  • Use AI-assisted development tools to accelerate coding, testing, debugging, and learning while adhering to engineering best practices.
  • Participate in code reviews, design discussions, and continuous improvement of the platform.

Qualifications:

  • 0–2 years of experience (or strong academic/project background) in backend development with Java.
  • Good fundamentals in algorithms, data structures, and basic performance optimizations.
  • Bachelor’s or Master’s degree in Computer Science or IT (B.E / B.Tech / M.Tech / M.S) from premier institutes.

Technical Skill Set:

  • Strong aptitude and analytical skills with emphasis on problem solving and clean coding.
  • Working knowledge of SQL and NoSQL databases.
  • Familiarity with unit testing frameworks and writing testable code is a plus.
  • Basic understanding of distributed systems, messaging, or streaming platforms is a bonus.

AI-Assisted Engineering (LLM-Era Skills):

  • Familiarity with modern AI coding tools such as Cursor, Claude Code, Codex, Windsurf, Opencode, or similar.
  • Ability to use AI tools for code generation, refactoring, test creation, and learning new systems responsibly.
  • Willingness to learn how to combine human judgment with AI assistance for high-quality engineering outcomes.

Soft Skills & Nice to Have

  • Appreciation for technology and its ability to create real business value, especially in data and marketing platforms.
  • Clear written and verbal communication skills.
  • Strong ownership mindset and ability to execute in fast-paced environments.
  • Prior internship or startup experience is a plus.
Appiness Interactive
Posted by Shashirekha S
Bengaluru (Bangalore)
5 - 13 yrs
₹10L - ₹23L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Large Language Models (LLM)
Large Language Models (LLM) tuning
Vector database
+4 more

Required Skills & Qualifications

● Strong hands-on experience with LLM frameworks and models, including LangChain, OpenAI (GPT-4), and LLaMA
● Proven experience in LLM orchestration, workflow management, and multi-agent system design using frameworks such as LangGraph
● Strong problem-solving skills with the ability to propose end-to-end solutions and contribute at an architectural/system design level
● Experience building scalable AI-backed backend services using FastAPI and asynchronous programming patterns (see the sketch after this list)
● Solid experience with cloud infrastructure on AWS, including EC2, S3, and Load Balancers
● Hands-on experience with Docker and containerization for deploying and managing AI/ML applications
● Good understanding of Transformer-based architectures and how modern LLMs work internally
● Strong skills in data processing and analysis using NumPy and Pandas
● Experience with data visualization tools such as Matplotlib and Seaborn for analysis and insights
● Hands-on experience with Retrieval-Augmented Generation (RAG), including document ingestion, embeddings, and vector search pipelines
● Experience in model optimization and training techniques, including fine-tuning, LoRA, and QLoRA
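
A minimal sketch of the "AI-backed backend services using FastAPI" bullet above: an asynchronous endpoint that delegates to a stubbed generation call. The route, request model, and generate() stub are hypothetical placeholders.

```python
# Minimal FastAPI sketch of an async AI-backed endpoint. The route, request
# model, and the stubbed generate() call are hypothetical placeholders.
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-ai-service")


class AskRequest(BaseModel):
    question: str


async def generate(question: str) -> str:
    """Stand-in for an LLM / RAG call; replace with the real client call."""
    await asyncio.sleep(0)  # keeps the handler non-blocking in this stub
    return f"(placeholder answer for: {question})"


@app.post("/ask")
async def ask(req: AskRequest) -> dict:
    answer = await generate(req.question)
    return {"answer": answer}

# Run locally with, e.g.:  uvicorn app:app --reload
```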


Nice to Have / Preferred

● Experience designing and operating production-grade AI systems
● Familiarity with cost optimization, observability, and performance tuning for LLM-based applications
● Exposure to multi-cloud or large-scale AI platforms

Solarsquare Energy Pvt Ltd
Mumbai, Bengaluru (Bangalore)
8 - 12 yrs
₹30L - ₹55L / yr
Team leadership
Deployment management
Fullstack Developer
Amazon Web Services (AWS)
MongoDB

Description :


About the Role :


We're seeking a dynamic and technically strong Engineering Manager to lead, grow, and inspire our high-performing engineering team. In this role, you'll drive technical strategy, deliver scalable systems, and ensure SolarSquare's platforms continue to delight users at scale. You'll combine hands-on technical expertise with a passion for mentoring engineers, shaping culture, and collaborating across functions to bring bold ideas to life in a fast-paced startup environment.


Responsibilities :


- Lead and manage a team of full stack developers (SDE1 to SDE3), fostering a culture of ownership, technical excellence, and continuous learning.


- Drive the technical vision and architectural roadmap for the MERN stack platform, ensuring scalability, security, and high performance.


- Collaborate closely with product, design, and business teams to align engineering priorities with business goals and deliver impactful products.


- Ensure engineering best practices across code reviews, testing strategies, and deployment pipelines (CI/CD).


- Implement robust observability and monitoring systems to proactively identify and resolve issues in production environments.


- Optimize system performance and cost-efficiency in cloud infrastructure (AWS, Azure, GCP).


- Manage technical debt effectively, balancing long-term engineering health with short-term product needs.


- Recruit, onboard, and develop top engineering talent, creating growth paths for team members.


- Drive delivery excellence by setting clear goals, metrics, and expectations, and ensuring timely execution of projects.


- Advocate for secure coding practices and compliance with data protection standards (e.g., OWASP, GDPR).


Requirements :


- 8 to 12 years of experience in full stack development, with at least 2+ years in a technical leadership or people management role.


- Proven expertise in the MERN stack (MongoDB, Express.js, React.js, Node.js) and strong understanding of distributed systems and microservices.


- Hands-on experience designing and scaling high-traffic web applications.


- Deep knowledge of cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration tools (Kubernetes).


- Strong understanding of observability practices and tools (Prometheus, Grafana, ELK, Datadog) for maintaining production-grade systems.


- Track record of building and leading high-performing engineering teams in agile environments.


- Excellent communication and stakeholder management skills, with the ability to align technical efforts with business objectives.


- Experience with cost optimization, security best practices, and performance tuning in cloud-native environments.


Bonus: Prior experience in established product companies, or experience with scaling teams in an early-stage startup and designing systems from scratch.


Work Arrangement :


- Flexible work setup, including hybrid options. Monday to Friday.



Mindreams Infotech Pvt Ltd
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹30L / yr
Python
Amazon Web Services (AWS)
React.js

What you’ll do


  • Build and scale backend services and APIs using Python
  • Work on cross-language integrations (Python ↔ PHP)
  • Develop frontend features using React (Angular is a plus)
  • Deploy, monitor, and manage applications on AWS
  • Own features end-to-end: development, performance, and reliability
  • Collaborate closely with product, QA, and engineering teams


Tech Stack


  • Backend: Python (working knowledge of PHP is a strong plus)
  • Frontend: React (Angular is a plus)
  • Cloud: AWS
  • Version Control: Git / GitHub


Experience


  • 5–10 years of professional software development experience
  • Strong hands-on experience with Python
  • Hands-on experience deploying and managing applications on AWS
  • Working knowledge of modern frontend frameworks


Kanerika Software
Posted by Ariba Khan
Bengaluru (Bangalore)
3 - 5 yrs
Upto ₹22L / yr (varies)
Java
Amazon Web Services (AWS)
Apache Kafka
Spring Boot
Microservices

About Kanerika:

Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.

We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.


Awards and Recognitions:

Kanerika has won several awards over the years, including:

1. Best Place to Work 2023 by Great Place to Work®

2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today

3. NASSCOM Emerge 50 Award in 2014

4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture

5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.


Working for us:

Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.


Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.


About the role:

As a Senior Java Developer, you will utilize your extensive Java programming skills and expertise to design and develop robust and scalable applications. You will collaborate with cross-functional teams, provide technical leadership, and contribute to the entire software development life cycle. With your deep understanding of Java technologies and frameworks, you will ensure the delivery of high-quality solutions that meet the project requirements and adhere to coding standards.


Role Responsibilities:

  • Discuss new features and collaborate with the development and UX team, commercial product manager and product owner to get functionalities specified and implemented.
  • Agree the technical implementation with involved component owners and estimate its work effort.
  • Write great code and do code reviews for other engineers
  • Implement, test and demonstrate new product features in an agile process.
  • Develop complete sets of functionalities including the backend and frontend.
  • Create new microservices, or migrate existing services, to run on a cloud infrastructure
  • Work on further usability, performance improvements or quality assurance, including bug fixes and test automation.
  • Watch out for potential security issues and fix them, or better avoid them altogether 


Role requirements:

  • BTech computer science or equivalent
  • Java development skills with at least 3 to 5 years of experience. Knowledge of the most popular Java libraries and frameworks: JPA, Spring, Kafka, etc.
  • Have a degree in computer science, or a similar background, or you just have enough professional experience to blow right through all your challenges
  • Are a great communicator, analytic, goal-oriented, quality-focused, yet still agile person who likes to work with software engineers; you will interact a lot with architects, developers from other teams, component owners and system engineers
  • Have a clear overview of all layers in computer software development, including REST APIs and how to make and integrate them in our products
  • Have Java server-side development and SQL (PostgreSQL) database knowledge; NoSQL (MongoDB or DynamoDB) knowledge is nice to have.
  • Are open to pick-up innovative technologies as needed by the team. Have or want to build experience with cloud and DevOps infrastructure (like Kubernetes, Docker, Terraform, Concourse, etc.)
  • Speak English fluently


Employee Benefits:

1. Culture:

  1. Open Door Policy: Encourages open communication and accessibility to management.
  2. Open Office Floor Plan: Fosters a collaborative and interactive work environment.
  3. Flexible Working Hours: Allows employees to have flexibility in their work schedules.
  4. Employee Referral Bonus: Rewards employees for referring qualified candidates.
  5. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.


2. Inclusivity and Diversity:

  1. Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
  2. Mandatory POSH training: Promotes a safe and respectful work environment.


3. Health Insurance and Wellness Benefits:

  1. GMC and Term Insurance: Offers medical coverage and financial protection.
  2. Health Insurance: Provides coverage for medical expenses.
  3. Disability Insurance: Offers financial support in case of disability.


4. Child Care & Parental Leave Benefits:

  1. Company-sponsored family events: Creates opportunities for employees and their       families to bond.
  2. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
  3. Family Medical Leave: Offers leave for employees to take care of family members' medical needs.


5. Perks and Time-Off Benefits:

  1. Company-sponsored outings: Organizes recreational activities for employees.
  2. Gratuity: Provides a monetary benefit as a token of appreciation.
  3. Provident Fund: Helps employees save for retirement.
  4. Generous PTO: Offers more than the industry standard for paid time off.
  5. Paid sick days: Allows employees to take paid time off when they are unwell.
  6. Paid holidays: Gives employees paid time off for designated holidays.
  7. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.


6. Professional Development Benefits:

  1. L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
  2. Mentorship Program: Offers guidance and support from experienced professionals.
  3. Job Training: Provides training to enhance job-related skills.
  4. Professional Certification Reimbursements: Assists employees in obtaining professional      certifications.
  5. Promote from Within: Encourages internal growth and advancement opportunities.
Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹36L - ₹48L / yr
Python
TypeScript
NodeJS (Node.js)
ReAct (Reason + Act)
React Native
+13 more

Review Criteria:

  • Strong Software Engineer fullstack profile using NodeJS / Python and React
  • 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
  • Must have strong experience in working on Typescript
  • Must have experience in message-based systems like Kafka, RabbitMQ, Redis
  • Databases - PostgreSQL & NoSQL databases like MongoDB
  • Product Companies Only
  • Tier 1 Engineering Institutes preferred (IIT, NIT, BITS, IIIT, DTU or equivalent)

 

Preferred:

  • Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
  • Experience in mentoring, coaching the team.


Role & Responsibilities:

We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.

 

The Ideal Candidate Will Be Able To-

  • Take ownership of delivering performant, scalable and high-quality cloud-based software, both frontend and backend side.
  • Mentor team members to develop in line with product requirements.
  • Collaborate with Senior Architect for design and technology choices for product development roadmap.
  • Do code reviews.


Ideal Candidate:

  • Thorough knowledge of developing cloud-based software, including backend APIs and React-based frontends.
  • Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMQ, Redis, MongoDB, ORMs, SQL, etc. (see the consumer sketch after this list).
  • Experience with AWS services such as S3, IAM, Lambda, etc.
  • Expert-level coding skills in Python (FastAPI/Django), NodeJS, TypeScript, and ReactJS.
  • An eye for responsive user designs on the frontend.
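
A minimal sketch of the message-based-systems point above, using a Kafka consumer in Python (confluent-kafka). The broker address, topic, group id, and processing step are placeholders; RabbitMQ or Redis Streams consumers would follow the same consume-process loop.

```python
# Illustrative Kafka consumer sketch (confluent-kafka). Broker, topic, and
# group id are placeholders; the processing step is a stub.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "payments-processor",        # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])             # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)             # wait up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        event = json.loads(msg.value())
        # Stub: hand the event to the domain layer (e.g. update a POS order).
        print("processing payment event:", event.get("id"))
finally:
    consumer.close()
```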


Wissen Technology
Posted by Janane Mohanasankaran
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Java
Spring Boot
Microservices
Angular (2+)
Amazon Web Services (AWS)
+1 more

JOB DESCRIPTION:


Location: Bangalore

Mode of Work: 3 days from office


Key skills: DSA (Collections, HashMaps, Trees, LinkedLists, Arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, key Java 8 features, database optimization, microservices, and REST APIs.

  • Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
  • Implement and integrate APIs using Spring Framework and Apache CXF.
  • Build microservices-based architecture for scalable and distributed systems.
  • Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
  • Optimize performance through efficient multithreading, memory management, and algorithm design.
  • Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
  • Work with RDBMS (preferably Sybase) for backend data integration.
  • Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
  • Work in Unix/Linux environments for deployment and troubleshooting.
  • Angular and AWS are must-have skills



Planview
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Upto ₹72L / yr (varies)
Python
SQL
Amazon Web Services (AWS)
Machine Learning (ML)
Large Language Models (LLM) tuning
+2 more

The Opportunity

Planview is looking for a passionate Sr Data Scientist to join our team tasked with developing innovative tools for connected work. You are an experienced expert in supporting enterprise applications using Data Analytics, Machine Learning, and Generative AI.

You will use this experience to lead other data scientists and data engineers. You will also effectively engage with product teams to specify, validate, prototype, scale, and deploy features with a consistent customer experience across the Planview product suite.

     

Responsibilities (What you'll do)

  • Enable Data Science features within Planview applications by working in a fast-paced start-up mindset.
  • Collaborate closely with product management to enable Data Science features that deliver significant value to customers, ensuring that these features are optimized for operational efficiency.
  • Manage every stage of the AI/ML development lifecycle, from initial concept through deployment in a production environment.
  • Provide leadership to other Data Scientists by exemplifying exceptional quality in work, nurturing a culture of continuous learning, and offering daily guidance in their research endeavors.
  • Effectively communicate ideas drawn from complex data with clarity and insight.


Qualifications (What you'll bring)

  • Master’s in operations research, Statistics, Computer Science, Data Science, or related field.
  • 8+ years of experience as a data scientist, data engineer, or ML engineer.
  • Demonstrable history of bringing Data Science features to enterprise applications.
  • Exceptional Python and SQL coding skills.
  • Experience with Optimization, Machine Learning, Generative AI, NLP, Statistics, and Simulation.
  • Experience with AWS data and ML technologies (SageMaker, Glue, Athena, Redshift)


Preferred qualifications:

  • Experience working with datasets in the domains of project management, software development, and resource planning.
  • Experience with common libraries and frameworks in data science (Scikit Learn, TensorFlow, PyTorch).
  • Experience with ML platform tools (AWS SageMaker).
  • Skilled at working as part of a global, diverse workforce of high-performing individuals.
  • AWS Certification is a plus
Read more
AryuPay Technologies
Bhavana Chaudhari
Posted by Bhavana Chaudhari
Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹9L / yr
skill iconDjango
skill iconPython
RESTful APIs
skill iconFlask
skill iconPostgreSQL
+7 more

We are seeking a highly skilled and experienced Python Developer with a strong background in fintech to join our dynamic team. The ideal candidate will have 7+ years of professional experience in Python development, with a proven track record of delivering high-quality software solutions in the fintech industry.

Responsibilities:

Design, build, and maintain RESTful APIs using Django and Django Rest Framework.

Integrate AI/ML models into existing applications to enhance functionality and provide data-driven insights.

Collaborate with cross-functional teams, including product managers, designers, and other developers, to define and implement new features and functionalities.

Manage deployment processes, ensuring smooth and efficient delivery of applications.

Implement and maintain payment gateway solutions to facilitate secure transactions.

Conduct code reviews, provide constructive feedback, and mentor junior members of the development team.

Stay up-to-date with emerging technologies and industry trends, and evaluate their potential impact on our products and services.

Maintain clear and comprehensive documentation for all development processes and integrations.
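
As a rough illustration of the Django Rest Framework work described above, here is a minimal sketch of a payment-initiation endpoint. The serializer fields, URL, and gateway call are hypothetical placeholders, not details of AryuPay's actual stack.

```python
from rest_framework import serializers, status
from rest_framework.response import Response
from rest_framework.views import APIView


class PaymentSerializer(serializers.Serializer):
    # Hypothetical fields for a payment-initiation request.
    order_id = serializers.CharField(max_length=64)
    amount = serializers.DecimalField(max_digits=12, decimal_places=2)
    currency = serializers.CharField(max_length=3, default="INR")


class InitiatePaymentView(APIView):
    def post(self, request):
        serializer = PaymentSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        data = serializer.validated_data
        # A real implementation would call the payment gateway SDK here
        # and persist the transaction before responding.
        return Response(
            {"order_id": data["order_id"], "status": "initiated"},
            status=status.HTTP_201_CREATED,
        )
```

The view would then be wired into urls.py with something like path("payments/initiate/", InitiatePaymentView.as_view()).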

Requirements:

Proficiency in Python and Django/Django Rest Framework.

Experience with REST API development and integration.

Knowledge of AI/ML concepts and practical experience integrating AI/ML models.

Hands-on experience with deployment tools and processes.

Familiarity with payment gateway integration and management.

Strong understanding of database systems (SQL, PostgreSQL, MySQL).

Experience with version control systems (Git).

Strong problem-solving skills and attention to detail.

Excellent communication and teamwork skills.

Job Types: Full-time, Permanent

Work Location: In person

Read more
JobTwine

at JobTwine

2 candid answers
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
4 - 5 yrs
Upto ₹25L / yr (Varies)
skill iconAmazon Web Services (AWS)
Windows Azure
skill iconDocker
skill iconKubernetes
Shell Scripting
+2 more

About JobTwine

JobTwine is an AI-powered platform offering Interview as a Service, helping companies hire 50% faster while doubling the quality of hire. AI Interviews, Human Decisions, Zero Compromises: we leverage AI with human expertise to discover, assess, and hire top talent. JobTwine automates scheduling, uses an AI Copilot to guide human interviewers for consistency, and generates structured, high-quality automated feedback.


Role Overview

We are looking for a Senior DevOps Engineer with 4–5 years of experience, a product-based mindset, and the ability to thrive in a startup environment.


Key Skills & Requirements

  • 4–5 years of hands-on DevOps experience
  • Experience in product-based companies and startups
  • Strong expertise in CI/CD pipelines
  • Hands-on experience with AWS / GCP / Azure
  • Experience with Docker & Kubernetes
  • Strong knowledge of Linux and Shell scripting
  • Infrastructure as Code: Terraform / CloudFormation
  • Monitoring & logging: Prometheus, Grafana, ELK stack
  • Experience in scalability, reliability and automation


What You Will Do

  • Work closely with Sandip, CTO of JobTwine on Gen AI DevOps initiatives
  • Build, optimize, and scale infrastructure supporting AI-driven products
  • Ensure high availability, security and performance of production systems
  • Collaborate with engineering teams to improve deployment and release processes


Why Join JobTwine ?

  • Direct exposure to leadership and real product decision-making
  • Steep learning curve with high ownership and accountability
  • Opportunity to build and scale a core B2B SaaS product
Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr
skill iconNodeJS (Node.js)
skill iconPython
skill iconJava
skill iconAmazon Web Services (AWS)
skill iconDocker
+1 more

Job Description

Key Responsibilities

  • API & Service Development:
  • Build RESTful and GraphQL APIs for e-commerce, order management, inventory, pricing, and promotions.
  • Database Management:
  • Design efficient schemas and optimize performance across SQL and NoSQL data stores.
  • Integration Development:
  • Implement and maintain integrations with ERP (SAP B1, ERPNext), CRM, logistics, and third-party systems.
  • System Performance & Reliability:
  • Write scalable, secure, and high-performance code to support real-time retail operations.
  • Collaboration:
  • Work closely with frontend, DevOps, and product teams to ship new features end-to-end.
  • Testing & Deployment:
  • Contribute to CI/CD pipelines, automated testing, and observability improvements.
  • Continuous Improvement:
  • Participate in architecture discussions and propose improvements to scalability and code quality.



Requirements

Required Skills & Experience

  • 3–5 years of hands-on backend development experience in Node.js, Python, or Java.
  • Strong understanding of microservices, REST APIs, and event-driven architectures.
  • Experience with databases such as MySQL/PostgreSQL (SQL) and MongoDB/Redis (NoSQL).
  • Hands-on experience with AWS / GCP and containerization (Docker, Kubernetes).
  • Familiarity with Git, CI/CD, and code review workflows.
  • Good understanding of API security, data protection, and authentication frameworks.
  • Strong problem-solving skills and attention to detail.


Nice to Have

  • Experience in e-commerce or omnichannel retail platforms.
  • Exposure to ERP / OMS / WMS integrations.
  • Familiarity with GraphQL, Serverless, or Kafka / RabbitMQ.
  • Understanding of multi-brand or multi-country architecture challenges.


Read more
Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 8 yrs
Upto ₹30L / yr (Varies)
Automation
Terraform
skill iconPython
skill iconNodeJS (Node.js)
skill iconAmazon Web Services (AWS)

Drive the design, automation, and reliability of Albert Invent’s core platform to support scalable, high-performance AI applications.

You will partner closely with Product Engineering and SRE teams to ensure security, resiliency, and developer productivity while owning end-to-end service operability.
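
As a small, hedged illustration of the low-latency focus described here (the requirements below mention APIs under 200ms), the snippet checks response-time percentiles against such a budget. The URL and sample count are placeholders; a real setup would use a load tool like K6 rather than a sequential loop.

```python
import statistics
import requests

URL = "https://api.example.com/health"   # hypothetical endpoint
LATENCY_BUDGET_MS = 200

samples = []
for _ in range(50):
    resp = requests.get(URL, timeout=5)
    samples.append(resp.elapsed.total_seconds() * 1000)  # milliseconds

p95 = statistics.quantiles(samples, n=20)[18]  # 95th percentile
print(f"p95 latency: {p95:.1f} ms (budget {LATENCY_BUDGET_MS} ms)")
if p95 > LATENCY_BUDGET_MS:
    print("Budget exceeded - investigate before shipping")
```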


Key Responsibilities

  • Own the design, reliability, and operability of Albert’s mission-critical platform.
  • Work closely with Product Engineering and SRE to build scalable, secure, and high-performance services.
  • Plan and deliver core platform capabilities that improve developer velocity, system resilience, and scalability.
  • Maintain a deep understanding of microservices topology, dependencies, and behavior.
  • Act as the technical authority for performance, reliability, and availability across services.
  • Drive automation and orchestration across infrastructure and operations.
  • Serve as the final escalation point for complex or undocumented production issues.
  • Lead root-cause analysis, mitigation strategies, and long-term system improvements.
  • Mentor engineers in building robust, automated, and production-grade systems.
  • Champion best practices in SRE, reliability, and platform engineering.

Must-Have Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
  • 4+ years of strong backend coding in Python or Node.js.
  • 4+ years of overall software engineering experience, including 2+ years in an SRE / automation-focused role.
  • Strong hands-on experience with Infrastructure as Code (Terraform preferred).
  • Deep experience with AWS cloud infrastructure and distributed systems (microservices, APIs, service-to-service communication).
  • Experience with observability systems – logs, metrics, and tracing.
  • Experience using CI/CD pipelines (e.g., CircleCI).
  • Performance testing experience using K6 or similar tools.
  • Strong focus on automation, standards, and operational excellence.
  • Experience building low-latency APIs (< 200ms response time).
  • Ability to work in fast-paced, high-ownership environments.
  • Proven ability to lead technically, mentor engineers, and influence engineering quality.

Good-to-Have Skills

  • Kubernetes and container orchestration experience.
  • Observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
  • Experience building Internal Developer Platforms (IDPs) or reusable engineering frameworks.
  • Exposure to ML infrastructure or data engineering pipelines.
  • Experience working in compliance-driven environments (SOC2, HIPAA, etc.).


Read more
Ride-hailing Industry

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹42L - ₹45L / yr
DevOps
skill iconPython
Shell Scripting
Infrastructure
Terraform
+16 more

JOB DETAILS:

- Job Title: Senior Devops Engineer 2

- Industry: Ride-hailing

- Experience: 5-7 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up, with experience handling large-scale production traffic.

2.   Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs

4.   Candidate must have experience in database migration from scratch 

5.   Must have a firm hold on the container orchestration tool Kubernetes

6.   Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

7.   Understanding programming languages like GO/Python, and Java

8.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

9.   Working experience on Cloud platform - AWS

10. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
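
To give a sense of what the 99.99% uptime target mentioned below implies, here is a quick back-of-the-envelope error-budget calculation (a generic illustration, not part of the original posting):

```python
# Monthly error budget implied by a 99.99% availability target (30-day month).
target = 0.9999
minutes_per_month = 30 * 24 * 60          # 43,200 minutes

allowed_downtime = minutes_per_month * (1 - target)
print(f"Allowed downtime: {allowed_downtime:.2f} minutes per month")  # ~4.32 minutes
```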

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding programming languages like GO/Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

The company’s team handles everything – infra, tooling, and a fleet of self-managed databases – at a scale that includes:

● 150+ microservices with event-driven architecture across different tech stacks Golang/ java/ node

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Read more
Ride-hailing Industry

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
skill iconPython
Shell Scripting
skill iconKubernetes
Terraform
+15 more

JOB DETAILS:

- Job Title: Lead DevOps Engineer

- Industry: Ride-hailing

- Experience: 6-9 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up, with experience handling large-scale production traffic.

2.   Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)

4.   Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs

5.   Candidate must have first-hand experience in database migration from scratch

6.   Must have a firm hold on the container orchestration tool Kubernetes

7.   Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

8.   Understanding programming languages like GO/Python, and Java

9.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

10.   Working experience on Cloud platform -AWS

11. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding programming languages like GO/Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

The company’s team handles everything – infra, tooling, and a fleet of self-managed databases – at a scale that includes:

● 150+ microservices with event-driven architecture across different tech stacks Golang/ java/ node

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Read more
Ride-hailing Industry

Ride-hailing Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹34L - ₹37L / yr
DevOps
skill iconPython
Shell Scripting
skill iconKubernetes
Monitoring
+18 more

JOB DETAILS:

- Job Title: Senior Devops Engineer 1

- Industry: Ride-hailing

- Experience: 4-6 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1. Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).

3. Candidate must have solid experience with Kubernetes.

4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Add on- Prometheus & Grafana etc.

5. Candidate must be an individual contributor with strong ownership.

6. Candidate must have hands-on experience with DATABASE MIGRATIONS and observability tools such as Prometheus and Grafana.

7. Candidate must have working knowledge of Go/Python and Java.

8. Candidate should have working experience on Cloud platform - AWS

9. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.

- Understanding the needs of stakeholders and conveying this to developers.

- Working on ways to automate and improve development and release processes.

- Identifying technical problems and developing software updates and ‘fixes’.

- Working with software developers to ensure that development follows established processes and works as intended.

- Do what it takes to keep the uptime above 99.99%.

- Understand DevOps philosophy and evangelize the principles across the organization.

- Strong communication and collaboration skills to break down the silos

 

Job Requirements:

- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.

- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.

- Strong background in operating systems like Linux.

- Understands the container orchestration tool Kubernetes.

- Proficient Knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Add on- Prometheus & Grafana etc.

- Problem-solving attitude, and ability to write scripts using any scripting language.

- Understanding programming languages like GO/Python, and Java.

- Basic understanding of databases and middlewares like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

- Should be able to take ownership of tasks, and must be responsible.

- Good communication skills

 

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Shikha Nagar
Posted by Shikha Nagar
Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
skill iconJava
Multithreading
skill iconAmazon Web Services (AWS)
  • Strong expertise in Java 8+, Spring Boot, REST APIs.
  • Strong front-end experience with Angular 8+, TypeScript, HTML, CSS.
  • Experience with SQL/NoSQL databases (MySQL, PostgreSQL, MongoDB, etc.).
  • Hands-on with Git, Maven/Gradle, Jenkins, CI/CD.
  • Knowledge of cloud platforms (AWS) is an added advantage.
  • Experience with Agile/Scrum methodologies.
  • Domain Expertise: Proven experience working on Auto-Loan Management Systems (LMS), Vehicle Finance, or related banking/NBFC solutions.


Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹50L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconPython
skill iconJava
Data engineering
+10 more

Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)

Experience : 5 to 10 Years

Location : Bengaluru, India

Employment Type : Full-Time | Onsite


Role Overview :

We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.

In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.


Mandatory Skills :

Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
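
As a rough sketch of the streaming side of this stack, the snippet below consumes events from Kafka and runs them through a placeholder model. It assumes the kafka-python client and an "events" topic, both of which are illustrative choices rather than details from the posting.

```python
import json

from kafka import KafkaConsumer   # pip install kafka-python


def score(event: dict) -> float:
    """Placeholder for a real ML model (e.g., a loaded scikit-learn or TensorFlow model)."""
    return float(len(event.get("features", [])))


consumer = KafkaConsumer(
    "events",                                  # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    print(event.get("id"), "score:", score(event))
```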


Key Responsibilities :

  • Architect, design, and develop scalable full-stack applications for data and AI-driven products.
  • Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
  • Deploy, integrate, and scale ML/AI models in production environments.
  • Drive system design, architecture discussions, and API/interface standards.
  • Ensure engineering best practices across code quality, testing, performance, and security.
  • Mentor and guide junior developers through reviews and technical decision-making.
  • Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
  • Monitor, diagnose, and optimize performance issues across the application stack.
  • Maintain comprehensive technical documentation for scalability and knowledge-sharing.

Required Skills & Experience :

  • Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
  • Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
  • Full Stack Proficiency :
  • Front-end : React / Angular / Vue.js
  • Back-end : Node.js / Python / Java
  • Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
  • AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
  • Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
  • Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
  • Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).

Soft Skills :

  • Excellent communication and cross-functional collaboration skills.
  • Strong analytical mindset with structured problem-solving ability.
  • Self-driven with ownership mentality and adaptability in fast-paced environments.

Preferred Qualifications (Bonus) :

  • Experience deploying distributed, large-scale ML or data-driven platforms.
  • Understanding of data governance, privacy, and security compliance.
  • Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
  • Experience working in Agile environments (Scrum/Kanban).
  • Active open-source contributions or a strong GitHub technical portfolio.
Read more
Ganit Business Solutions

at Ganit Business Solutions

3 recruiters
Agency job
via hirezyai by HR Hirezyai
Bengaluru (Bangalore), Chennai, Mumbai
5.5 - 12 yrs
₹15L - ₹25L / yr
skill iconAmazon Web Services (AWS)
PySpark
SQL

Roles & Responsibilities

  • Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
  • Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
  • Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems (a minimal PySpark sketch follows this list).
  • Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
  • Data Modeling: Develop and maintain data models with strong governance practices.
  • Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
  • Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for secure, tamper-resistant data management.
  • Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.
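
A minimal PySpark sketch of a Bronze-to-Silver hop in a Medallion layout, as referenced above. Paths, column names, and the use of Parquet (rather than the Delta/DLT tables this role actually targets) are simplifying assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw ingested JSON, kept as-is (hypothetical S3 path).
bronze = spark.read.json("s3://example-lake/bronze/orders/")

# Silver: deduplicated, validated, and lightly conformed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("ingest_date", F.to_date("ingested_at"))
)

silver.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://example-lake/silver/orders/"
)
```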

Educational Qualifications

  • Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
  • Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.
  • Skills
  • Must-have: Proficiency in Spark, SQL, Python, and other relevant data processing technologies. Strong knowledge of Databricks and its components, including Delta Live Table (DLT) pipeline implementations. Expertise in on-premises to cloud Spark code optimization and Medallion Architecture.

Good to Have

  • Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).

Soft Skills

  • Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.
  • Certifications
  • AWS/GCP/Azure Data Engineer Certification.


Read more
Inflectionio

at Inflectionio

1 candid answer
Renu Philip
Posted by Renu Philip
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹30L / yr
skill iconAmazon Web Services (AWS)
skill iconKubernetes
skill iconJenkins
Chef
CI/CD
+6 more

We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
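
For a taste of the Kubernetes side of the role, here is a small sketch using the official Python client to report pod health in a namespace. The namespace name is a placeholder, and production tooling would typically feed this into monitoring rather than print it.

```python
from kubernetes import client, config   # pip install kubernetes

# Use the local kubeconfig; inside a cluster, config.load_incluster_config() would be used instead.
config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="production")   # hypothetical namespace
for pod in pods.items:
    statuses = pod.status.container_statuses or []
    ready = all(c.ready for c in statuses)
    print(f"{pod.metadata.name}: {pod.status.phase} ({'ready' if ready else 'not ready'})")
```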


Responsibilities: 

* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.

* Deploy and orchestrate containerized applications using Kubernetes.

* Implement and maintain infrastructure as code (IaC) using Terraform.

* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.

* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.

* Collaborate with cross-functional teams to define technical requirements and deliver solutions.

* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.



Requirements: 

* 2+ years of experience with AWS, including practical exposure to its services in production environments.

* Demonstrated expertise in Kubernetes for container orchestration.

* Proficiency in using Terraform for managing infrastructure as code.

* Exposure to at least one CI/CD tool, such as Jenkins or Chef.

* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.

Read more
Appiness Interactive Pvt. Ltd.
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹12L / yr
skill iconPython
skill iconDjango
skill iconReact.js
skill iconNextJs (Next.js)
skill iconPostgreSQL
+2 more

Location : Bengaluru, India

Type : Full-time

Experience : 4-7 Years

Mode : Hybrid


The Role

We're looking for a Full Stack Engineer who thrives on building high-performance applications at scale. You'll work across our entire stack—from optimizing PostgreSQL queries on 140M+ records to crafting intuitive React interfaces. This is a high-impact role where your code directly influences how sales teams discover and engage with prospects worldwide.
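
To make the performance work concrete, here is a hedged sketch of a read-through cache in front of PostgreSQL using redis-py and psycopg2. Table, column, key, and connection details are invented for illustration; the real schema and TTLs would differ.

```python
import json

import psycopg2   # PostgreSQL driver
import redis      # redis-py client

pg = psycopg2.connect("dbname=prospects user=app")        # hypothetical DSN
cache = redis.Redis(host="localhost", port=6379)

def companies_by_domain(domain: str) -> list[dict]:
    """Serve from Redis when possible; fall back to PostgreSQL and cache the result."""
    key = f"companies:{domain}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)

    with pg.cursor() as cur:
        # An index on companies(domain) keeps this lookup fast even at 140M+ rows.
        cur.execute("SELECT id, name FROM companies WHERE domain = %s", (domain,))
        rows = [{"id": r[0], "name": r[1]} for r in cur.fetchall()]

    cache.setex(key, 300, json.dumps(rows))   # cache for five minutes
    return rows
```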

What You'll Do

  • Build and optimize REST APIs using Django REST Framework handling millions of records
  • Design and implement complex database queries, indexes, and caching strategies for PostgreSQL
  • Develop responsive, high-performance front-end interfaces with Next.js and React
  • Implement Redis caching layers and optimize query performance for sub-second response times
  • Design and implement smart search/filter systems with complex logic
  • Collaborate on data pipeline architecture for processing large datasets
  • Write clean, testable code with comprehensive unit and integration tests
  • Participate in code reviews, architecture discussions, and technical planning

Required Skills

  • 4-7 years of professional experience in full stack development
  • Strong proficiency in Python and Django/Django REST Framework
  • Expert-level PostgreSQL knowledge: query optimization, indexing, EXPLAIN ANALYZE, partitioning
  • Solid experience with Next.js, React, and modern JavaScript/TypeScript
  • Experience with state management (Zustand, Redux, or similar)
  • Working knowledge of Redis for caching and session management
  • Familiarity with AWS services (RDS, EC2, S3, CloudFront)
  • Understanding of RESTful API design principles and best practices
  • Experience with Git, CI/CD pipelines, and agile development workflows

Nice to Have

  • Experience with Elasticsearch for full-text search at scale
  • Knowledge of data scraping, ETL pipelines, or data enrichment
  • Experience with Celery for async task processing
  • Familiarity with Tailwind CSS and modern UI/UX practices
  • Previous work on B2B SaaS or data-intensive applications
  • Understanding of security best practices and anti-scraping measures


Our Tech Stack

Backend

Python, Django REST Framework

Frontend

Next.js, React, Zustand, Tailwind CSS

Database

PostgreSQL 17, Redis

Infrastructure

AWS (RDS, EC2, S3, CloudFront), Docker

Tools

GitHub, pgBouncer


Why Join Us

  • Work on a product processing 140M+ records—real scale, real challenges
  • Direct impact on product direction and technical decisions
  • Modern tech stack with room to experiment and innovate
  • Collaborative team environment with a focus on growth
  • Competitive compensation and flexible hybrid work model


Read more
Vola Finance

at Vola Finance

1 video
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Upto ₹12L / yr (Varies)
skill iconPHP
skill iconLaravel
MySQL
skill iconAmazon Web Services (AWS)
skill iconGit

Senior PHP Laravel Backend Developer


Requirements:

• Proficiency in MySQL, AWS, Git, PHP, HTML.

• Minimum 2 years of experience in Laravel framework.

• Minimum 3 years of experience in PHP development.

• Overall professional experience of 3+ years.

• Basic knowledge of JavaScript, TypeScript, Node.js, and Express framework.

• Education: Graduation with an aggregate of 70%.


Roles and Responsibilities:

• The primary role will be development, quality check and maintenance of the platform to ensure improvement and stability.

• Contribute to the development of effective functions and systems that can meet the overall objectives of the company.

• Understanding of performance engineering and optimization.

• Ability to design and code complex programs.

Read more
Appsforbharat
Pooja V
Posted by Pooja V
Bengaluru (Bangalore)
6 - 13 yrs
Best in industry
skill iconGo Programming (Golang)
skill iconPython
skill iconAmazon Web Services (AWS)
SQL

About the role


We are seeking a seasoned Backend Tech Lead with deep expertise in Golang and Python to lead our backend team. The ideal candidate has 6+ years of experience in backend technologies and 2–3 years of proven engineering mentoring experience, having successfully scaled systems and shipped B2C applications in collaboration with product teams.

Responsibilities

Technical & Product Delivery

● Oversee design and development of backend systems operating at 10K+ RPM scale.

● Guide the team in building transactional systems (payments, orders, etc.) and behavioral systems (analytics, personalization, engagement tracking).

● Partner with product managers to scope, prioritize, and release B2C product features and applications.

● Ensure architectural best practices, high-quality code standards, and robust testing practices.

● Own delivery of projects end-to-end with a focus on scalability, reliability, and business impact.

Operational Excellence

● Champion observability, monitoring, and reliability across backend services.

● Continuously improve system performance, scalability, and resilience.

● Streamline development workflows and engineering processes for speed and quality.

Requirements

Experience:

7+ years of professional experience in backend technologies.

2-3 years as a Tech Lead driving delivery.

● Technical Skills:

Strong hands-on expertise in Golang and Python.

Proven track record with high-scale systems (≥10K RPM).

Solid understanding of distributed systems, APIs, SQL/NoSQL databases, and cloud platforms.

Leadership Skills:

Demonstrated success in managing teams through 2–3 appraisal cycles.

Strong experience working with product managers to deliver consumer-facing applications.

● Excellent communication and stakeholder management abilities.

Nice-to-Have

● Familiarity with containerization and orchestration (Docker, Kubernetes).

● Experience with observability tools (Prometheus, Grafana, OpenTelemetry).

● Previous leadership experience in B2C product companies operating at scale.

What We Offer

● Opportunity to lead and shape a backend engineering team building at scale.

● A culture of ownership, innovation, and continuous learning.

● Competitive compensation, benefits, and career growth opportunities.

Read more
AI Powered Software Development (Product Company)

AI Powered Software Development (Product Company)

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Hyderabad, Pune, Mumbai, India
3 - 8 yrs
₹15L - ₹30L / yr
DevOps
Reliability engineering
CloudOps
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
+20 more

🚀 RECRUITING BOND HIRING


Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)


⚡ THIS IS NOT A MONITORING ROLE


THIS IS A COMMAND ROLE

You don’t watch dashboards.

You control outcomes.


You don’t react to incidents.

You eliminate them before they escalate.


This role powers an AI-driven SaaS + IoT platform where:

---> Uptime is non-negotiable

---> Latency is hunted

---> Failures are never allowed to repeat


Incidents don’t grow.

Problems don’t hide.

Uptime is enforced.


🧠 WHAT YOU’LL OWN

(Real Work. Real Impact.)


🔍 Total Observability

---> Real-time visibility across cloud, application, database & infrastructure

---> High-signal dashboards (Grafana + cloud-native tools)

---> Performance trends tracked before growth breaks systems

🚨 Smart Alerting (No Noise)

---> Alerts that fire only when action is required

---> Zero false positives. Zero alert fatigue

Right signal → right person → right time


⚙ Automation as a Weapon

---> End-to-end automation of operational tasks

---> Standardized logging, metrics & alerting

---> Systems that scale without human friction


🧯 Incident Command & Reliability

---> First responder for critical incidents (on-call rotation)

---> Root cause analysis across network, app, DB & storage

Fix fast — then harden so it never breaks the same way again

📘 Operational Excellence

---> Battle-tested runbooks

---> Documentation that actually works under pressure

Every incident → a stronger platform


🛠️ TECHNOLOGIES YOU’LL MASTER

☁ Cloud: AWS | Azure | Google Cloud

📊 Monitoring: Grafana | Metrics | Traces | Logs

📡 Alerting: Production-grade alerting systems

🌐 Networking: DNS | Routing | Load Balancers | Security

🗄 Databases: Production systems under real pressure

⚙ DevOps: Automation | Reliability Engineering


🎯 WHO WE’RE LOOKING FOR

Engineers who take uptime personally.


You bring:

---> 3+ years in Cloud Ops / DevOps / SRE

---> Live production SaaS experience

---> Deep AWS / Azure / GCP expertise

---> Strong monitoring & alerting experience

---> Solid networking fundamentals

---> Calm, methodical incident response

---> Bonus (Highly Preferred):

---> B2B SaaS + IoT / hybrid platforms

---> Strong automation mindset

---> Engineers who think in systems, not tickets


💼 JOB DETAILS

📍 Bengaluru

🏢 Hybrid (WFH)

💰 (Final CTC depends on experience & interviews)


🌟 WHY THIS ROLE?

Most cloud teams manage uptime. We weaponize it.

Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.


📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?

Read more
AI Powered - Software Development (Product Company)

AI Powered - Software Development (Product Company)

Agency job
via Recruiting Bond by Pavan Kumar
India, Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Mumbai, Pune
3 - 7 yrs
₹15L - ₹30L / yr
DevOps
Reliability engineering
CloudOps
Cloud Operations
Monitoring
+25 more

🚀 RECRUITING BOND HIRING


Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)


⚡ THIS IS NOT A MONITORING ROLE


THIS IS A COMMAND ROLE

You don’t watch dashboards.

You control outcomes.


You don’t react to incidents.

You eliminate them before they escalate.


This role powers an AI-driven SaaS + IoT platform where:

---> Uptime is non-negotiable

---> Latency is hunted

---> Failures are never allowed to repeat


Incidents don’t grow.

Problems don’t hide.

Uptime is enforced.


🧠 WHAT YOU’LL OWN

(Real Work. Real Impact.)


🔍 Total Observability

---> Real-time visibility across cloud, application, database & infrastructure

---> High-signal dashboards (Grafana + cloud-native tools)

---> Performance trends tracked before growth breaks systems

🚨 Smart Alerting (No Noise)

---> Alerts that fire only when action is required

---> Zero false positives. Zero alert fatigue

Right signal → right person → right time


⚙ Automation as a Weapon

---> End-to-end automation of operational tasks

---> Standardized logging, metrics & alerting

---> Systems that scale without human friction


🧯 Incident Command & Reliability

---> First responder for critical incidents (on-call rotation)

---> Root cause analysis across network, app, DB & storage

Fix fast — then harden so it never breaks the same way again

📘 Operational Excellence

---> Battle-tested runbooks

---> Documentation that actually works under pressure

Every incident → a stronger platform


🛠️ TECHNOLOGIES YOU’LL MASTER

☁ Cloud: AWS | Azure | Google Cloud

📊 Monitoring: Grafana | Metrics | Traces | Logs

📡 Alerting: Production-grade alerting systems

🌐 Networking: DNS | Routing | Load Balancers | Security

🗄 Databases: Production systems under real pressure

⚙ DevOps: Automation | Reliability Engineering


🎯 WHO WE’RE LOOKING FOR

Engineers who take uptime personally.


You bring:

---> 3+ years in Cloud Ops / DevOps / SRE

---> Live production SaaS experience

---> Deep AWS / Azure / GCP expertise

---> Strong monitoring & alerting experience

---> Solid networking fundamentals

---> Calm, methodical incident response

---> Bonus (Highly Preferred):

---> B2B SaaS + IoT / hybrid platforms

---> Strong automation mindset

---> Engineers who think in systems, not tickets


💼 JOB DETAILS

📍 Bengaluru

🏢 Hybrid (WFH)

💰 (Final CTC depends on experience & interviews)


🌟 WHY THIS ROLE?

Most cloud teams manage uptime. We weaponize it.

Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.


📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?

Read more
Bidgely

at Bidgely

4 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
6yrs+
Upto ₹65L / yr (Varies)
skill iconJava
skill iconSpring Boot
SQL
NOSQL Databases
skill iconAmazon Web Services (AWS)

Lead Software Engineer

Bidgely is seeking an exceptional and visionary Lead Software Engineer to join its core team in Bangalore. As a Lead Software Engineer, you will work closely with EMs and org heads to shape the roadmap and plans, set the technical direction for the team, influence architectural decisions, and mentor other engineers while delivering highly reliable, scalable products powered by large data, advanced machine learning models, and responsive user interfaces. Renowned for your deep technical expertise, you are capable of deconstructing any system, solving complex problems creatively, and elevating those around you. Join our innovative and dynamic team that thrives on creativity, technical excellence, and a belief that nothing is impossible with collaboration and hard work.


Responsibilities

  • Lead the design and delivery of complex, scalable web services, APIs, and backend data modules.
  • Define and drive adoption of best practices in system architecture, component reusability, and software design patterns across teams.
  • Provide technical leadership in product, architectural, and strategic engineering discussions.
  • Mentor and guide engineers at all levels, fostering a culture of learning and growth.
  • Collaborate with cross-functional teams (engineering, product management, data science, and UX) to translate business requirements into scalable, maintainable solutions.
  • Champion and drive continuous improvement initiatives for code quality, performance, security, and reliability.
  • Evaluate and implement emerging technologies, tools, and methodologies to ensure competitive advantage.
  • Present technical concepts and results clearly to both technical and non-technical stakeholders; influence organizational direction and recommend key technical investments.


Requirements

  • 6+ years of experience in designing and developing highly scalable backend and middle tier systems.
  • BS/MS/PhD in Computer Science or a related field from a leading institution.
  • Demonstrated mastery of data structures, algorithms, and system design; experience architecting large-scale distributed systems and leading significant engineering projects.
  • Deep fluency in Java, Spring, Hibernate, J2EE, RESTful services; expertise in at least one additional backend language/framework.
  • Strong hands-on experience with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra, Redis) databases, including schema design, optimization, and performance tuning for large data sets.
  • Experience with Distributed Systems, Cloud Architectures, CI/CD, and DevOps principles.
  • Strong leadership, mentoring, and communication skills; proven ability to drive technical vision and alignment across teams.
  • Track record of delivering solutions in fast-paced and dynamic start-up environments.
  • Commitment to quality, attention to detail, and a passion for coaching others.


Read more
Vola Finance

at Vola Finance

1 video
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Upto ₹20L / yr (Varies)
skill iconPython
FastAPI
RESTful APIs
GraphQL
skill iconAmazon Web Services (AWS)
+7 more

Python Backend Developer

We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
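
Since the role includes managing AWS resources from backend code, here is a small, hedged boto3 sketch that uploads an object to S3 and returns a short-lived download link. Bucket and key names are placeholders; credentials are assumed to come from the environment or an IAM role.

```python
import boto3

s3 = boto3.client("s3")   # credentials resolved from the environment / IAM role

def store_report(bucket: str, key: str, body: bytes) -> str:
    """Upload a report and return a presigned URL valid for one hour."""
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,
    )

# Example (hypothetical names):
# url = store_report("example-reports", "2024/summary.csv", b"id,amount\n1,100\n")
```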


Roles & Responsibilities

  • Develop and maintain scalable, secure, and robust backend services using Python
  • Design and implement RESTful APIs and/or GraphQL endpoints
  • Integrate user-facing elements developed by front-end developers with server-side logic
  • Write reusable, testable, and efficient code
  • Optimize components for maximum performance and scalability
  • Collaborate with front-end developers, DevOps engineers, and other team members
  • Troubleshoot and debug applications
  • Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
  • Ensure security and data protection

Mandatory Technical Skill Set

  • Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
  • Python backend development experience
  • Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
  • Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
  • Previous hands-on experience in:
  • EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
  • SQL
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
4 - 7 yrs
Best in industry
skill iconPython
pandas
NumPy
SQL
skill iconHTML/CSS
+4 more

Specific Knowledge/Skills


  1. 4-6 years of experience
  2. Proficiency in Python programming.
  3. Basic knowledge of front-end development.
  4. Basic knowledge of Data manipulation and analysis libraries
  5. Code versioning and collaboration. (Git)
  6. Knowledge of libraries for extracting data from websites (a short illustration follows this list).
  7. Knowledge of SQL and NoSQL databases
  8. Familiarity with RESTful APIs
  9. Familiarity with Cloud (Azure /AWS) technologies
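
A brief, hedged illustration of item 6 above: fetching a page and extracting text with requests and BeautifulSoup. The URL and CSS class are placeholders, and a real scraper would respect robots.txt and rate limits.

```python
import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4

resp = requests.get("https://example.com/listings", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
titles = [a.get_text(strip=True) for a in soup.find_all("a", class_="title")]
print(titles[:5])
```
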
Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹30L - ₹48L / yr
skill iconPython
skill iconReact.js
skill iconNodeJS (Node.js)
TypeScript
ReAct (Reason + Act)
+13 more

Review Criteria:

  • Strong Software Engineer fullstack profile using NodeJS / Python and React
  • 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
  • Must have strong experience in working on Typescript
  • Must have experience in message-based systems like Kafka, RabbitMq, Redis
  • Databases - PostgreSQL & NoSQL databases like MongoDB
  • Product Companies Only
  • Tier 1 Engineering Institutes (IIT, NIT, BITS, IIIT, DTU or equivalent)

 

Preferred:

  • Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
  • Experience in mentoring, coaching the team.


Role & Responsibilities:

We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
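
As a loose sketch of the stack this role describes (a Python backend plus a message-based hand-off), here is a minimal FastAPI endpoint that validates a payload and pushes an event onto a Redis list as a stand-in for Kafka/RabbitMQ. Endpoint, field, and queue names are hypothetical.

```python
import json

import redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
queue = redis.Redis(host="localhost", port=6379)


class Payment(BaseModel):
    order_id: str
    amount: float


@app.post("/payments")
def create_payment(payment: Payment):
    # Hand the event to a downstream consumer; Kafka or RabbitMQ would replace this in production.
    queue.rpush("payment-events", json.dumps({"order_id": payment.order_id, "amount": payment.amount}))
    return {"status": "queued", "order_id": payment.order_id}
```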

 

The Ideal Candidate Will Be Able To-

  • Take ownership of delivering performant, scalable and high-quality cloud-based software, both frontend and backend side.
  • Mentor team members to develop in line with product requirements.
  • Collaborate with Senior Architect for design and technology choices for product development roadmap.
  • Do code reviews.


Ideal Candidate:

  • Thorough knowledge of developing cloud-based software including backend APIs and react based frontend.
  • Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMq, Redis, MongoDB, ORM, SQL etc.
  • Experience with AWS services such as S3, IAM, Lambda etc.
  • Expert level coding skills in Python FastAPI/Django, NodeJs, TypeScript, ReactJs.
  • Eye for user responsive designs on the frontend.


Perks, Benefits and Work Culture:

  • We prioritize people above all else. While we're recognized for our innovative technology solutions, it's our people who drive our success. That’s why we offer a comprehensive and competitive benefits package designed to support your well-being and growth:
  • Medical Insurance with coverage up to INR 8,00,000 for the employee and their family
Read more
E-Commerce Industry

E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
  • Must have proven experience in securing Kubernetes and containerized environments including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security,or equivalent security scanning solutions.
  • Must have solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services like GuardDuty, Azure Security Center, etc.
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
  • B2B SaaS Product companies
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
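
As a small, hedged example of the cloud-native tooling this role touches (the criteria above mention services like GuardDuty), the snippet below pulls recent GuardDuty findings with boto3 and surfaces the high-severity ones. The threshold and what happens with the results are illustrative choices only.

```python
import boto3

guardduty = boto3.client("guardduty")   # region/credentials from the environment

HIGH_SEVERITY = 7.0   # GuardDuty labels findings with severity >= 7.0 as "High"

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(DetectorId=detector_id, MaxResults=50)["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]
    for finding in findings:
        if finding["Severity"] >= HIGH_SEVERITY:
            # In practice this would feed a SIEM or an alerting channel, not stdout.
            print(finding["Severity"], finding["Type"], finding["Title"])
```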


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on a company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits that company offers.

Read more
Valuebound
Suchandni Verma
Posted by Suchandni Verma
Bengaluru (Bangalore)
3 - 8 yrs
₹5L - ₹10L / yr
skill iconPostgreSQL
skill iconAmazon Web Services (AWS)

We are looking for a hands-on PostgreSQL Lead / Senior DBA (L3) to join our production engineering team. This is not an architect role. The focus is on deep PostgreSQL expertise, real-world production ownership, and mentoring junior DBAs within an existing database ecosystem.

You will work as a senior individual contributor with technical leadership responsibilities, operating in a live, high-availability environment with guidance and support from a senior team.
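
A short, hedged illustration of the day-to-day production work described above: inspecting a suspect query plan and spotting long-running sessions from Python with psycopg2. The DSN, table, and filter are placeholders; on a live system this is often done directly in psql.

```python
import psycopg2

conn = psycopg2.connect("dbname=appdb user=dba")   # hypothetical DSN
conn.autocommit = True

with conn.cursor() as cur:
    # Plan and actual runtime of a suspect query (read-only, no data changes).
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        print(line)

    # Longest-running active sessions right now.
    cur.execute(
        """
        SELECT pid, now() - query_start AS runtime, left(query, 80) AS query
        FROM pg_stat_activity
        WHERE state = 'active'
        ORDER BY runtime DESC
        LIMIT 5
        """
    )
    for row in cur.fetchall():
        print(row)
```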

Key Responsibilities

  • Own and manage PostgreSQL databases in production environments
  • Perform PostgreSQL installation, upgrades, migrations, and configuration
  • Handle L2/L3 production incidents, root cause analysis, and performance bottlenecks
  • Execute performance tuning and query optimization
  • Manage backup, recovery, replication, HA, and failover strategies
  • Support re-architecture and optimization initiatives led by senior stakeholders
  • Monitor database health, capacity, and reliability proactively (a minimal lag-check sketch in Python follows this list)
  • Collaborate with application, infra, and DevOps teams
  • Mentor and guide L1/L2 DBAs as part of the L3 role
  • Demonstrate ownership during night/weekend production issues (comp-offs provided)
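For context on the proactive monitoring expected above, below is a minimal sketch assuming psycopg2 and PostgreSQL 10+; the DSN is a hypothetical placeholder, and a production check would alert rather than print.

    import psycopg2

    DSN = "host=primary-db dbname=postgres user=monitor"  # hypothetical connection details

    LAG_QUERY = """
    SELECT application_name,
           state,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;
    """

    def report_replication_lag():
        """Print per-standby replay lag in bytes, as seen from the primary."""
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute(LAG_QUERY)
            for app, state, lag_bytes in cur.fetchall():
                print(f"{app}: state={state}, replay lag ~= {lag_bytes or 0} bytes")

    if __name__ == "__main__":
        report_replication_lag()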

Must-Have Skills (Non-Negotiable)

  • Very strong PostgreSQL expertise
  • Deep understanding of PostgreSQL internals and behavior
  • Proven experience with:
  • Performance tuning & optimization
  • Production troubleshooting (L2/L3)
  • Backup & recovery
  • Replication & High Availability
  • Ability to work independently in critical production scenarios
  • PostgreSQL-focused profiles are absolutely acceptable (no requirement to know other DBs)

Good-to-Have (Not Mandatory)

  • Exposure to AWS and/or Azure
  • Experience with cloud-managed or self-hosted Postgres
  • Knowledge of other databases (Oracle, MS SQL, DB2, ClickHouse, Neo4j, etc.) — purely a plus

Note: Strong on-prem PostgreSQL DBAs are welcome. Cloud gaps can be trained post-joining.

Work Model & Availability (Important – Please Read Carefully)

  • Work From Office only (Bangalore – Koramangala)
  • Regular day shift, but with a 24×7 production ownership mindset
  • Availability for night/weekend troubleshooting when required
  • No rigid shifts; expectation is responsible lead-level ownership
  • Comp-offs provided for off-hours work


TrumetricAI
Posted by Yashika Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹20L / yr
Amazon Web Services (AWS)
CI/CD
Git
Docker
Kubernetes

Key Responsibilities:

  • Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
  • Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
  • Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
  • Manage and monitor production deployments to ensure high availability and performance
  • Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
  • Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
  • Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
  • Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
  • Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
  • Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
  • Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor (a minimal CloudWatch sketch in Python follows this list)
  • Collaborate with development and QA teams to align infrastructure with application needs
  • Troubleshoot infrastructure and deployment issues efficiently and proactively
  • Ensure cloud cost optimization and usage tracking
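As one small illustration of the monitoring and alerting work above, here is a hedged boto3 sketch; the instance ID and SNS topic ARN are hypothetical placeholders, and the threshold would normally come from your SLOs.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def create_cpu_alarm(instance_id: str, sns_topic_arn: str) -> None:
        """Alarm when average EC2 CPU stays above 80% for two consecutive 5-minute periods."""
        cloudwatch.put_metric_alarm(
            AlarmName=f"high-cpu-{instance_id}",
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            Statistic="Average",
            Period=300,
            EvaluationPeriods=2,
            Threshold=80.0,
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=[sns_topic_arn],
        )

    if __name__ == "__main__":
        # Both arguments below are placeholders for real resources.
        create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:ap-south-1:111122223333:ops-alerts")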


Required Skills & Experience:

  • 3-4 years of hands-on experience in a DevOps role
  • Strong expertise with both AWS and Azure cloud platforms
  • Proficient in Git, branching strategies, and pull request workflows
  • Deep understanding of CI/CD concepts and experience with pipeline tools
  • Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
  • Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
  • Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
  • Experience with Infrastructure as Code tools (Terraform preferred)
  • Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
  • Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
  • Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
  • Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
  • Working knowledge of monitoring and logging tools
  • Strong troubleshooting and problem-solving skills


Good to Have (Bonus Points):

  • Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
  • Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
  • Experience with compliance/security best practices (SOC2, ISO, etc.)
  • Familiarity with Service Mesh (Istio, Linkerd) and API gateways
  • Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)


AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns (see the sketch after this list).
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
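As an illustrative sketch of the reflection and query-performance work above: the helpers below assume any DB-API-compatible cursor obtained from your Dremio client (ODBC, JDBC bridge, or an Arrow Flight wrapper), and the sys.reflections table and sample dataset names are assumptions, not a definitive Dremio API.

    import time

    def time_query(cursor, sql: str) -> float:
        """Execute a query on a DB-API cursor and return wall-clock latency in seconds."""
        start = time.perf_counter()
        cursor.execute(sql)
        cursor.fetchall()
        return time.perf_counter() - start

    def inspect_reflections(cursor) -> None:
        # Assumption: the connecting user can read Dremio's system tables.
        cursor.execute("SELECT * FROM sys.reflections")
        for row in cursor.fetchall():
            print(row)

    def benchmark_aggregate(cursor) -> None:
        # Hypothetical curated dataset; rerun after enabling a reflection to compare latency.
        sample_sql = "SELECT region, SUM(amount) FROM sales.curated_orders GROUP BY region"
        print(f"aggregate query took {time_query(cursor, sample_sql):.2f}s")

Timing the same representative queries before and after a reflection change is a simple way to confirm that tuning actually helps consumers.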


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Bengaluru (Bangalore)
5 - 7 yrs
₹5L - ₹8L / yr
HP LoadRunner
JMeter
Java
Kubernetes
Amazon Web Services (AWS)

We need an immediately available performance test engineer with hands-on, real-time exposure to LoadRunner and JMeter who has tested Java applications in AWS environments.

iMerit
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
Terraform
Apache Kafka
Python
Go Programming (Golang)
+4 more

Exp: 7-10 Years

CTC: up to 35 LPA


Skills:

  • 6–10 years DevOps / SRE / Cloud Infrastructure experience
  • Expert-level Kubernetes (networking, security, scaling, controllers)
  • Terraform Infrastructure-as-Code mastery
  • Hands-on Kafka production experience
  • AWS cloud architecture and networking expertise
  • Strong scripting in Python, Go, or Bash
  • GitOps and CI/CD tooling experience


Key Responsibilities:

  • Design highly available, secure cloud infrastructure supporting distributed microservices at scale
  • Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
  • Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
  • Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
  • Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
  • Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline (a small percentile sketch in Python follows this list)
  • Ensure production-ready disaster recovery and business continuity testing
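To illustrate the SLO/percentile monitoring mentioned above, here is a small standard-library Python sketch; the latency targets and synthetic samples are illustrative placeholders for data that would normally come from metrics or logs.

    import math
    import random

    def percentile(samples: list[float], pct: float) -> float:
        """Nearest-rank percentile; assumes a non-empty list of latencies in milliseconds."""
        ordered = sorted(samples)
        rank = max(1, math.ceil(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    def check_slo(samples: list[float], p95_target_ms: float = 200.0, p99_target_ms: float = 500.0) -> bool:
        p95 = percentile(samples, 95)
        p99 = percentile(samples, 99)
        print(f"p95={p95:.1f}ms (target {p95_target_ms}), p99={p99:.1f}ms (target {p99_target_ms})")
        return p95 <= p95_target_ms and p99 <= p99_target_ms

    if __name__ == "__main__":
        latencies = [abs(random.gauss(120, 40)) for _ in range(1000)]  # synthetic samples
        print("SLO met:", check_slo(latencies))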



If interested, kindly share your updated resume at 82008 31681.

Bengaluru (Bangalore)
3 - 6 yrs
₹8L - ₹12L / yr
Python
React.js
Amazon Web Services (AWS)
Django

Role Description

This is a full-time on-site role in Bengaluru for a Full Stack Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end development across the full stack, including the use of Cascading Style Sheets (CSS), to build effective and efficient applications.

Qualifications

  • Back-End Web Development and Full-Stack Development skills
  • Front-End Development and Software Development skills
  • Proficiency in Cascading Style Sheets (CSS)
  • Experience with Python, Django, and Flask frameworks
  • Strong problem-solving and analytical skills
  • Ability to work collaboratively in a team environment
  • Bachelor's or Master's degree in Computer Science or relevant field
  • Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
  • Front end - React.js
  • Data Engineering: Useful experience blending data engineering with core software engineering.
  • Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
  • CI/CD Tools: Familiarity with GitHub Actions is a plus.
  • Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
  • Code Optimization: Proficient in profiling and optimizing Python code (a minimal cProfile sketch follows this list).
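A minimal profiling sketch using only the standard library, as referenced in the code-optimization item above; slow_aggregate is a hypothetical stand-in for real application code.

    import cProfile
    import pstats

    def slow_aggregate(n: int = 200_000) -> int:
        """Deliberately simple hot path standing in for real application code."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        profiler = cProfile.Profile()
        profiler.enable()
        slow_aggregate()
        profiler.disable()
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)  # top 10 entries by cumulative time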


Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹28L / yr
Business Analysis
Data integration
SQL
PMS
CRS
+2 more

Job Description: Business Analyst – Data Integrations

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning

About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.

About the Role

We’re looking for a skilled Business Analyst – Data Integrations who can bridge the gap between business operations and technology teams, ensuring smooth, efficient, and scalable integrations. If you’re passionate about hospitality tech and enjoy solving complex data challenges, we’d love to hear from you!

What You’ll Do

Key Responsibilities

  • Collaborate with vendors to gather requirements for API development and ensure technical feasibility.
  • Collect API documentation from vendors; document and explain business logic to use external data sources effectively.
  • Access vendor applications to create and validate sample data; ensure the accuracy and relevance of test datasets (a small validation sketch in Python follows this list).
  • Translate complex business logic into documentation for developers, ensuring clarity for successful integration.
  • Monitor all integration activities and support tickets in Jira, proactively resolving critical issues.
  • Lead QA testing for integrations, overseeing pilot onboarding and ensuring solution viability before broader rollout.
  • Document onboarding processes and best practices to streamline future integrations and improve efficiency.
  • Build, train, and deploy machine learning models for forecasting, pricing, and optimization, supporting strategic goals.
  • Drive end-to-end execution of data integration projects, including scoping, planning, delivery, and stakeholder communication.
  • Gather and translate business requirements into actionable technical specifications, liaising with business and technical teams.
  • Oversee maintenance and enhancement of existing integrations, performing RCA and resolving integration-related issues.
  • Document workflows, processes, and best practices for current and future integration projects.
  • Continuously monitor system performance and scalability, recommending improvements to increase efficiency.
  • Coordinate closely with Operations for onboarding and support, ensuring seamless handover and issue resolution.
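As a sketch of the "create and validate sample data" responsibility above: a hedged Python example in which the vendor endpoint, field names, and token are hypothetical placeholders, and the requests library is assumed to be installed.

    import requests

    VENDOR_URL = "https://api.example-vendor.com/v1/reservations"  # hypothetical PMS/CRS endpoint
    REQUIRED_FIELDS = {"confirmation_id", "check_in", "check_out", "room_type", "rate"}

    def fetch_sample(api_token: str, limit: int = 5) -> list[dict]:
        """Pull a few reservations from the vendor API for spot-checking."""
        resp = requests.get(
            VENDOR_URL,
            headers={"Authorization": f"Bearer {api_token}"},
            params={"limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("reservations", [])

    def validate(records: list[dict]) -> list[str]:
        """Return human-readable issues found in the sample data."""
        issues = []
        for i, rec in enumerate(records):
            missing = REQUIRED_FIELDS - rec.keys()
            if missing:
                issues.append(f"record {i}: missing fields {sorted(missing)}")
        return issues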

Desired Skills & Qualifications

  • Strong experience in API integration, data analysis, and documentation.
  • Familiarity with Jira for ticket management and project workflow.
  • Hands-on experience with machine learning model development and deployment.
  • Excellent communication skills for requirement gathering and stakeholder engagement.
  • Experience with QA test processes and pilot rollouts.
  • Proficiency in project management, data workflow documentation, and system monitoring.
  • Ability to manage multiple integrations simultaneously and work cross-functionally.

Required Qualifications

  • Experience: Minimum 4 years in hotel technology or business analytics, preferably handling data integration or system interoperability projects.
  • Technical Skills: Basic proficiency in SQL or database querying; familiarity with data integration concepts such as APIs or ETL workflows (preferred but not mandatory); eagerness to learn and adapt to new tools, platforms, and technologies.
  • Hotel Technology Expertise: Understanding of systems such as PMS, CRS, Channel Managers, or RMS.
  • Project Management: Strong organizational and multitasking abilities.
  • Problem Solving: Analytical thinker capable of troubleshooting and driving resolution.
  • Communication: Excellent written and verbal skills to bridge technical and non-technical discussions.
  • Attention to Detail: Methodical approach to documentation, testing, and deployment.

Preferred Qualification

  • Exposure to debugging tools and troubleshooting methodologies.
  • Familiarity with cloud environments (AWS).
  • Understanding of data security and privacy considerations in the hospitality industry.

Why LodgIQ?

  • Join a fast-growing, mission-driven company transforming the future of hospitality.
  • Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
  • Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
  • Competitive salary and performance bonuses.
  • For more information, visit https://www.lodgiq.com

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
Databricks
Python
SQL
PySpark
Amazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept with ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns, frameworks, and automation practices.


 

Additional Comments:

# of Resources: 22 | Role(s): Technical Role | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

  • Design, build, and maintain scalable data pipelines using Databricks and PySpark.
  • Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
  • Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
  • Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
  • Ensure data quality, performance, and reliability across data workflows.
  • Participate in code reviews, data architecture discussions, and performance optimization initiatives.
  • Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.


Key Skills:

  • Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines (a minimal PySpark sketch follows this list).
  • Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
  • Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
  • Experience with data modeling, schema design, and performance optimization.
  • Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
  • Excellent problem-solving, communication, and collaboration skills.
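A minimal Databricks/PySpark-style sketch of the pipeline work listed above; the S3 paths, column names, and Delta output location are hypothetical, and on Databricks the SparkSession is normally provided as spark.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Ingest raw JSON landed in S3, keep valid rows, and aggregate by day.
    raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path
    clean = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("order_date", F.to_date("order_ts"))
    )
    daily = clean.groupBy("order_date").agg(
        F.count("*").alias("orders"),
        F.sum("amount").alias("revenue"),
    )

    # Write the curated output as Delta (available on Databricks or with delta-spark installed).
    daily.write.format("delta").mode("overwrite").save("s3://example-bucket/curated/daily_orders/")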

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

  • Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
  • Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
  • Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
  • Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
  • Experience with data modeling, schema design, and performance optimization.
  • Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Capace Software Private Limited
Bengaluru (Bangalore), Bhopal
5 - 10 yrs
₹4L - ₹10L / yr
Django
CI/CD
Software deployment
RESTful APIs
Flask
+8 more

Senior Python Django Developer 

Experience: Back-end development: 6 years (Required)


Location:  Bangalore/ Bhopal

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

Requirements:

  • 6 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (a small pytest sketch follows this list).
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
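To illustrate the TDD item above, a small pytest-style sketch; reconcile() and the data shapes are hypothetical stand-ins for real settlement logic, not the company's actual code.

    from decimal import Decimal

    def reconcile(ledger: dict, gateway: dict) -> dict:
        """Return transaction ids whose ledger and gateway amounts disagree, with the difference."""
        mismatches = {}
        for txn_id in ledger.keys() | gateway.keys():
            if ledger.get(txn_id) != gateway.get(txn_id):
                mismatches[txn_id] = gateway.get(txn_id, Decimal("0")) - ledger.get(txn_id, Decimal("0"))
        return mismatches

    def test_reconcile_flags_amount_mismatch():
        ledger = {"T1": Decimal("100.00"), "T2": Decimal("50.00")}
        gateway = {"T1": Decimal("100.00"), "T2": Decimal("49.00")}
        assert reconcile(ledger, gateway) == {"T2": Decimal("-1.00")}

    def test_reconcile_flags_missing_transaction():
        assert reconcile({"T1": Decimal("100.00")}, {}) == {"T1": Decimal("-100.00")}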

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.

Job Types: Full-time, Permanent


Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus
  • Yearly bonus

Ability to commute/relocate:

  • JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)

