AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru)


Apply to 50+ AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.

Oneture Technologies

at Oneture Technologies

1 recruiter
Eman Khan
Posted by Eman Khan
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
Upto ₹26L / yr (Varies)
Data Science
Demand forecasting
Predictive modelling
Forecasting
Time series
+6 more

About the Role

We are looking for a highly skilled Machine Learning Lead with proven expertise in demand forecasting to join our team. The ideal candidate will have 4-8 years of experience building and deploying ML models, strong knowledge of AWS ML services and MLOps practices, and the ability to lead a team while working directly with clients. This is a client-facing role that requires strong communication skills, technical depth, and leadership ability.


Key Responsibilities

  • Lead end-to-end design, development, and deployment of demand forecasting models.
  • Collaborate with clients to gather requirements, define KPIs, and translate business needs into ML solutions.
  • Architect and implement ML workflows using AWS ML ecosystem (SageMaker, Bedrock, Lambda, S3, Step Functions, etc.).
  • Establish and enforce MLOps best practices for scalable, reproducible, and automated model deployment and monitoring.
  • Mentor and guide a team of ML engineers and data scientists, ensuring technical excellence and timely delivery.
  • Partner with cross-functional teams (engineering, data, business) to integrate forecasting insights into client systems.
  • Present results, methodologies, and recommendations to both technical and non-technical stakeholders.
  • Stay updated with the latest advancements in forecasting algorithms, time-series modeling, and AWS ML offerings.


Required Skills & Experience

  • 4-8 years of experience in machine learning with a strong focus on time-series forecasting and demand prediction.
  • Hands-on experience with AWS ML stack (Amazon SageMaker, Step Functions, Lambda, S3, Athena, CloudWatch, etc.).
  • Strong understanding of MLOps pipelines (CI/CD for ML, model monitoring, retraining workflows).
  • Proficiency in Python, SQL, and ML libraries (TensorFlow, PyTorch, scikit-learn, Prophet, GluonTS, etc.); a brief illustrative sketch follows this list.
  • Experience working directly with clients and stakeholders to understand business requirements and deliver ML solutions.
  • Strong leadership and team management skills, with the ability to mentor and guide junior team members.
  • Excellent communication and presentation skills for both technical and business audiences.
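
For a concrete flavour of the forecasting work described above, here is a minimal, purely illustrative Python sketch using Prophet (one of the libraries listed). The file name and column names are hypothetical; a production demand-forecasting pipeline on AWS would add feature engineering, backtesting, and MLOps automation around it.

```python
# Minimal demand-forecasting sketch with Prophet (illustrative only).
# The CSV path and column names are hypothetical assumptions.
import pandas as pd
from prophet import Prophet

# Historical demand with assumed columns: date, units_sold
history = pd.read_csv("daily_demand.csv", parse_dates=["date"])
train = history.rename(columns={"date": "ds", "units_sold": "y"})

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(train)

# Forecast the next 28 days and keep the point estimate plus uncertainty band
future = model.make_future_dataframe(periods=28)
forecast = model.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]]
print(forecast.tail())
```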


Preferred Qualifications

  • Experience with retail, FMCG, or supply chain demand forecasting use cases.
  • Exposure to generative AI and LLMs for augmenting forecasting solutions.
  • AWS Certification (e.g., AWS Certified Machine Learning – Specialty).


What We Offer

  • Opportunity to lead impactful demand forecasting projects with global clients.
  • Exposure to cutting-edge ML, AI, and AWS technologies.
  • Collaborative, fast-paced, and growth-oriented environment.
  • Competitive compensation and benefits.
Read more
MontyCloud

at MontyCloud

2 candid answers
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
12 - 16 yrs
Upto ₹70L / yr (Varies)
Amazon Web Services (AWS)
Python

Role Overview

Are you passionate about revolutionizing the world of cloud platforms and eager to make a lasting impact on the industry? Do you thrive on the challenge of crafting large-scale cloud services that further simplify cloud adoption for other businesses? Are you driven to create impeccably designed APIs that will delight thousands of developers and turn them into loyal advocates? Or maybe you're excited by the prospect of engineering a near-real-time, millisecond-latency event processing pipeline?


If you are a driven, innovative, and experienced Principal Platform Engineer who is passionate about cloud technologies and looking to make a significant impact, we want to hear from you. Apply today to join our growing team!


MontyCloud is seeking an experienced Principal Platform Engineer with extensive knowledge in SaaS platforms, AWS cloud, and a proven track record of building and shipping highly scalable SaaS platforms. The ideal candidate will have strong system architecture and design skills, as well as exceptional programming and platform development experience.


Key Responsibilities:

  • Lead the design and implementation of our Cloud Management Platform, ensuring its scalability, reliability, and performance.
  • Collaborate with cross-functional teams to define system architecture, develop innovative solutions, and drive continuous improvements.
  • Utilize your expertise in AWS and other cloud technologies to build and maintain cloud-native components and services.
  • Serve as a technical mentor for the engineering team, fostering a culture of learning, collaboration, and innovation.
  • Drive the adoption of best practices, design patterns, and emerging technologies in the cloud domain.
  • Actively participate in code and design reviews, providing constructive feedback to team members.
  • Ensure the successful delivery of high-quality software by defining and implementing effective development processes and methodologies.
  • Communicate complex technical concepts effectively, via well-written technical documents and knowledge-sharing sessions, to both technical and non-technical stakeholders.


Must Have Skills:

  • Experience in software development, with a focus on SaaS platforms and AWS cloud technologies.
  • Proven experience in designing, building, and maintaining highly scalable, performant, and resilient systems.
  • Expertise in serverless architecture, event-driven design, distributed systems, and event streams.
  • Experience in designing, building and shipping low-latency and high-performance APIs.
  • Experience building with services such as Kafka, OpenSearch/Elasticsearch, AWS Kinesis, and Azure Streams.
  • Strong programming skills in languages such as Python, Java, or Go.
  • Experience working with infrastructure as code tools, such as Terraform or CloudFormation.
  • Experience in building AI/ML systems and familiarity with Large Language Models is nice to have.
  • Excellent verbal and written communication skills, with the ability to articulate complex concepts to diverse audiences.
  • Demonstrated leadership experience, with the ability to inspire and motivate a team.


Good to Have

  • Familiarity with containerization technologies, such as Docker and Kubernetes.

Experience:

  • 12-15 years of experience in software development, with a focus on SaaS platforms and AWS cloud technologies.
  • 8+ years of experience in building and shipping SaaS platforms
  • 4+ years of experience in building platforms using Serverless Architecture
  • 8+ years of experience in building cloud-native applications on AWS or Azure
  • 8+ years of experience in building applications using either Test Driven Development (TDD) or Behavior Driven Development (BDD) methodologies.


Education

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Read more
MontyCloud

at MontyCloud

2 candid answers
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
8 - 11 yrs
Upto ₹45L / yr (Varies)
Python
Amazon Web Services (AWS)

Role Overview:

We are seeking a seasoned Lead Platform Engineer with a strong background in platform development and a proven track record of leading technology design and teams. The ideal candidate will have at least 8 years of overall experience, with a minimum of 5 years in relevant roles. This position entails owning module design and spearheading the implementation process alongside a team of talented platform engineers.


At MontyCloud, you'll have the opportunity to work on cutting-edge technology, shaping the future of our cloud management platform with your expertise. If you're passionate about building scalable, efficient, and innovative cloud solutions, we'd love to have you on our team.


Responsibilities:

  • Lead the design and architecture of robust, scalable platform modules, ensuring alignment with business objectives and technical standards.
  • Drive the implementation of platform solutions, collaborating closely with platform engineers and cross-functional teams to achieve project milestones.
  • Mentor and guide a team of platform engineers, fostering an environment of growth and continuous improvement.
  • Stay abreast of emerging technologies and industry trends, incorporating them into the platform to enhance functionality and user experience.
  • Ensure the reliability and security of the platform through comprehensive testing and adherence to best practices.
  • Collaborate with senior leadership to set technical strategy and goals for the platform engineering team.


Must have skills:

  • Expertise in Python programming, with a solid foundation in writing clean, efficient, and scalable code.
  • Proven experience in serverless application development, designing and implementing microservices, and working within event-driven architectures.
  • Demonstrated experience in building and shipping high-quality SaaS platforms/applications on AWS, showcasing a portfolio of successful deployments.
  • Comprehensive understanding of cloud computing concepts, AWS architectural best practices, and familiarity with a range of AWS services, including but not limited to Lambda, RDS, DynamoDB, and API Gateway (a brief illustrative sketch follows this list).
  • Exceptional problem-solving skills, with a proven ability to optimize complex systems for efficiency and scalability.
  • Excellent communication skills, with a track record of effective collaboration with team members and successful engagement with stakeholders across various levels.
  • Previous experience leading technology design and engineering teams, with a focus on mentoring, guiding, and driving the team towards achieving project milestones and technical excellence.
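
Purely as an illustration of the serverless pattern named in the skills above (not MontyCloud's actual code), here is a minimal Python sketch of an API Gateway-triggered AWS Lambda function writing to DynamoDB; the table name, fields, and event shape are assumptions.

```python
# Hedged sketch of a serverless, event-driven handler: API Gateway -> Lambda -> DynamoDB.
# Table name, field names, and event shape are illustrative assumptions.
import json
import os
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "resources"))  # hypothetical table


def handler(event, context):
    """Handle an API Gateway proxy event and persist one item."""
    payload = json.loads(event.get("body") or "{}")
    item = {
        "id": str(uuid.uuid4()),
        "name": payload.get("name", "unnamed"),
        "status": "CREATED",
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```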


Good to Have skills:

  • AWS Certified Solutions Architect, AWS Certified Developer, or other relevant cloud development certifications.
  • Experience with the AWS Boto3 SDK for Python.
  • Exposure to other cloud platforms such as Azure or GCP.
  • Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes.


Experience:

  • Total Experience: 8+ years of experience in Software or Platform Engineering
  • Minimum 5 years of experience in roles directly relevant to platform development and team leadership
  • Minimum 4 years of experience in developing applications with Python
  • Minimum 3 years of experience in serverless application development
  • Minimum 3 years of experience in building event-driven architectures


Education:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.


Read more
IT Services

Agency job
via orangejobs by Shrutha Poojary
Bengaluru (Bangalore)
6 - 8 yrs
₹22L - ₹23L / yr
Java
Amazon Web Services (AWS)
Systems design
Design patterns
  • 6+ years of experience in Java backend development with expertise in Spring/Spring Boot and RESTful services.
  • Solid grasp of Object-Oriented Programming (OOP), system design, and design patterns.
  • Proven experience leading a team of engineers or taking ownership of modules/projects.
  • Experience with AWS Cloud services (EC2, Lambda, S3, etc.) is a strong advantage.
  • Familiarity with Agile/Scrum methodologies and working in cross-functional teams.
  • Excellent problem-solving, debugging, and analytical skills.
  • Strong communication and leadership skills.


Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Dipika
Posted by Dipika
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
5 - 7 yrs
₹5L - ₹20L / yr
Java
Microservices
Apache Kafka
Apache ActiveMQ
+3 more

Senior Associate Technology L1 – Java Microservices


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.


Job Description

We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.


Your Impact:

• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.

• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.

• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.


Qualifications

➢ 5 to 7 years of software development experience
➢ Strong development skills in Java JDK 1.8 or above
➢ Java fundamentals such as exception handling, serialization/deserialization, and immutability concepts
➢ Good fundamental knowledge of Enums, Collections, Annotations, Generics, autoboxing, and data structures
➢ Databases: RDBMS/NoSQL (SQL, joins, indexing)
➢ Multithreading (Re-entrant Lock, Fork & Join, sync, Executor Framework)
➢ Spring Core & Spring Boot, security, transactions
➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)
➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing with JMeter or a similar tool
➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)
➢ Logical/analytical skills; thorough understanding of OOPS concepts, design principles, and implementation of different types of design patterns
➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)
➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks
➢ Practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN
➢ Good communication skills and ability to work with global teams to define and deliver on projects
➢ Sound understanding of and experience in the software development process and test-driven development
➢ Cloud – AWS / Azure / GCP / PCF or any private cloud would also be fine
➢ Experience in Microservices

Read more
a leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. They are the partner of choice for enterprises on their digital transformation journey. Teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
6.5 - 10 yrs
₹12L - ₹25L / yr
ETL
SQL
Databricks
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

• Overall 6+ years of experience in the development and maintenance of data engineering projects.

• Collaborate with stakeholders to understand data requirements and translate them into technical solutions.

 

SQL

• Experienced in writing complex SQL queries and knowledge of SQL Analytical functions.

 

Databricks

• Design, develop, and deploy scalable data pipelines and ETL processes using Databricks.

• Optimize Spark jobs and ensure efficient data processing and storage

• Develop and maintain Databricks notebooks, workflows, and dashboards

• Proficient in building and optimizing data pipelines and workflows in Databricks

• Troubleshoot and resolve data pipeline issues and performance bottlenecks

• Monitor and manage data processing jobs to ensure high availability and reliability

• Work with cloud platforms such as AWS, Azure, or Google Cloud to integrate Databricks with other services

• Knowledge of programming languages such as Python, Scala

• Stay current with the latest Databricks and big data technologies and trends

 

Cloud

• Experience working on cloud platforms (Azure/AWS) and familiarity with their cloud tools and technologies

Read more
Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
Python
Shell Scripting
CI/CD
Kubernetes
+3 more

Role: DevOps Engineer


Exp: 4 - 7 Years

CTC: up to 28 LPA


Key Responsibilities

•   Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)

•   Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)

•   Develop and maintain CI/CD pipelines for multiple services and environments

•   Manage infrastructure as code using tools like Terraform and/or Pulumi

•   Automate operations with Python and shell scripting for deployment, monitoring, and maintenance

•   Ensure high availability and performance of production systems and troubleshoot incidents effectively

•   Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.

•   Collaborate with development, security, and product teams to align infrastructure with business needs

•   Apply best practices in cloud networking, Linux administration, and configuration management

•   Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)

•   Participate in on-call rotations and incident response activities


If interested, kindly share your updated resume on 82008 31681.

Read more
CoffeeBeans

at CoffeeBeans

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
3 - 6 yrs
Upto ₹22L / yr (Varies)
Python
SQL
ETL
Data modeling
Spark
+6 more

Role Overview

We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.

You’ll also play a mentorship role and help establish strong engineering practices across our data projects.

Key Responsibilities

  • Design and develop large-scale, distributed data pipelines (batch and streaming)
  • Implement scalable data models, warehouses/lakehouses, and data lakes
  • Translate business requirements into technical data solutions
  • Optimize data pipelines for performance and reliability
  • Ensure code is clean, modular, tested, and documented
  • Contribute to architecture, tooling decisions, and platform setup
  • Review code/design and mentor junior engineers

Must-Have Skills

  • Strong programming skills in Python and advanced SQL
  • Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
  • Hands-on experience with frameworks like Apache Spark, Flink, etc.
  • Experience with orchestration tools like Airflow (a brief illustrative sketch follows this list)
  • Familiarity with CI/CD pipelines and Git
  • Ability to debug and scale data pipelines in production
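
As a brief illustrative aside on the orchestration experience mentioned above, here is what a minimal Airflow DAG for a batch ETL flow might look like; the DAG id, schedule, and task bodies are placeholders, not part of the actual role.

```python
# Minimal Airflow DAG sketch (illustrative only); names and logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source systems")


def transform():
    print("clean and model the data")


def load():
    print("write curated tables to the warehouse/lakehouse")


with DAG(
    dag_id="daily_etl_example",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Simple linear dependency: extract -> transform -> load
    extract_task >> transform_task >> load_task
```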

Preferred Skills

  • Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
  • Exposure to Databricks, dbt, or similar tools
  • Understanding of data governance, quality frameworks, and observability
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus

What We’re Looking For

  • Problem-solver with strong analytical skills and attention to detail
  • Fast learner who can adapt across tools, tech stacks, and domains
  • Comfortable working in fast-paced, client-facing environments
  • Willingness to travel within India when required
Read more
Service Co

Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore), Pune
4 - 6 yrs
₹10L - ₹18L / yr
Python
Machine Learning (ML)
Amazon Web Services (AWS)

  • Programming & ML libraries: Strong proficiency in Python and experience with ML libraries (scikit-learn, TensorFlow, PyTorch).
  • ML & Data Science: Understanding of core ML modeling concepts and evaluation metrics.
  • Containerization & Orchestration: Hands-on experience with Docker and Kubernetes for deploying ML models.
  • CI/CD Pipelines: Experience building automated CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions.
  • Cloud: Experience with AWS.
  • Workflow Automation: Kubeflow/Airflow/MLflow.
  • Model Monitoring: Familiarity with monitoring tools and techniques for ML models (e.g., Prometheus, Grafana, Seldon, Evidently AI).
  • Version Control: Experience with Git for managing code and model versioning.
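
Illustrative only: a minimal sketch of experiment tracking with MLflow, one of the workflow tools listed above. The dataset, parameters, and experiment name are placeholders, and a real MLOps setup would add a model registry, deployment, and monitoring.

```python
# Hedged sketch of experiment tracking with MLflow; dataset and names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-model-demo")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")  # version the trained artifact for later deployment
```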



Read more
A modern configuration management platform based on advanced

Agency job
via Scaling Theory by Keerthana Prabkharan
Bengaluru (Bangalore)
3 - 5 yrs
₹25L - ₹100L / yr
DevOps
Kubernetes
Terraform
Ansible
Docker
+1 more

Key Responsibilities:

Kubernetes Management:

Deploy, configure, and maintain Kubernetes clusters on AKS, EKS, GKE, and OKE.

Troubleshoot and resolve issues related to cluster performance and availability.

Database Migration:

 Plan and execute database migration strategies across multicloud environments, ensuring data integrity and minimal downtime.

Collaborate with database teams to optimize data flow and management.

Coding and Development:

 Develop, test, and optimize code with a focus on enhancing algorithms and data structures for system performance.

Implement best coding practices and contribute to code reviews.

Cross-Platform Integration:

  Facilitate seamless integration of services across different cloud providers to enhance interoperability.

Collaborate with development teams to ensure consistent application performance across environments.

Performance Optimization:

  Monitor system performance metrics, identify bottlenecks, and implement effective solutions to optimize resource utilization.

Conduct regular performance assessments and provide recommendations for improvements.

Experience:

  Minimum of 2+ years of experience in cloud computing, with a strong focus on Kubernetes management across multiple platforms.

Technical Skills:

  Proficient in cloud services and infrastructure, including networking and security considerations.

Strong programming skills in languages such as Python, Go, or Java, with a solid understanding of algorithms and data structures.

Problem-Solving:

 Excellent analytical and troubleshooting skills with a proactive approach to identifying and resolving issues.

Communication:

 Strong verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.

Preferred Skills:

- Familiarity with CI/CD tools and practices.

- Experience with container orchestration and management tools.

- Knowledge of microservices architecture and design patterns.


Read more
AryuPay Technologies
Bhavana Chaudhari
Posted by Bhavana Chaudhari
Bengaluru (Bangalore), Bhopal
6 - 8 yrs
₹6L - ₹10L / yr
Django
RESTful APIs
Software deployment
CI/CD
PostgreSQL
+7 more

Senior Python Django Developer 

Experience: Back-end development: 6 years (Required)


Location:  Bangalore/ Bhopal

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

Requirements:

  • 6 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.

Job Types: Full-time, Permanent


Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus
  • Yearly bonus

Ability to commute/relocate:

  • JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)


 

Read more
Service Co

Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore), Pune
4 - 7 yrs
₹10L - ₹17L / yr
Python
Amazon Web Services (AWS)
SQL

Demonstrated experience as a Python developer.

Good understanding of and practical experience with Python frameworks including Django, Flask, and Bottle. Proficient with Amazon Web Services and experienced in working with APIs.

Solid understanding of databases, SQL, and MySQL. Experience with and knowledge of JavaScript is a benefit.
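
For illustration, a minimal sketch of the stack described above: a small Flask API that calls AWS via boto3. The routes are hypothetical, and a real service would add the SQL/MySQL layer, authentication, and error handling.

```python
# Minimal Flask + boto3 sketch (illustrative only); routes and bucket handling are assumptions.
import boto3
from flask import Flask, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")  # assumes AWS credentials are configured in the environment


@app.route("/buckets")
def list_buckets():
    """Return the names of all S3 buckets visible to the configured credentials."""
    response = s3.list_buckets()
    return jsonify([bucket["Name"] for bucket in response["Buckets"]])


@app.route("/health")
def health():
    return jsonify({"status": "ok"})


if __name__ == "__main__":
    app.run(debug=True)
```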

Highly skilled with attention to detail

Good mentoring and leadership abilities

Excellent communication skills. Ability to prioritize and manage your own workload.

Read more
Auxo AI
kusuma Gullamajji
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
7 - 10 yrs
₹20L - ₹35L / yr
DevOps
Microsoft Windows Azure
Amazon Web Services (AWS)
CI/CD
ADF

Requirements

  • Design, implement, and manage CI/CD pipelines using Azure DevOps, GitHub, and Jenkins for automated deployments of applications and infrastructure changes.
  • Architect and deploy solutions on Kubernetes clusters (EKS and AKS) to support containerized applications and microservices architecture.
  • Collaborate with development teams to streamline code deployments, releases, and continuous integration processes across multiple environments.
  • Configure and manage Azure services including Azure Synapse Analytics, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and other data services for efficient data processing and analytics workflows.
  • Utilize AWS services such as Amazon EMR, Amazon Redshift, Amazon S3, Amazon Aurora, IAM policies, and Azure Monitor for data management, warehousing, and governance.
  • Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and management of cloud resources.
  • Ensure high availability, performance monitoring, and disaster recovery strategies for cloud-based applications and services.
  • Develop and enforce security best practices and compliance policies, including IAM policies, encryption, and access controls across Azure environments.
  • Collaborate with cross-functional teams to troubleshoot production issues, conduct root cause analysis, and implement solutions to prevent recurrence.
  • Stay current with industry trends, best practices, and evolving technologies in cloud computing, DevOps, and container orchestration.

Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or related field; or equivalent work experience.
  • 5+ years of experience as a DevOps Engineer or similar role with hands-on expertise in AWS and Azure cloud environments.
  • Strong proficiency in Azure DevOps, Git, GitHub, Jenkins, and CI/CD pipeline automation.
  • Experience deploying and managing Kubernetes clusters (EKS, AKS) and container orchestration platforms.
  • Deep understanding of cloud-native architectures, microservices, and serverless computing.
  • Familiarity with Azure Synapse, ADF, ADLS, and AWS data services (EMR, Redshift, Glue) for data integration and analytics.
  • Solid grasp of infrastructure as code (IaC) tools like Terraform, CloudFormation, or ARM templates.
  • Experience with monitoring tools (e.g., Prometheus, Grafana) and logging solutions for cloud-based applications.
  • Excellent troubleshooting skills and ability to resolve complex technical issues in production environments.


Read more
Bluecopa

Agency job
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
Python
Kubernetes
Amazon Web Services (AWS)
Windows Azure
+2 more

Role: DevOps Engineer


Exp: 4 - 7 Years

CTC: up to 28 LPA


Key Responsibilities

•   Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)

•   Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)

•   Develop and maintain CI/CD pipelines for multiple services and environments

•   Manage infrastructure as code using tools like Terraform and/or Pulumi

•   Automate operations with Python and shell scripting for deployment, monitoring, and maintenance

•   Ensure high availability and performance of production systems and troubleshoot incidents effectively

•   Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.

•   Collaborate with development, security, and product teams to align infrastructure with business needs

•   Apply best practices in cloud networking, Linux administration, and configuration management

•   Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)

•   Participate in on-call rotations and incident response activities

Read more
Pepsalesai
Madhurya M
Posted by Madhurya M
Bengaluru (Bangalore)
2 - 7 yrs
₹15L - ₹35L / yr
Python
Amazon Web Services (AWS)
MongoDB
RESTful APIs
CI/CD pipelines


Job Title: Backend Developer

Experience: 2–7 Years

Location: On-site – Bangalore

Employment Type: Full-Time

Company: Pepsales AI (Multiplicity Technologies Inc.)

About Pepsales

Pepsales AI is a real-time sales enablement and conversation intelligence platform built for B2B SaaS sales teams. It empowers sellers across every stage of the sales cycle—before, during, and after discovery calls—by providing actionable insights that move deals forward. 

  • For Account Executives: Pepsales AI transforms the discovery process, ensuring objective deal qualification and frictionless handoffs to solution engineers—enabling AEs to focus on winning, not chasing.
  • For Solution Engineers and Consultants: It elevates demo experiences by delivering real-time buyer context and actionable insights, ensuring every interaction is highly personalized and impactful.
  • For Sales Leaders: It provides enterprise-grade intelligence across forecasting, pipeline health, team performance, coaching, and the authentic voice of the customer—empowering data-driven decision-making at scale.

With Pepsales AI, sales teams run sharper meetings, accelerate deal cycles, and close with confidence.

Role Overview

We’re seeking a passionate Backend Developer to join our fast-paced team in Bangalore and help build and scale the core systems powering Pepsales. In this full-time, on-site role, you’ll work on high-impact features that directly influence product innovation, customer success, and business growth. 

You’ll collaborate closely with the founding team and leadership, gaining end-to-end ownership and the chance to bring bold, innovative ideas to life in a rapidly scaling startup environment.



Key Responsibilities

  • Design, develop, and maintain scalable backend systems and microservices.
  • Write clean, efficient, and well-documented code in Python.
  • Build and optimize RESTful APIs and WebSocket services for high performance (a brief illustrative sketch follows this list).
  • Manage and optimize MongoDB databases for speed and scalability.
  • Deploy and maintain containerized applications using Docker.
  • Work extensively with AWS services (EC2, ALB, S3, Route 53) for robust cloud infrastructure.
  • Implement and maintain CI/CD pipelines for smooth and automated deployments.
  • Collaborate closely with frontend engineers, product managers, and leadership on architecture and feature planning.
  • Participate in sprint planning, technical discussions, and code reviews.
  • Take full ownership of features, embracing uncertainty with a problem-solving mindset.
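
A hedged, minimal sketch (not Pepsales production code) of the backend building blocks in the responsibilities above: a small Python REST service backed by MongoDB. The connection string, database, and field names are assumptions.

```python
# Minimal REST + MongoDB sketch (illustrative only); connection details and fields are assumptions.
from bson import ObjectId
from fastapi import FastAPI
from pymongo import MongoClient

app = FastAPI()
client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
calls = client["pepsales_demo"]["calls"]           # hypothetical database/collection


@app.post("/calls")
def create_call(note: dict):
    """Store a call note and return its generated id."""
    doc = {"account": note.get("account"), "summary": note.get("summary")}
    result = calls.insert_one(doc)
    return {"id": str(result.inserted_id), "account": doc["account"], "summary": doc["summary"]}


@app.get("/calls/{call_id}")
def get_call(call_id: str):
    """Fetch a previously stored call note by id."""
    doc = calls.find_one({"_id": ObjectId(call_id)}) or {}
    doc.pop("_id", None)  # drop the non-JSON-serializable ObjectId
    return doc
```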

Required Skills & Qualifications

  • 2–7 years of backend development experience with a proven track record of building scalable systems.
  • Strong proficiency in Python and its ecosystem.
  • Hands-on experience with MongoDB, Docker, and Microservice Architecture.
  • Practical experience with AWS services (EC2, ALB, S3, Route 53).
  • Familiarity with CI/CD tools and deployment best practices.
  • Strong understanding of REST API design principles and WebSocket communication.
  • Excellent knowledge of data structures, algorithms, and performance optimization.
  • Strong communication skills and ability to work in a collaborative, fast-paced environment.


What We Value

  • Excitement to work on cutting-edge technology and platforms, helping redefine how businesses engage and convert customers.
  • Thrives in a dynamic startup environment that values diversity, rapid innovation, and a growth mindset—adapting to change, challenging the status quo, and making a real impact.
  • Passion for ownership beyond coding, contributing to product strategy and innovation.


Why Join Pepsales?

  • Direct access to founders and a voice in high-impact decisions.
  • Opportunity to shape a next-gen AI SaaS product transforming sales worldwide.
  • Exposure to cutting-edge technologies in a rapidly growing startup.
  • Ownership-driven culture with fast career growth and learning opportunities.
  • A collaborative, innovation-driven environment that values creativity and problem-solving.


Read more
MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
Java
Spring Boot
PHP
GraphQL
Algorithms
+8 more

Job Title: Java Spring Boot Engineer

📍 Location: Bangalore

🧾 Experience: 3–4 Years

📝 Employment Type: Contract (1 Year + Extendable)


Required Skills & Qualifications:

  • Strong expertise in Java, Spring Boot, and backend development.
  • Hands-on experience with PHP.
  • Good understanding of data structures and algorithms.
  • Experience with GraphQL and RESTful APIs.
  • Proficiency in working with SQL & NoSQL databases.
  • Experience using Git for version control.
  • Familiarity with CI/CD pipelines, Docker, Kubernetes, and cloud platforms (AWS, Azure).
  • Exposure to monitoring and logging tools like Grafana, New Relic, and Splunk.
  • Strong problem-solving skills and ability to work in a collaborative team environment.


Read more
It is a global technology consultancy

Agency job
via Scaling Theory by DivyaSri Rajendran
Bengaluru (Bangalore)
4.5 - 10 yrs
₹15L - ₹30L / yr
Spark
Scala
Hadoop
Amazon Web Services (AWS)

Role overview:

  • Must have about 5-11 years of experience, with at least 3 years of relevant experience in Big Data.
  • Must have experience in building highly scalable business applications, which involve implementing large, complex business flows and dealing with huge amounts of data.
  • Must have experience in Hadoop, Hive, and Spark with Scala, with good experience in performance tuning and debugging issues.
  • Good to have experience with stream processing (Spark/Java, Kafka).
  • Must have experience in the design and development of Big Data projects.
  • Good knowledge of functional programming and OOP concepts, SOLID principles, and design patterns for developing scalable applications.
  • Familiarity with build tools like Maven.
  • Must have experience with any RDBMS and at least one SQL database, preferably PostgreSQL.
  • Must have experience writing unit and integration tests using ScalaTest.
  • Must have experience using a version control system, such as Git.
  • Must have experience with CI/CD pipelines; Jenkins is a plus.
  • Basic hands-on experience with one of the cloud providers (AWS/Azure) is a plus.
  • Databricks Spark certification is a plus.


What would you do here:

As a Software Development Engineer 2, you will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline designer and data wrangler who enjoys optimising data systems and building them from the ground up. The Data Engineer will lead our software developers on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimising or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

 

Responsibilities:

 

• Create and maintain optimal data pipeline architecture.

• Assemble large, complex data sets that meet functional and non-functional business requirements.

• Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, coordinating the re-design of infrastructure for greater scalability, etc.

• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

• Keep our data separated and secure.

• Work with data and analytics experts to strive for greater functionality in our data systems.

• Support PROD systems.


Read more
Capace Software Private Limited
Bengaluru (Bangalore), Bhopal
6 - 14 yrs
₹5L - ₹8L / yr
React.js
Next.js
JavaScript
TypeScript
Material Design
+5 more

·       Design, develop, and maintain high-performance, scalable web applications using Next.js  

·       Architect project structure for maintainability, scalability, and performance, following best practices for folder and component organization  

·       Lead and participate in project deployments using modern CI/CD pipelines and cloud platforms; AWS is a must

·       Write comprehensive unit, integration, and end-to-end test cases using frameworks like Jest, React Testing Library, or Cypress  

·       Collaborate with cross-functional teams (designers, backend developers, product managers) to deliver seamless user experiences and integrate APIs (REST/GraphQL)  

·       Optimize applications for SEO, server-side rendering (SSR), static site generation (SSG), and performance  

·       Conduct code reviews, enforce coding standards, and mentor junior developers  

·       Troubleshoot, debug, and resolve issues across the stack to ensure reliability and security  

·       Maintain technical documentation, including architecture decisions, deployment guides, and test plans  

  Required Skills and Qualifications  

·       Development experience of 6+ years in React.js and Next.js (preferred)

·       Proven expertise in leading, architecting, and organizing large-scale Next.js projects.

·       Hands-on experience with project deployment and cloud platforms such as AWS (preferred)

·       Strong proficiency in JavaScript (ES6+), TypeScript, MaterialUI and CSS Frameworks  

·       Experience writing and maintaining test cases using Jest, React Testing Library, Cypress, or similar tools  

·       Familiarity with state management libraries (Redux, Context API) and API integration (REST, GraphQL)  

·       Working knowledge of version control systems (Git) and CI/CD pipelines (preferred)  

·       Solid understanding of SSR, SSG, and SEO best practices  

·       Excellent problem-solving, communication, and teamwork skills  

·       Bachelor’s degree in Computer Science, Information Technology, or a related field (preferred)  

Nice to Have  

·       Experience with CSS frameworks (Tailwind, Bootstrap, Sass)  

·       Familiarity with containerization (Docker) and monitoring tools.  

·       Contributions to open-source Next.js projects or technical blogs.  


Read more
MathonGo

at MathonGo

1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
1 - 2 yrs
Upto ₹10L / yr (Varies)
NodeJS (Node.js)
Express
MongoDB
Amazon Web Services (AWS)
Redis
+4 more

About the Role

We are seeking highly driven Backend Web Developers with strong knowledge of Node.js, TypeScript, and MongoDB. You will play a key role in building and maintaining the backend architecture, APIs, and scalable services powering our web applications.

This position is ideal for candidates who are self-starters, comfortable in a startup environment, and can pick up tasks independently from Day 0.


Key Responsibilities

  • Design, develop, and maintain scalable backend services and RESTful APIs using Node.js & TypeScript.
  • Work with MongoDB for efficient data modeling, schema design, and query optimization.
  • Integrate backend services with frontend applications and third-party APIs.
  • Write clean, modular, and efficient code with a strong emphasis on performance and security.
  • Ensure error handling, logging, and monitoring are implemented for production readiness.
  • Collaborate with frontend developers, product managers, and designers to deliver end-to-end features.
  • Implement and maintain microservices architecture.
  • Good understanding of deployment processes and willingness to work with AWS stack (EC2, S3, Lambda, etc. – good to have).

Required Skills & Qualifications

  • Strong proficiency in Node.js and TypeScript.
  • Hands-on experience with MongoDB (Mongoose ORM preferred).
  • Solid understanding of Redis, Messaging Queues, etc.
  • Knowledge of Git/GitHub with a strong portfolio of deployed projects (blank GitHub profiles will be rejected).
  • Strong problem-solving, debugging, and optimization skills.
  • Ability to take ownership of tasks and work independently.
  • Familiarity with async programming, promises, event loops, and backend architecture concepts.

Preferred Skills

  • Prior experience (internship/full-time) in a startup environment.
  • Exposure to AWS stack (Lambda, EC2, S3, CloudWatch, RDS, etc.).
  • Experience with Docker, CI/CD pipelines, or cloud deployments.
  • Understanding of server-side caching and messaging queues.
  • Familiarity with testing frameworks (Jest).

Eligibility Criteria

  • Experience: 0 – 2 years (Freshers with strong projects are welcome).
  • Education: Tier 2 / Tier 3 college graduates preferred.
  • GitHub Requirement: Candidates must have solid GitHub profiles with deployed projects. Inactive or blank GitHub accounts will be rejected.

Selection Process

  1. Written Test – Core programming fundamentals & problem-solving.
  2. Sample Task – Real-world backend task (API/service implementation).
  3. Technical Interview (Basic, 30 min) – Node.js, TS, Mongo fundamentals.
  4. Advanced Technical Interview (90 min) – Deep dive into system design, architecture, scaling, and debugging.
  5. HR Round – Culture fit and final discussion.

Why Join Us?

  • Work in a high-growth startup environment where your contributions have a direct impact.
  • Ownership from Day 0 – take responsibility for building and shipping features.
  • Learn and grow with a team of passionate engineers.
  • Opportunity to work with modern tech stack and real-world problem-solving.
Read more
Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
Python
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Salary (Lacs): Up to 22 LPA


Required Qualifications

•   4–7 years of total experience, with a minimum of 4 years in a full-time DevOps role

•   Hands-on experience with major cloud platforms (GCP, AWS, Azure, OCI); experience with more than one is a plus

•   Proficient in Kubernetes administration and container technologies (Docker, containerd)

•   Strong Linux fundamentals

•   Scripting skills in Python and shell scripting

•   Knowledge of infrastructure as code with hands-on experience in Terraform and/or Pulumi (mandatory)

•   Experience in maintaining and troubleshooting production environments

•   Solid understanding of CI/CD concepts with hands-on experience in tools like Jenkins, GitLab CI, GitHub Actions, ArgoCD, Devtron, GCP Cloud Build, or Bitbucket Pipelines


If interested, kindly share your updated resume on 82008 31681.

Read more
CoffeeBeans

at CoffeeBeans

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
6 - 9 yrs
Upto ₹32L / yr (Varies)
Python
ETL
Data modeling
CI/CD
Databricks
+2 more

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.

In this role, you’ll:

  • Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
  • Mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

  • Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
  • Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Collaborate with stakeholders to translate business requirements into technical solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, production-ready code with proper documentation and testing.
  • Contribute to architectural discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.

Must-Have Skills

  • Strong programming skills in Python and advanced SQL expertise.
  • Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
  • Experience with orchestration tools like Airflow (or similar).
  • Familiarity with CI/CD pipelines and Git.
  • Ability to debug, optimize, and scale data pipelines in production.

Good to Have

  • Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms.
  • Understanding of data governance, quality frameworks, and observability.
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).

Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements, as needed.
Read more
GaragePlug

at GaragePlug

4 candid answers
6 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Best in industry
Java
Spring Boot
Amazon Web Services (AWS)
Messaging
Amazon SQS
+7 more

GaragePlug Inc

GaragePlug is one of the fastest-growing Automotive tech startups working towards revolutionising the automotive aftermarket industry with strong state-of-the-art technologies.


Role Overview

As we plan to grow, we have many challenges to solve. Some of the new features and products that are already in the pipeline include advanced analytics, search, reporting, etc., to name a few. Our present backend is based on the microservices architecture built using Spring Boot. With growing complexity, we are open to using other tools and technologies as needed. We are looking for a talented and motivated engineer to join our fleet and help us solve real-world problems in this exciting field. Join us and share the dream of building the next-generation online platform for the Auto industry.


What you'll do:

  • Design and architect our core components
  • End-to-end systems development
  • Ownership of complete systems from development to production and maintenance
  • Infrastructure management on AWS

Technologies you'll use:

  • Microservices, AWS, Java, Spring-boot
  • Gradle / Maven
  • ElasticSearch
  • Jenkins, CI/CD
  • Containerization technologies like Docker, Kubernetes, etc.
  • RDBMS (PostgreSQL) or NoSQL databases (MongoDB) & Enterprise Messaging Applications (Kafka/SQS)
  • JUnit, TestNG, Cucumber, etc.
  • Nginx
  • Any cool piece of technology that you can bring onboard


What you are:

  • You love technology and are always open to learning new tools
  • You are proficient with server technologies: Spring / Spring Boot
  • You have good experience in scaling, performance tuning & optimization at both API and storage layers
  • You have an excellent grasp of OOPS concepts, data structures, algorithms, design patterns & REST APIs
  • You are proficient in Java, SQL
  • You have good knowledge of Databases: RDBMS/Document
  • You have a good understanding of REST API design
  • You have knowledge of DevOps
  • Implement Coding Best Practices. Implement Code Quality gates as per the program norms
  • Knowledge of Angular 2+ is a big plus
Read more
ElevateHQ

at ElevateHQ

1 recruiter
Iliyas Shirol
Posted by Iliyas Shirol
Bengaluru (Bangalore)
5 - 6 yrs
₹15L - ₹25L / yr
NodeJS (Node.js)
React.js
PostgreSQL
Amazon Web Services (AWS)

Responsibilities:

  • Work as part of a collaborative, agile team to deliver industry-leading engagement capabilities on the web.
  • Integrate with other services to help the client seamlessly provide value to more companies.
  • Design and develop feature enhancements that continue to deliver value and provide an excellent customer experience.
  • Troubleshoot and resolve emergency server or code issues at any stack level.


Requirements:

  • Ability to empathize with end-users; build with scale and ease of adoption in mind.
  • Experience building in an agile setting with code reviews and quality as a priority.
  • Strong design skills for separation and modularity of code; aversion to overly complex, spaghetti code.
  • A bachelor's degree in Computer Science, related technical field, or commensurate experience.
  • 3 years experience in production web application development.
  • 5 years of experience working with NodeJS or an equivalent web framework and language.
  • Experience writing JavaScript code, either vanilla JS or using a framework such as Node.js, React, or Redux.
  • Unit testing experience (Jest).
  • 3rd-party API usage and integration experience.
  • RDBMS usage (e.g., MySQL, PostgreSQL).


Interview Process:

  • Introductory Round
  • Technical Round - Algo/low level coding round based on real world scenario
  • Technical Round - System Design or High level design coding round and AWS
  • Offer discussion


Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
6 - 8 yrs
₹10L - ₹22L / yr
Java
Amazon Web Services (AWS)
Selenium
Automation

🚀 Hiring: Automation Tester

⭐ Experience: 6+ Years

📍 Location: Bangalore

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


We’re looking for a skilled Automation Tester with a strong background in Java, Selenium, and AWS to join our growing QA team. If you're passionate about automation, love solving problems, and thrive in a dynamic environment, we’d love to hear from you!


Must-Have Skills:

✅ Strong hands-on experience in Automation Testing

✅ Proficiency in Java for writing test scripts

✅ Expertise in Selenium WebDriver

✅ Working knowledge of AWS services relevant to test environments

✅ Solid understanding of the SDLC and Agile methodologies


Read more
Genspark

at Genspark

2 candid answers
Agency job
via hirezyai by HR Hirezyai
Bengaluru (Bangalore), Chennai, Coimbatore
5 - 9 yrs
₹9L - ₹25L / yr
Apache Kafka
Apache
MLOps
Amazon Web Services (AWS)

The candidate should have extensive experience in designing and developing scalable data pipelines and real-time data processing solutions. As a key member of the team, the Senior Data Engineer will play a critical role in building end-to-end data workflows, supporting machine learning model deployment, and driving MLOps practices in a fast-paced, agile environment. Strong expertise in Apache Kafka, Apache Flink, AWS SageMaker, and Terraform is essential. Additional experience with infrastructure automation and CI/CD for ML models is a significant advantage.
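
For context, a minimal, purely illustrative sketch of the real-time plumbing referenced above: a Kafka producer and consumer using the kafka-python client. The broker address and topic are assumptions, and the Flink, SageMaker, and Terraform pieces of the stack are out of scope here.

```python
# Hedged Kafka producer/consumer sketch; broker and topic names are hypothetical.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"      # hypothetical broker
TOPIC = "clickstream-events"   # hypothetical topic

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"user_id": 42, "action": "page_view"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=5000,   # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.value)        # downstream logic (e.g., feature computation) would go here
```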

Key Responsibilities

  1. Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink.
  2. Build scalable and automated MLOps pipelines for training, validation, and deployment of models using AWS SageMaker and associated services.
  3. Implement and manage Infrastructure as Code (IaC) using Terraform to provision and manage AWS environments.
  4. Collaborate with data scientists, ML engineers, and DevOps teams to streamline model deployment workflows and ensure reliable production delivery.
  5. Optimize data storage and retrieval strategies for large-scale structured and unstructured datasets.
  6. Develop data transformation logic and integrate data from various internal and external sources into data lakes and warehouses.
  7. Monitor, troubleshoot, and enhance performance of data systems in a cloud-native, fast-evolving production setup.
  8. Ensure adherence to data governance, privacy, and security standards across all data handling activities.
  9. Document data engineering solutions and workflows to facilitate cross-functional understanding and ongoing maintenance.


Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Mumbai, Pune, Noida
4 - 6 yrs
₹3L - ₹21L / yr
AWS Data Engineer
Amazon Web Services (AWS)
Python
PySpark
Databricks
+1 more

 Key Responsibilities

  • Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue (see the PySpark sketch after this list)
  • Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
  • Perform data wrangling, cleansing, and transformation using Python and SQL
  • Collaborate with data scientists to integrate Generative AI models into analytics workflows
  • Build dashboards and reports to visualize insights using tools like Power BI or Tableau
  • Ensure data quality, governance, and security across all data assets
  • Optimize performance of data pipelines and troubleshoot bottlenecks
  • Work closely with stakeholders to understand data requirements and deliver actionable insights
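
To make the first responsibility concrete, here is a small, hypothetical PySpark sketch of an ETL step of the kind described: read raw CSV from S3, cleanse it, and write partitioned Parquet back to S3. The bucket names and columns are illustrative assumptions, not part of the actual platform.

```python
# Hypothetical PySpark ETL step (placeholder buckets and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed in S3.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: basic cleansing and type casting.
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
)

# Load: partitioned Parquet in the curated zone.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```

On Databricks or AWS Glue the same logic would usually be wrapped in a job or notebook with a catalog table target rather than raw paths.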

🧪 Required Skills

  • Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
  • Big Data: Databricks, Apache Spark, PySpark
  • Programming: Python, SQL
  • Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
  • Analytics: Data Modeling, Visualization, BI Reporting
  • Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
  • DevOps (Bonus): Git, Jenkins, Terraform, Docker

📚 Qualifications

  • Bachelor's or Master’s degree in Computer Science, Data Science, or related field
  • 3+ years of experience in data engineering or data analytics
  • Hands-on experience with Databricks, PySpark, and AWS
  • Familiarity with Generative AI tools and frameworks is a strong plus
  • Strong problem-solving and communication skills

🌟 Preferred Traits

  • Analytical mindset with attention to detail
  • Passion for data and emerging technologies
  • Ability to work independently and in cross-functional teams
  • Eagerness to learn and adapt in a fast-paced environment


Read more
BRAVURA TECHNOLOGIES LLC
Bengaluru (Bangalore)
6 - 15 yrs
₹15L - ₹20L / yr
skill iconNodeJS (Node.js)
skill iconAmazon Web Services (AWS)
skill icon.NET

Key Responsibilities:

·      Design, develop, and maintain high-quality, scalable software solutions

·      Work extensively with AWS Serverless services such as Lambda, API Gateway, and DynamoDB

·      Implement and optimize Elasticsearch for search and analytics

·      Build backend services using NodeJS and develop front-end applications using Angular

·      Design and develop robust REST APIs with a solid understanding of web services

·      Write clean, efficient, and maintainable code in JavaScript and TypeScript

·      Troubleshoot, debug, and resolve performance issues in complex systems

·      Collaborate with cross-functional teams to deliver robust, end-to-end software solutions

Required Skills & Qualifications:

·      Bachelor’s degree in Computer Science or a related field

·      8+ years of experience in software development

·      Strong hands-on expertise in AWS Serverless (Lambda, API Gateway, DynamoDB) or .NET with SQL

·      Experience with Elasticsearch for efficient data retrieval and indexing

·      Proficiency in NodeJS, Angular, JavaScript, and TypeScript

·      Solid understanding of REST APIs and web services

·      Strong UI development skills with CSS

·      Proven experience designing solutions for complex technical requirements

·      Excellent debugging, analytical, and problem-solving abilities


Read more
krtrimaiq cognitive solutions
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
skill iconAmazon Web Services (AWS)
SQL
skill iconPython
ELT
Azure
+3 more

Job Title: Senior Data Engineer

Location: Bangalore | Hybrid

Company: krtrimaIQ Cognitive Solutions


Role Overview:

As a Senior Data Engineer, you will design, build, and optimize robust data foundations and end-to-end solutions to unlock maximum value from data across the organization. You will play a key role in fostering data-driven thinking, not only within the IT function but also among broader business stakeholders. You will serve as a technology and subject matter expert, providing mentorship to junior engineers and translating the company’s vision and Data Strategy into actionable, high-impact IT solutions.

Key Responsibilities:

  • Design, develop, and implement scalable data solutions to support business objectives and drive digital transformation.
  • Serve as a subject matter expert in data engineering, providing guidance and mentorship to junior team members.
  • Enable and promote data-driven culture throughout the organization, engaging both technical and business stakeholders.
  • Lead the design and delivery of Data Foundation initiatives, ensuring adoption and value realization across business units.
  • Collaborate with business and IT teams to capture requirements, design optimal data models, and deliver high-value insights.
  • Manage and drive change management, incident management, and problem management processes related to data platforms.
  • Present technical reports and actionable insights to stakeholders and leadership teams, acting as the expert in Data Analysis and Design.
  • Continuously improve efficiency and effectiveness of solution delivery, driving down costs and reducing implementation times.
  • Contribute to organizational knowledge-sharing and capability building (e.g., Centers of Excellence, Communities of Practice).
  • Champion best practices in code quality, DevOps, CI/CD, and data governance throughout the solution lifecycle.

Key Characteristics:

  • Technology expert with a passion for continuous learning and exploring multiple perspectives.
  • Deep expertise in the data engineering/technology domain, with hands-on experience across the full data stack.
  • Excellent communicator, able to bridge the gap between technical teams and business stakeholders.
  • Trusted leader, respected across levels for subject matter expertise and collaborative approach.

Mandatory Skills & Experience:

  • Mastery in public cloud platforms: AWS, Azure, SAP
  • Mastery in ELT (Extract, Load, Transform) operations
  • Advanced data modeling expertise for enterprise data platforms

Hands-on skills:

  • Data Integration & Ingestion
  • Data Manipulation and Processing
  • Source/version control and DevOps tools: GitHub, GitHub Actions, Azure DevOps
  • Data engineering/data platform tools: Azure Data Factory, Databricks, SQL Database, Synapse Analytics, Stream Analytics, AWS Glue, Apache Airflow, AWS Kinesis, Amazon Redshift, SonarQube, PyTest
  • Experience building scalable and reliable data pipelines for analytics and other business applications (a short Airflow-style sketch follows this list)
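
As a loose illustration of such a pipeline, here is a minimal Airflow-style DAG sketch with three Python tasks (extract, transform, load). The DAG id, schedule, and task bodies are placeholders, it assumes Airflow 2.4+ for the `schedule` argument, and the actual orchestration may instead live in Azure Data Factory or Databricks workflows.

```python
# Hypothetical Airflow DAG: a daily extract -> transform -> load chain.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the curated data to the warehouse")

with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```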

Optional/Preferred Skills:

  • Project management experience, especially running or contributing to Scrum teams
  • Experience working with BPC (Business Planning and Consolidation), Planning tools
  • Exposure to working with external partners in the technology ecosystem and vendor management

What We Offer:

  • Opportunity to leverage cutting-edge technologies in a high-impact, global business environment
  • Collaborative, growth-oriented culture with strong community and knowledge-sharing
  • Chance to influence and drive key data initiatives across the organization


Read more
HelloRamp.ai

at HelloRamp.ai

2 candid answers
Eman Khan
Posted by Eman Khan
Remote, Bengaluru (Bangalore)
1 - 2 yrs
₹8L - ₹12L / yr
Computer Vision
NERF
CUDA
TensorRT
ONNX
+10 more

About HelloRamp.ai

HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.


What You’ll Work On

  • Develop and optimize Computer Vision pipelines for large-scale media creation.
  • Implement NeRF-based systems for high-quality 3D reconstruction.
  • Build and fine-tune AI video generation models using state-of-the-art techniques.
  • Optimize AI inference for production (CUDA, TensorRT, ONNX); a small export sketch follows this list.
  • Collaborate with the engineering team to integrate AI features into scalable cloud systems.
  • Research latest AI/CV advancements and bring them into production.
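
To make the inference-optimization bullet concrete, here is a small, hypothetical sketch of exporting a PyTorch model to ONNX so it can be served by an optimized runtime such as ONNX Runtime or TensorRT. The model choice, input shape, and file name are illustrative only and assume torchvision 0.13+.

```python
# Hypothetical export of a PyTorch vision model to ONNX (placeholder model).
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW example input

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```

The exported graph can then be benchmarked in ONNX Runtime or converted further (for example, to a TensorRT engine) as part of the production inference path.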


Skills & Experience

  • Strong Python programming skills.
  • Deep expertise in Computer Vision and Machine Learning.
  • Hands-on with PyTorch/TensorFlow.
  • Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
  • Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
  • GPU programming and optimization skills.


Nice to Have

  • Knowledge of Three.js or WebGL for rendering AI outputs on the web.
  • Familiarity with FFmpeg and video processing pipelines.
  • Experience in cloud-based GPU environments (AWS/GCP).


Why Join Us?

  • Work on cutting-edge AI and Computer Vision projects with global impact.
  • Join a small, high-ownership team where your work matters.
  • Opportunity to experiment, publish, and contribute to open-source.
  • Competitive pay and flexible work setup.
Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
Terraform
skill iconPython
+2 more

Job Type: Contract


Location: Bangalore


Experience: 5+ years


The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.


Required Skills:


  • 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
  • Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
  • Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
  • Experience in configuring Public Cloud native security tooling and capabilities, with a focus on Google Cloud Organizational policies/constraints, VPC SC, IAM policies, and GCP APIs.
  • Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
  • Experience in Policy-as-code (Rego) and OPA platform.
  • Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
  • Deep understanding of DevOps processes and workflows.
  • Working knowledge of the Secure SDLC process
  • Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
  • Familiarity with Logging and data pipeline concepts and architectures in cloud.
  • Strong in scripting languages such as PowerShell or Python or Bash or Go.
  • Knowledge of Agile best practices and methodologies
  • Experience creating technical architecture documentation.
  • Excellent communication, written and interpersonal skills.
  • Practical experience in designing and configuring CICD pipelines. Practical experience in GitHub Actions and Jenkins.
  • Experience in ITSM.
  • Ability to articulate complex technical concepts to non-technical stakeholders.
  • Experience with risk control frameworks and engagements with risk and regulatory functions
  • Experience in the financial industry would be a plus.


Read more
Codnatives
Bengaluru (Bangalore)
4.5 - 7 yrs
₹4L - ₹16L / yr
skill iconGo Programming (Golang)
skill iconAmazon Web Services (AWS)
skill iconJavascript
skill iconHTML/CSS

• Participate and contribute to platform requirements/story development.

• Contribute to the design and design alternatives for the requirements/stories, and participate in design reviews.

• Be involved in platform sprint activities.

• Develop assigned stories in the appropriate languages defined for each module.

• Develop unit test cases and execute them as part of the continuous integration pipeline.

• Participate in peer code reviews.


Years of experience needed –

• 4+ years of Golang development experience.

Technical Skills:

• 4+ years of development experience on Golang-based projects

• Mandatory Skills – Golang, AWS, JavaScript, CSS, HTML

• Experience in designing and developing front-end and back-end services for various business processes

• Good experience with Java, Spring Boot & microservices

• Experience using version control tools such as GitHub

• Experience working with SQL & NoSQL databases

• Understanding of containerization technologies such as Docker and Kubernetes

• Good knowledge of and hands-on experience with unit testing and available test frameworks

Read more
Quidcash
Smitha Vikram
Posted by Smitha Vikram
Bengaluru (Bangalore)
6 - 10 yrs
₹35L - ₹50L / yr
Object Oriented Programming (OOPs)
skill iconAmazon Web Services (AWS)
skill iconNodeJS (Node.js)
skill iconGo Programming (Golang)
skill iconKubernetes
+1 more

Quidcash is seeking a highly skilled and passionate Engineering Manager to lead, mentor, and grow a talented team of software engineers. You will be instrumental in shaping our technical direction, driving the development of our core products, and championing engineering excellence. This is a hands-on leadership role where you'll contribute to architectural decisions, foster a culture of innovation, and ensure your team is equipped to build scalable, robust, and intelligent systems.

If you're a leader who thrives on technical challenges, loves building high-performing teams, and is excited by the potential of AI/ML in fintech, we want to hear from you!

What You'll Do:

· Lead & Mentor: Manage, coach, and develop a team of software engineers and data scientists, fostering an inclusive, collaborative, and high-performance culture. Conduct regular 1:1s, performance reviews, and support career growth.

· Technical Leadership: Provide strong technical guidance on architecture, design, and implementation of complex systems, particularly in microservices, OOPS principles, and cloud-native applications.

· AI/ML Integration: Drive the strategy and execution for integrating AI/ML models and techniques into our products and platforms, working closely with data scientists and engineers.

· Engineering Best Practices: Establish, evangelize, and enforce best practices for software development, including code quality, testing (unit, integration, E2E), CI/CD, security, and documentation.

· DevOps Culture: Champion and implement DevOps principles to improve deployment frequency, system reliability, and operational efficiency. Oversee CI/CD pipelines and infrastructure-as-code practices.

· Roadmap & Execution: Collaborate with Product Management, Design, and other stakeholders to define the technical roadmap, translate product requirements into actionable engineering tasks, and ensure timely delivery of high-quality software.

· Architectural Vision: Contribute to and influence the long-term architectural vision for Quidcash platforms, ensuring scalability, resilience, and maintainability.

· Problem Solving: Dive deep into complex technical challenges, lead troubleshooting efforts, and make critical technical decisions.

· Recruitment & Team Building: Actively participate in recruiting, interviewing, and onboarding new engineering talent.

What You'll Bring (Must-Haves):

· Experience:

o Proven experience (6+ years) in software engineering, with a strong foundation in Object-Oriented Programming (OOP/S) using languages like Java, Python, C#, Go, or similar.

o Demonstrable experience (2+ years) in an engineering leadership or management role, directly managing a team of engineers.

· Technical Acumen:

o Deep understanding and practical experience with microservice architecture, including design patterns, deployment strategies (e.g., Kubernetes, Docker), and inter-service communication.

o Solid experience with cloud platforms (AWS, GCP, or Azure).

o Familiarity and practical experience with AI/ML concepts, tools, and their application in real-world products (e.g., machine learning pipelines, model deployment, MLOps).

o Proficiency in establishing and driving DevOps practices (CI/CD, monitoring, alerting, infrastructure automation).

· Leadership & Management:

o Excellent leadership, communication, and interpersonal skills with a proven ability to mentor and grow engineering talent.

o Experience in setting up and enforcing engineering best practices (code reviews, testing methodologies, agile processes).

o Strong project management skills, with experience in Agile/Scrum methodologies.

· Mindset:

o A proactive, problem-solving attitude with a passion for continuous improvement.

o Ability to thrive in a fast-paced, dynamic startup environment.

o Strong business acumen and ability to align technical strategy with company goals.

Nice-to-Haves:

· Experience in the FinTech or financial services, lending industry.

· Hands-on experience with specific AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).

· Experience with event-driven architectures (e.g., Kafka, RabbitMQ).

· Contributions to open-source projects or a strong public technical presence.

· Advanced degree (M.S. or Ph.D.) in Computer Science, Engineering, or a related field.

Why Join Quidcash?

· Impact: Play a pivotal role in shaping a product that directly impacts the business growth of Indian SMEs.

· Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.

· Growth: Opportunities for professional development and career advancement in a growing company.

· Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.

· Benefits: Competitive salary and a comprehensive benefits package; be part of the next fintech evolution.

Read more
AI Powered Logistics Company

AI Powered Logistics Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
2 - 3 yrs
₹12L - ₹16L / yr
skill iconReact.js
skill iconNextJs (Next.js)
TypeScript
MERN Stack
skill iconMongoDB
+8 more

Job Title: Frontend Engineer- Reactjs, Nextjs, MUI 

Location: Hybrid, 2-3 days WFO weekly (Bengaluru, India)


About the Role:

We're looking for a passionate and skilled Frontend Engineer with 1–3 years of experience to join our growing development team. This role is front-end-heavy, focused on building clean, scalable, and high-performance user interfaces using the latest technologies in the MERN stack—particularly Next.js, React, TypeScript, and Material UI (MUI).

You’ll work alongside a collaborative and talented team to design and build seamless web experiences that delight users. If you're excited about modern frontend architecture and want to grow in a fast-moving, remote-first environment, we'd love to hear from you.


Key Responsibilities:

  • Develop responsive, high-performance web applications using Next.js, React, and TypeScript.
  • Translate UI/UX designs into functional frontend components using MUI.
  • Collaborate with backend developers, designers, and product managers to deliver new features and improvements.
  • Ensure code quality through best practices, code reviews, and testing.
  • Optimize applications for maximum speed and scalability.


Must-Have Skills:

  • 1–3 years of professional experience in frontend development.
  • Strong proficiency in React, Next.js, and TypeScript.
  • Experience with Material UI (MUI) or similar component libraries.
  • Understanding of responsive design, modern frontend tooling, and web performance best practices.
  • Familiarity with Git and collaborative workflows.


Nice-to-Have (Bonus) Skills:

  • Familiarity with testing libraries (Jest, React Testing Library, Cypress).
  • Experience working with design tools like Figma or Adobe XD.
  • Basic knowledge of accessibility (a11y) standards and performance optimization.
  • Basic experience with Node.js, MongoDB, or working in a MERN stack environment.
  • Familiarity with AWS services or cloud deployment practices.
  • Experience with RESTful APIs or integrating with backend services.


Read more
AI Powered Logistics Company

AI Powered Logistics Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹12L - ₹16L / yr
MERN Stack
skill iconReact.js
TypeScript
skill iconNextJs (Next.js)
skill iconNodeJS (Node.js)
+7 more

Job Title: MERN STACK Developer

Location: Hybrid, 2-3 days WFO weekly (Bengaluru, India)


About the Role:

We're looking for a passionate and skilled Frontend Engineer with 1–3 years of experience to join our growing development team. This role is front-end-heavy, focused on building clean, scalable, and high-performance user interfaces using the latest technologies in the MERN stack, particularly Next.js, React, TypeScript, and Material UI (MUI).

You’ll work alongside a collaborative and talented team to design and build seamless web experiences that delight users. If you're excited about modern frontend architecture and want to grow in a fast-moving, remote-first environment, we'd love to hear from you.


Key Responsibilities:

  • Develop responsive, high-performance web applications using Next.js, React, and TypeScript.
  • Translate UI/UX designs into functional frontend components using MUI.
  • Collaborate with backend developers, designers, and product managers to deliver new features and improvements.
  • Ensure code quality through best practices, code reviews, and testing.
  • Optimize applications for maximum speed and scalability.


Must-Have Skills:

  • 1–3 years of professional experience in frontend development.
  • Strong proficiency in React, Next.js, and TypeScript.
  • Experience with Material UI (MUI) or similar component libraries.
  • Understanding of responsive design, modern frontend tooling, and web performance best practices.
  • Familiarity with Git and collaborative workflows.


Nice-to-Have (Bonus) Skills:

  • Familiarity with testing libraries (Jest, React Testing Library, Cypress).
  • Experience working with design tools like Figma or Adobe XD.
  • Basic knowledge of accessibility (a11y) standards and performance optimization.
  • Basic experience with Node.js, MongoDB, or working in a MERN stack environment.
  • Familiarity with AWS services or cloud deployment practices.
  • Experience with RESTful APIs or integrating with backend services.


Read more
AI Powered Logistics Company

AI Powered Logistics Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹32L / yr
skill iconNodeJS (Node.js)
NOSQL Databases
SQL
skill iconMongoDB
RabbitMQ
+19 more

Job Title: Backend Engineer - NodeJS, NestJS, and Python

Location: Hybrid, 2-3 days WFO weekly (Bengaluru, India)


About the role:

We are looking for a skilled and passionate Senior Backend Developer to join our dynamic team. The ideal candidate should have strong experience in Node.js and NestJS, along with a solid understanding of database management, query optimization, and microservices architecture. As a backend developer, you will be responsible for developing and maintaining scalable backend systems, building robust APIs, integrating databases, and working closely with frontend and DevOps teams to deliver high-quality software solutions.


What You'll Do 🛠️ 

  • Design, develop, and maintain server-side logic using Node.js, NestJS, and Python.
  • Develop and integrate RESTful APIs and microservices to support scalable systems.
  • Work with NoSQL and SQL databases (e.g., MongoDB, PostgreSQL, MySQL) to create and manage schemas, write complex queries, and optimize performance.
  • Collaborate with cross-functional teams including frontend, DevOps, and QA.
  • Ensure code quality, maintainability, and scalability through code reviews, testing, and documentation.
  • Monitor and troubleshoot production systems, ensuring high availability and performance.
  • Implement security and data protection best practices.


What You'll Bring 💼 

  • 4 to 6 years of professional experience as a backend developer.
  • Strong proficiency in Node.js and NestJS framework.
  • Good hands-on experience with Python (Django/Flask experience is a plus).
  • Solid understanding of relational and non-relational databases.
  • Proficient in writing complex NoSQL queries and SQL queries
  • Experience with microservices architecture and distributed systems.
  • Familiarity with version control systems like Git.
  • Basic understanding of containerization (e.g., Docker) and cloud services is a plus.
  • Excellent problem-solving skills and a collaborative mindset.

 

Bonus Points ➕ 

  • Experience with CI/CD pipelines.
  • Exposure to cloud platforms like AWS, GCP or Azure.
  • Familiarity with event-driven architecture or message brokers (MQTT, Kafka, RabbitMQ)


Why this role matters

You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.

Read more
AI Powered Logistics Company

AI Powered Logistics Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹36L / yr
DevOps
skill iconKubernetes
skill iconMongoDB
skill iconPython
skill iconDocker
+35 more

Job Title: Sr Dev Ops Engineer

Location: Bengaluru- India (Hybrid work type)

Reports to: Sr Engineer manager


About Our Client : 

We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without the costly infrastructure.


About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you. 


What You'll Do 🛠️

  • Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
  • Billing & Cost Optimization: Monitor and optimize cloud spending.
  • Containerization & Orchestration: Deploy and manage applications and orchestrate them.
  • Database Management: Deploy, manage, and optimize database instances and their lifecycles.
  • Authentication Solutions: Implement and manage authentication systems.
  • Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
  • Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
  • Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
  • Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
  • Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks. 


What You'll Bring 💼

  • Minimum of 4 years of experience in a DevOps or SRE role.
  • Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
  • Solid understanding of Linux fundamentals and command-line tools.
  • Extensive experience with CI/CD tools, GitLab CI.
  • Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
  • Proven experience deploying and managing microservices.
  • Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
  • Experience with Identity and Access management solutions like Keycloak.
  • Experience implementing backup and recovery solutions.
  • Familiarity with optimizing scaling, ideally with Karpenter.
  • Proficiency in scripting (Python, Bash).
  • Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
  • Excellent problem-solving and communication skills. 


Bonus Points ➕

  • Basic understanding of MQTT or general IoT concepts and protocols.
  • Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
  • Knowledge of specific AWS services relevant to application stacks.
  • Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
  • AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).


Why this role: 

• You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.

Read more
AI Powered Logistics Company

AI Powered Logistics Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹30L / yr
Reliability engineering
DevOps
Message Queuing Telemetry Transport (MQTT)
skill iconKubernetes
skill iconMongoDB
+24 more

Job Title: Sr Dev Ops Engineer

Location: Bengaluru- India (Hybrid work type)

Reports to: Sr Engineer manager


About Our Client : 

We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without the costly infrastructure.


About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you. 


What You'll Do 🛠️

  • Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
  • Billing & Cost Optimization: Monitor and optimize cloud spending.
  • Containerization & Orchestration: Deploy and manage applications and orchestrate them.
  • Database Management: Deploy, manage, and optimize database instances and their lifecycles.
  • Authentication Solutions: Implement and manage authentication systems.
  • Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
  • Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
  • Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
  • Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
  • Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks. 


What You'll Bring 💼

  • Minimum of 4 years of experience in a DevOps or SRE role.
  • Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
  • Solid understanding of Linux fundamentals and command-line tools.
  • Extensive experience with CI/CD tools, GitLab CI.
  • Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
  • Proven experience deploying and managing microservices.
  • Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
  • Experience with Identity and Access management solutions like Keycloak.
  • Experience implementing backup and recovery solutions.
  • Familiarity with optimizing scaling, ideally with Karpenter.
  • Proficiency in scripting (Python, Bash).
  • Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
  • Excellent problem-solving and communication skills. 


Bonus Points ➕

  • Basic understanding of MQTT or general IoT concepts and protocols.
  • Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
  • Knowledge of specific AWS services relevant to application stacks.
  • Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
  • AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).


Why this role: 

• You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.

Read more
Gruve
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
3 - 6 yrs
Upto ₹40L / yr (Varies
)
skill iconJava
skill iconSpring Boot
skill iconAmazon Web Services (AWS)
Windows Azure
DevOps
+1 more

We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.


Key Roles & Responsibilities:

  • Lead the design and development of scalable and secure software solutions using Java.
  • Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
  • Provide technical leadership, mentoring, and guidance to the development team.
  • Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
  • Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
  • Design and implement security analytics, automation workflows and ITSM integrations.
  •  Drive continuous improvements in engineering processes, tools, and technologies.
  • Troubleshoot complex technical issues and lead incident response for critical production systems.


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 3-6 years of software development experience, with expertise in Java.
  • Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
  • In-depth understanding of microservices architecture, APIs, and distributed systems.
  • Experience with containerization and orchestration tools like Docker and Kubernetes.
  • Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
  • Strong problem-solving skills and ability to work in an agile, fast-paced environment.
  • Excellent communication and leadership skills, with a track record of mentoring engineers.

 

Preferred Qualifications:

  • Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
  • Knowledge of zero-trust security models and secure API development.
  • Hands-on experience with machine learning or AI-driven security analytics.
Read more
PGAGI
Pooja Jain
Posted by Pooja Jain
Bengaluru (Bangalore)
2 - 7 yrs
₹6L - ₹14L / yr
skill iconPython
FastAPI
skill iconDjango
Computer Networking
skill iconAmazon Web Services (AWS)
+9 more

Backend Engineer - Python

Location

Bangalore, India

Experience Required

2-3 years minimum

Job Overview

We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.

Key Requirements

Technical Skills

• CS fundamentals are a must (CN, DBMS, OS, System Design, OOPS)

• Python Expertise: Advanced proficiency in Python with a deep understanding of frameworks like Django, FastAPI, or Flask

• Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization

• API Development: Strong experience in designing and implementing RESTful APIs and GraphQL

• Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services

• Containerization: Proficiency with Docker and Kubernetes

• Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka

• Version Control: Advanced Git workflows and collaboration

Experience Requirements

• Minimum 2-3 years of backend development experience

• Proven track record of working on enterprise-level applications

• Experience building scalable systems handling high traffic loads

• Background in microservices architecture and distributed systems

• Experience with CI/CD pipelines and DevOps practices

Responsibilities

• Design, develop, and maintain robust backend services and APIs

• Optimize application performance and scalability

• Collaborate with frontend teams and product managers

• Implement security best practices and data protection measures

• Write comprehensive tests and maintain code quality

• Participate in code reviews and architectural discussions

• Monitor system performance and troubleshoot production issues

Preferred Qualifications

• Knowledge of caching strategies (Redis, Memcached)

• Understanding of software architecture patterns

• Experience with Agile/Scrum methodologies

• Open source contributions or personal projects

Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Artificial Intelligence (AI)
Generative AI
skill iconPython
skill iconNodeJS (Node.js)
Vector database
+7 more

Location: Hybrid/ Remote

Type: Contract / Full‑Time

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Responsibilities:

  • Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation (see the retrieval sketch after this list).
  • Design and build Python‑based services (FastAPI) for generating and updating embeddings.
  • Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
  • Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
  • Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
  • Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
  • Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
  • Collaborate with frontend engineers to define API contracts and demo endpoints.
  • Document architecture diagrams, API specifications, and runbooks for future team onboarding.
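
As a rough, hypothetical sketch of the retrieval slice of such a pipeline: a FastAPI endpoint embeds an incoming query with the OpenAI embeddings API and ranks stored chunks by cosine similarity. The in-memory store stands in for a real vector database such as MongoDB Atlas Vector Search, and all names, models, and routes are assumptions rather than the project's actual code (assumes the openai>=1.0 Python client and Python 3.9+).

```python
# Hypothetical retrieval endpoint: embed the query, rank stored chunks by
# cosine similarity, and return the top matches as context for the chat step.
import numpy as np
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy in-memory "vector store": (chunk_text, embedding) pairs.
STORE: list[tuple[str, np.ndarray]] = []

class Query(BaseModel):
    text: str
    top_k: int = 3

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

@app.post("/retrieve")
def retrieve(q: Query) -> dict:
    qvec = embed(q.text)
    scored = [
        (chunk, float(np.dot(qvec, vec) / (np.linalg.norm(qvec) * np.linalg.norm(vec))))
        for chunk, vec in STORE
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return {"context": scored[: q.top_k]}
```

A real deployment would upsert embeddings into the vector index from the daily log pipeline and pass the retrieved context into the chat-completion call.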


Required Skills

  • Strong Python expertise (FastAPI, async programming).
  • Proficiency with Node.js and Express for API development.
  • Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
  • Familiarity with OpenAI’s APIs (embeddings, chat completions).
  • Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
  • Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).

  • Containerization skills (Docker).
  • Good understanding of RAG architectures, prompt design, and memory management.
  • Strong Git workflow and collaborative development practices (GitHub, CI/CD).


Nice‑to‑Have:

  • Experience with Llama family models or other open‑source LLMs.
  • Familiarity with MongoDB Atlas free tier and cluster management.
  • Background in data engineering for streaming or batch processing.
  • Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
  • Frontend skills in React to prototype demo UIs.
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Azure
skill iconJavascript
skill iconReact.js
+5 more

 Location: Hybrid/ Remote

Openings: 2

Experience: 5–12 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities

Architect & Design:

  • Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
  • Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
  • Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
  • Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.

Development & Debugging:

  • Write clean, maintainable, and efficient frontend code.
  • Debug and troubleshoot code to ensure robust, high-performing applications.
  • Develop reusable frontend libraries that can be leveraged across multiple projects.

AI Awareness (Preferred):

  • Understand AI/ML fundamentals and how they can enhance frontend applications.
  • Collaborate with teams integrating AI-based features into chat applications.

Collaboration & Reporting:

  • Work closely with cross-functional teams to align on architecture and deliverables.
  • Regularly report progress, identify risks, and propose mitigation strategies.

Quality Assurance:

  • Implement unit tests and end-to-end tests to ensure code quality.
  • Participate in code reviews and enforce best practices.


Required Skills 

  • 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
  • Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
  • Proficiency with Modern frameworks like React, Angular, or Node.js
  • Backend familiarity with Java, Spring Boot (or similar technologies).
  • Experience developing real-world, at-scale products.
  • General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
  • Strong problem-solving, debugging, and performance optimization skills.
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Fullstack Developer
skill iconJavascript
skill iconHTML/CSS
skill iconReact.js
skill iconSpring Boot
+9 more

Location: Hybrid/ Remote

Openings: 2

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or related field


Job Responsibilities


Problem Solving & Optimization:

  • Analyze and resolve complex technical and application issues.
  • Optimize application performance, scalability, and reliability.

Design & Develop:

  • Build, test, and deploy scalable full-stack applications with high performance and security.
  • Develop clean, reusable, and maintainable code for both frontend and backend.

AI Integration (Preferred):

  • Collaborate with the team to integrate AI/ML models into applications where applicable.
  • Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.

Technical Leadership & Mentorship:

  • Provide guidance, mentorship, and code reviews for junior developers.
  • Foster a culture of technical excellence and knowledge sharing.

Agile & Delivery Management:

  • Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
  • Define and scope backlog items, track progress, and ensure timely delivery.

Collaboration:

  • Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
  • Coordinate with geographically distributed teams.

Quality Assurance & Security:

  • Conduct peer reviews of designs and code to ensure best practices.
  • Implement security measures and ensure compliance with industry standards.

Innovation & Continuous Improvement:

  • Identify areas for improvement in the software development lifecycle.
  • Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.

Required Skills

  • Strong proficiency in JavaScript, HTML5, CSS3
  • Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
  • Backend development experience with Java, Spring Boot (Node.js is a plus)
  • Knowledge of REST APIs, microservices, and scalable architectures
  • Familiarity with cloud platforms (AWS, Azure, or GCP)
  • Experience with Agile/Scrum methodologies and JIRA for project tracking
  • Proficiency in Git and version control best practices
  • Strong debugging, performance optimization, and problem-solving skills
  • Ability to analyze customer requirements and translate them into technical specifications
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
0 - 2 yrs
₹3.5L - ₹4.5L / yr
Fullstack Developer
skill iconJavascript
skill iconReact.js
skill iconNodeJS (Node.js)
RESTful APIs
+6 more

Location: Hybrid/ Remote

Openings: 5

Experience: 0 - 2Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities:

Backend Development & APIs

  • Build microservices that provide REST APIs to power web frontends.
  • Design clean, reusable, and scalable backend code meeting enterprise security standards.
  • Conceptualize and implement optimized data storage solutions for high-performance systems.

Deployment & Cloud

  • Deploy microservices using a common deployment framework on AWS and GCP.
  • Inspect and optimize server code for speed, security, and scalability.

Frontend Integration

  • Work on modern front-end frameworks to ensure seamless integration with back-end services.
  • Develop reusable libraries for both frontend and backend codebases.


AI Awareness (Preferred)

  • Understand how AI/ML or Generative AI can enhance enterprise software workflows.
  • Collaborate with AI specialists to integrate AI-driven features where applicable.

Quality & Collaboration

  • Participate in code reviews to maintain high code quality.
  • Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.


Required Skills:

  • Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
  • Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
  • Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
  • Ability to design and implement RESTful APIs and understand their impact on client-side applications
  • Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
  • Experience working with Agile and Scrum methodologies
  • Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
Read more
Codnatives
Bengaluru (Bangalore), Pune
5 - 9 yrs
₹5L - ₹14L / yr
Data engineering
skill iconAmazon Web Services (AWS)
Amazon Redshift

• 5+ years of good experience in SQL and NoSQL database development and optimization.

• Strong hands-on experience with Amazon Redshift, MySQL, MongoDB, and Flyway.

• In-depth understanding of data warehousing principles and performance tuning techniques.

• Strong hands-on experience in building complex aggregation pipelines in NoSQL databases such as MongoDB (a small illustrative sketch follows this list).

• Proficient in Python or Scala for data processing and automation.

• 3+ years of experience working with AWS-managed database services.

• 3+ years of experience with Power BI or similar BI/reporting platforms.
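
As a small, hypothetical illustration of the aggregation-pipeline point above: total order value per customer over the last 30 days, computed server-side in MongoDB. The collection and field names are placeholders.

```python
# Hypothetical MongoDB aggregation pipeline via PyMongo (placeholder names).
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

since = datetime.now(timezone.utc) - timedelta(days=30)
pipeline = [
    {"$match": {"created_at": {"$gte": since}}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},
]

for row in orders.aggregate(pipeline):
    print(row["_id"], row["total"])
```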

Read more
VDart
Don Blessing
Posted by Don Blessing
Hyderabad, Bengaluru (Bangalore), Noida, Gurugram
5 - 15 yrs
₹10L - ₹15L / yr
skill iconPython
skill iconAmazon Web Services (AWS)
API

Job Description:


Title: Python AWS Developer with API

 

Tech Stack: AWS API Gateway, Lambda, Oracle RDS, SQL & database management, OOP principles, JavaScript, Object-Relational Mapper (ORM), Git, Docker, Java dependency management, CI/CD, AWS Cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).

 

Responsibilities: 

·      Build high-performance APIs using AWS services and Python; write and debug Python code and integrate applications with third-party web services (a minimal Lambda sketch follows this list).

·      Troubleshoot and debug non-production defects; back-end development and API work, with a main focus on coding and monitoring applications.

·      Design core application logic.

·      Support dependency teams in UAT and perform functional application testing, including Postman testing.
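
As a minimal, hypothetical sketch in the spirit of the stack above: an AWS Lambda handler behind API Gateway (proxy integration) that parses a JSON body and returns a JSON response. The payload fields are placeholders, not the actual application contract.

```python
# Hypothetical Lambda handler for an API Gateway proxy integration.
import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the HTTP body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```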

 

Read more
Tekit Software solution Pvt Ltd
Hyderabad, Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹27L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark
SQL

🔍 Job Description:

We are looking for an experienced and highly skilled Technical Lead to guide the development and enhancement of a large-scale Data Observability solution built on AWS. This platform is pivotal in delivering monitoring, reporting, and actionable insights across the client's data landscape.

The Technical Lead will drive end-to-end feature delivery, mentor junior engineers, and uphold engineering best practices. The position reports to the Programme Technical Lead / Architect and involves close collaboration to align on platform vision, technical priorities, and success KPIs.

🎯 Key Responsibilities:

  • Lead the design, development, and delivery of features for the data observability solution.
  • Mentor and guide junior engineers, promoting technical growth and engineering excellence.
  • Collaborate with the architect to align on platform roadmap, vision, and success metrics.
  • Ensure high quality, scalability, and performance in data engineering solutions.
  • Contribute to code reviews, architecture discussions, and operational readiness.


🔧 Primary Must-Have Skills (Non-Negotiable):

  • 5+ years in Data Engineering or Software Engineering roles.
  • 3+ years in a technical team or squad leadership capacity.
  • Deep expertise in AWS Data Services: Glue, EMR, Kinesis, Lambda, Athena, S3.
  • Advanced programming experience with PySpark, Python, and SQL.
  • Proven experience in building scalable, production-grade data pipelines on cloud platforms (a short Athena example follows this list).
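
As a small, hypothetical illustration of working with Athena from Python: submit a query with boto3 and poll for completion. The database, table, region, and S3 results location are assumptions, not details of the client's platform.

```python
# Hypothetical Athena query submission and polling with boto3.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM events GROUP BY status",
    QueryExecutionContext={"Database": "observability"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```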


Read more
Aeos Labs

at Aeos Labs

2 candid answers
Tejas Tholpadi
Posted by Tejas Tholpadi
Bengaluru (Bangalore)
0 - 2 yrs
₹7L - ₹9L / yr
TypeScript
skill iconPostgreSQL
skill iconNextJs (Next.js)
skill iconRedis
skill iconAmazon Web Services (AWS)
+2 more

We are looking for someone with a hacker mindset who is ready to pick up new problems and build full stack AI solutions for some of the biggest brands in the country and the world.

Read more
Tracxn

at Tracxn

1 recruiter
Tracxn Technologies
Posted by Tracxn Technologies
Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹25L / yr
skill iconPython
Shell Scripting
Ansible
Linux/Unix
skill iconAmazon Web Services (AWS)
+1 more

Mode of Hire: Permanent

Required Skills Set (Mandatory): Linux, Shell Scripting, Python, AWS, Security best practices, Git

Desired Skills (Good if you have): Ansible, Terraform


Job Responsibilities

  • Design, develop, and maintain deployment pipelines and automation tooling to improve platform efficiency, scalability, and reliability.
  • Manage infrastructure and services in production AWS environments.
  • Drive platform improvements with a focus on security, scalability, and operational excellence.
  • Collaborate with engineering teams to enhance development tooling, streamline access workflows, and improve platform usability through feedback.
  • Mentor junior engineers and help foster a culture of high-quality engineering and knowledge sharing.


Job Requirements

  • Strong foundational understanding of Linux systems.
  • Cloud experience (e.g., AWS) with strong problem-solving in cloud-native environments.
  • Proven track record of delivering robust, well-documented, and secure automation solutions.
  • Comfortable owning end-to-end delivery of infrastructure components and tooling.


Preferred Qualifications

  • Advanced system and cloud optimization skills.
  • Prior experience in platform teams or DevOps roles at product-focused startups.
  • Demonstrated contributions to internal tooling, open-source, or automation projects.
Read more
Bengaluru (Bangalore)
5 - 8 yrs
₹16L - ₹22L / yr
skill iconJava
skill iconSpring Boot
RESTful APIs
skill iconAmazon Web Services (AWS)
Object Oriented Programming (OOPs)

Job Description

We are looking for a hands-on Tech Lead – Java with strong software engineering fundamentals, a deep understanding of Java-based backend systems, and proven experience leading agile teams. This role involves a balance of individual contribution and technical leadership — mentoring developers, designing scalable architectures, and driving the success of product delivery in fast-paced environments.


Key Responsibilities

  • Lead the end-to-end design, development, and deployment of Java-based applications and RESTful APIs.
  • Collaborate with product managers and architects to define technical solutions and translate business requirements into scalable software.
  • Guide and mentor team members in best coding practices, design patterns, and architectural decisions.
  • Drive code reviews, technical discussions, and ensure high code quality and performance standards.
  • Troubleshoot critical production issues and implement long-term fixes and improvements.
  • Advocate for continuous improvement in tools, processes, and systems across the engineering team.
  • Stay up to date with modern technologies and recommend their adoption where appropriate.


Required Skills

  • 5+ years of experience in Java backend development with expertise in Spring/Spring Boot and RESTful services.
  • Solid grasp of Object-Oriented Programming (OOP), system design, and design patterns.
  • Proven experience leading a team of engineers or taking ownership of modules/projects.
  • Experience with AWS Cloud services (EC2, Lambda, S3, etc.) is a strong advantage.
  • Familiarity with Agile/Scrum methodologies and working in cross-functional teams.
  • Excellent problem-solving, debugging, and analytical skills.
  • Strong communication and leadership skills.


About HummingWave

HummingWave is a leading IT product development company specializing in building full-scale application systems with robust cloud backends, sleek mobile/web frontends, and seamless enterprise integrations. With 50+ digital products delivered across domains for clients in the US, Europe, and Asia-Pacific, we are a team of highly skilled engineers committed to technical excellence and innovation.



Thanks

Read more
Bengaluru (Bangalore), Mumbai
8 - 15 yrs
₹6L - ₹10L / yr
DevOps
skill iconAmazon Web Services (AWS)
skill iconDocker

• Create and manage Jenkins Pipelines using Linux, Groovy scripting, and Python

• Analyze and fix issues in Jenkins, GitHub, Nexus, SonarQube, and AWS Cloud

• Perform Jenkins, GitHub, SonarQube, and Nexus administration

• Create resources in the AWS environment using infrastructure-as-code; analyze and fix issues in AWS Cloud (a small scripted-provisioning sketch follows)


Good-to-Have


• AWS Cloud certification

• Terraform certification

• Kubernetes/Docker experience
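
For the AWS resource-creation bullet, here is a tiny, hypothetical boto3 sketch that creates a tagged S3 bucket from a pipeline step. This is plain scripted provisioning rather than infrastructure-as-code in the Terraform/CloudFormation sense, and the bucket name and region are placeholders.

```python
# Hypothetical scripted provisioning with boto3 (placeholder name and region).
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# Outside us-east-1, S3 requires an explicit LocationConstraint.
s3.create_bucket(
    Bucket="example-ci-artifacts-bucket",
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)
s3.put_bucket_tagging(
    Bucket="example-ci-artifacts-bucket",
    Tagging={"TagSet": [{"Key": "managed-by", "Value": "jenkins-pipeline"}]},
)
```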
Read more