
50+ DevOps Jobs in India

Apply to 50+ DevOps Jobs on CutShort.io. Find your next job, effortlessly. Browse DevOps Jobs and apply today!

Ekloud INC
Posted by ashwini rathod
INDIA
7 - 25 yrs
₹10L - ₹30L / yr
CSM
ServiceNow
RESTful APIs
DevOps
CSA
+1 more

Job Description

This is a Senior CSM ServiceNow Implementer position requiring a technical background and experience in Customer Service Management (CSM). Applicants should have 7–10 years of IT experience, including at least 5 years in ServiceNow development and a minimum of 2 years focused on CSM implementations. The role involves delivering enterprise-level solutions within an Agile framework.


Responsibilities include configuring and customizing features within the ServiceNow CSM module, such as case management, transform maps, access control lists (ACLs), Agent Workspace, Flow Designer, business rules, UI policies, UI scripts, and scheduled jobs.


The role also requires designing and implementing integrations with internal and external systems using REST and SOAP APIs. Experience working with IntegrationHub and MID Server is essential.
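
For a concrete flavour of that integration work, here is a minimal, hypothetical Python sketch of consuming the ServiceNow REST Table API from an external system. The instance URL, credentials, and CSM case table name are placeholders to verify against your own instance, and real CSM integrations would typically use OAuth rather than basic auth.

```python
"""Illustrative sketch only: calling the ServiceNow REST Table API from Python.
Instance, credentials, and table/query values are placeholders."""
import requests

INSTANCE = "https://example-instance.service-now.com"   # placeholder instance
TABLE = "sn_customerservice_case"                        # CSM case table (verify on your instance)
AUTH = ("integration.user", "password")                  # placeholder; prefer OAuth in practice


def fetch_open_cases(limit: int = 10) -> list[dict]:
    """Return open CSM cases as dictionaries using the Table API."""
    response = requests.get(
        f"{INSTANCE}/api/now/table/{TABLE}",
        params={
            "sysparm_query": "active=true",
            "sysparm_limit": limit,
            "sysparm_fields": "number,short_description,state,priority",
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]


if __name__ == "__main__":
    for case in fetch_open_cases():
        print(case["number"], "-", case.get("short_description", ""))
```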


The candidate will develop scalable, maintainable solutions following ServiceNow best practices and organizational DevOps guidelines, utilizing up-to-date platform features to maintain code quality.


Collaboration with architects, product owners, and QA teams is expected throughout the delivery cycle. The candidate may also provide technical guidance and mentorship to junior developers to support consistency and standards adherence.


Experience

  • 7–10 years of overall IT/software development experience
  • Minimum 5 years of ServiceNow development experience
  • At least 2 years of experience delivering CSM implementation projects

Technical Skills

  • Configuring and customizing features within the ServiceNow CSM module
  • Transform maps, access control lists (ACLs), Agent Workspace, Flow Designer, business rules, UI policies, UI scripts, and scheduled jobs
  • Strong expertise in building and consuming REST and SOAP APIs
  • Experience working with IntegrationHub and MID Server
  • Deep understanding of ServiceNow data models, including the CMDB, case tables, and SLAs
  • Familiarity with ServiceNow Agent Workspace, Virtual Agent, and Configurable Workflows

Certifications (Mandatory)

  • Certified System Administrator (CSA)
  • Certified Implementation Specialist – CSM

Soft Skills

  • Strong analytical and problem-solving skills
  • Effective communication and stakeholder engagement abilities
  • Ability to work independently and deliver in a fast-paced environment

Location: PAN India (flexible; any CG location)

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Reliability engineering
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+5 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications are a strong advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram
9 - 12 yrs
₹21L - ₹27L / yr
Java
Spring
Apache Kafka
SQL
PostgreSQL
+16 more

JOB DETAILS:

Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka

Industry: Global Digital Transformation Solutions Provider

Salary: Best in Industry

Experience: 9 to 12 years

Location: Trivandrum, Thiruvananthapuram

 

Job Description

Experience

  • 9+ years of experience in Java-based backend application development
  • Proven experience building and maintaining enterprise-grade, scalable applications
  • Hands-on experience working with microservices and event-driven architectures
  • Experience working in Agile and DevOps-driven development environments

 

Mandatory Skills

  • Advanced proficiency in core Java and enterprise Java concepts
  • Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
  • Strong expertise in SQL, including database design, query optimization, and performance tuning
  • Hands-on experience with PostgreSQL or other relational database management systems
  • Strong experience with Kafka or similar event-driven messaging and streaming platforms
  • Practical knowledge of CI/CD pipelines using GitLab
  • Experience with Jenkins for build automation and deployment processes
  • Strong understanding of GitLab for source code management and DevOps workflows

 

Responsibilities

  • Design, develop, and maintain robust, scalable, and high-performance backend solutions
  • Develop and deploy microservices using Spring or Micronaut frameworks
  • Implement and integrate event-driven systems using Kafka
  • Optimize SQL queries and manage PostgreSQL databases for performance and reliability
  • Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
  • Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
  • Ensure code quality through best practices, reviews, and automated testing

 

Good-to-Have Skills

  • Strong problem-solving and analytical abilities
  • Experience working with Agile development methodologies such as Scrum or Kanban
  • Exposure to cloud platforms such as AWS, Azure, or GCP
  • Familiarity with containerization and orchestration tools such as Docker or Kubernetes

 

Skills: Java, Spring Boot, Kafka development, CI/CD, PostgreSQL, GitLab

 

Must-Haves

Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)

 

 

*******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: only Trivandrum

F2F Interview on 21st Feb 2026

 

Insight Global

Agency job
via Covetus Technologies by Rahul Chauhan
Remote only
10 - 15 yrs
₹32L - ₹35L / yr
Netsuite
Microsoft Windows Azure
Firewall
AKS
DevOps
+1 more

Required Skills & Experience


  • 10+ years of experience in cloud architecture, platform engineering, or infrastructure engineering.
  • 6+ years of hands-on experience with Microsoft Azure in enterprise environments.
  • Proven experience designing and operating large-scale SaaS platforms.
  • Strong proficiency in:
    • Azure Networking (VNETs, ExpressRoute, Private Link, Firewalls)
    • Azure Compute (AKS, VM Scale Sets, Functions, App Services)
    • Infrastructure as Code (Terraform)
    • CI/CD tools (Azure DevOps, etc.)
    • Observability tools
  • In-depth knowledge of cloud security, identity, and compliance frameworks, especially in healthcare.
  • Experience with Kubernetes, container platforms, and cloud-native tooling.
  • Strong understanding of platform engineering concepts.


Nice to Have Skills & Experience


  • Azure certifications such as:
    • Azure Solutions Architect Expert
    • Azure DevOps Engineer Expert
    • Azure Security Engineer
  • Experience working in regulated industries (healthcare, financial services, government).
  • Knowledge of global regulatory requirements, including ISO 27001.
  • Experience with distributed systems design and microservices architecture.
  • Strong communication, leadership, and strategic planning skills.


Job Description


We are seeking an experienced Microsoft Azure Lead Practice Architect to lead the design, implementation, and evolution of our cloud platform powering our enterprise-scale SaaS health software products. In this role, you will provide architectural leadership, platform strategy guidance, and technical mentorship across the Platform Engineering team. You will shape how our cloud-native infrastructure is built and operated, ensuring it is secure, scalable, compliant, and highly reliable for the healthcare industry in a cost-effective manner. As a senior technical leader, you will partner closely with product engineering, security, compliance, DevOps, and operations teams to define best practices, engineer reusable platform components, and guide the adoption of Azure services and modern platform engineering principles.

Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
DevOps

Job Summary

We are looking for a skilled AWS DevOps Engineer with 5+ years of experience to design, implement, and manage scalable, secure, and highly available cloud infrastructure. The ideal candidate should have strong expertise in CI/CD, automation, containerization, and cloud-native deployments on Amazon Web Services.

Key Responsibilities

  • Design, build, and maintain scalable infrastructure on AWS
  • Implement and manage CI/CD pipelines for automated build, test, and deployment
  • Automate infrastructure using Infrastructure as Code (IaC) tools
  • Monitor system performance, availability, and security
  • Manage containerized applications and orchestration platforms
  • Troubleshoot production issues and ensure high availability
  • Collaborate with development teams for DevOps best practices
  • Implement logging, monitoring, and alerting systems

Required Skills

  • Strong hands-on experience with AWS services (EC2, S3, RDS, VPC, IAM, Lambda, CloudWatch)
  • Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI/CD
  • Infrastructure as Code using Terraform / CloudFormation
  • Containerization using Docker
  • Container orchestration using Kubernetes / EKS
  • Scripting knowledge in Python / Bash / Shell
  • Experience with monitoring tools (CloudWatch, Prometheus, Grafana)
  • Strong understanding of Linux systems and networking
  • Experience with Git and version control

Good to Have

  • Experience with configuration management tools (Ansible, Chef, Puppet)
  • Knowledge of microservices architecture
  • Experience with security best practices and DevSecOps
  • AWS Certification (Solutions Architect / DevOps Engineer)
  • Experience working in Agile/Scrum teams


Bengaluru (Bangalore)
6 - 7 yrs
₹35L - ₹40L / yr
Go Programming (Golang)
DevOps
NodeJS (Node.js)

📍 BeBetta | Full-time | Core Platform Engineering


The Opportunity

BeBetta is building the future of fan engagement — our north star is to become the “Netflix of Gaming.”

We’re creating a platform that is always-on, globally scalable, and effortlessly personalized.

To achieve this, we need a backend capable of supporting millions of concurrent users making real-time predictions during live events, with streaming-grade availability and ultra-low latency.

As our Senior Backend & Infrastructure Engineer, you won’t just build services — you’ll design, scale, and operate the entire platform they run on.

You will lead our transition to a Go-based distributed microservices architecture and take full ownership of both the backend systems and the cloud infrastructure that powers them end-to-end.

You’ll be the driving force behind:

  • The prediction engine
  • The rewards ledger
  • Real-time data pipelines
  • The reliability, scalability, and efficiency of the infrastructure underneath


What You’ll Achieve

🚀 Architect High-Performance Systems

  • Design and build our core backend in Golang
  • Deliver services that are fast, resilient, and horizontally scalable
  • Support streaming-scale, real-time workloads

⚡ Solve Hard Concurrency Problems

  • Ensure fairness and accuracy for thousands of simultaneous actions
  • Build systems that maintain consistency under heavy load

🧠 Drive Technical Strategy

  • Lead our evolution from Node.js to Go
  • Define clean service contracts and platform standards
  • Mentor engineers and raise the technical bar across the team

📈 Own Reliability at Scale

  • Design systems to meet strict SLAs/SLOs for availability and latency
  • Handle live event spikes confidently and predictably

🔧 Operate What You Build

  • Own uptime, performance, and production stability
  • Ship fast while maintaining safety and reliability

☁️ Design Cloud Architecture

  • Lead decisions around networking, compute, scaling models, and environments
  • Build secure, cost-efficient, production-grade infrastructure


DevOps & Infrastructure Responsibilities

You will own the platform foundation end-to-end, including:

  • CI/CD: Automated pipelines for build, testing, security checks, and progressive delivery
  • Containers & Orchestration: Docker and Kubernetes operations, Helm/Argo-based deployments and rollouts
  • Infrastructure as Code: Terraform-based, repeatable, secure AWS/GCP environments
  • Observability & Reliability: Define SLIs/SLOs, Implement monitoring, logging, tracing, and alerting
  • Performance & Cost Optimization: Reduce latency, Optimize cloud spend, Design caching and efficient data layers
  • Security & Compliance: Secrets management, IAM best practices, Vulnerability scanning
  • Incident Readiness: Lead incident response and postmortems, Establish a strong reliability culture
  • Capacity & Scaling Engineering: Model traffic and forecast load, Design systems that scale predictably during live events
  • Production Readiness: Chaos testing, Failure simulations, Disaster recovery planning, Game-day exercises


What You’ll Bring

  • Proven experience building high-performance backend systems in Golang
  • 7–8+ years in system design, distributed systems, and API development
  • Experience migrating systems from Node.js or service-based architectures
  • Deep knowledge of PostgreSQL and Redis
  • Strong expertise with Docker, Kubernetes, Terraform, AWS/GCP
  • CI/CD and progressive deployment experience
  • Solid infrastructure engineering fundamentals (networking, scaling, cloud architecture)
  • Experience operating production systems at high concurrency or live-event scale
  • Strong testing, observability, and clean architecture practices
  • Curiosity, ownership mindset, and grit


Why You’ll Love Working Here

This isn’t just another backend role.

It’s a chance to architect and operate the technology backbone behind the “Netflix of Gaming.”


You’ll have:

  • Autonomy to make big decisions
  • Ownership across software and infrastructure
  • The opportunity to build for massive, global scale
  • Direct impact on systems that must never go down

If you love building mission-critical systems and owning them end-to-end, this role is built for you.


One2n

Posted by Krunali Lole
Remote, Pune
9 - 12 yrs
₹30L - ₹45L / yr
SRE
Monitoring
DevOps
Terraform
OpenTelemetry
+7 more


About the role:

We are looking for a Staff Site Reliability Engineer who can operate at a staff level across multiple teams and clients. If you care about designing reliable platforms, influencing system architecture, and raising reliability standards across teams, you’ll enjoy working at One2N.

At One2N, you will work with our startups and enterprise clients, solving One-to-N scale problems where the proof of concept is already established and the focus is on scalability, maintainability, and long-term reliability. In this role, you will drive reliability, observability, and infrastructure architecture across systems, influencing design decisions, defining best practices, and guiding teams to build resilient, production-grade systems.


Key responsibilities:

  • Own and drive reliability and infrastructure strategy across multiple products or client engagements
  • Design and evolve platform engineering and self-serve infrastructure patterns used by product engineering teams
  • Lead architecture discussions around observability, scalability, availability, and cost efficiency.
  • Define and standardize monitoring, alerting, SLOs/SLIs, and incident management practices.
  • Build and review production-grade CI/CD and IaC systems used across teams
  • Act as an escalation point for complex production issues and incident retrospectives.
  • Partner closely with engineering leads, product teams, and clients to influence system design decisions early.
  • Mentor young engineers through design reviews, technical guidance, and best practices.
  • Improve Developer Experience (DX) by reducing cognitive load, toil, and operational friction.
  • Help teams mature their on-call processes, reliability culture, and operational ownership.
  • Stay ahead of trends in cloud-native infrastructure, observability, and platform engineering, and bring relevant ideas into practice


About you:

  • 9+ years of experience in SRE, DevOps, or software engineering roles
  • Strong experience designing and operating Kubernetes-based systems on AWS at scale
  • Deep hands-on expertise in observability and telemetry, including tools like OpenTelemetry, Datadog, Grafana, Prometheus, ELK, Honeycomb, or similar.
  • Proven experience with infrastructure as code (Terraform, Pulumi) and cloud architecture design.
  • Strong understanding of distributed systems, microservices, and containerized workloads.
  • Ability to write and review production-quality code (Golang, Python, Java, or similar)
  • Solid Linux fundamentals and experience debugging complex system-level issues
  • Experience driving cross-team technical initiatives.
  • Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
  • Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.


Nice to have:

  • Experience working in consulting or multi-client environments.
  • Exposure to cost optimization, or large-scale AWS account management
  • Experience building internal platforms or shared infrastructure used by multiple teams.
  • Prior experience influencing or defining engineering standards across organizations.


Foyforyou
Posted by Hardika Bhansali
Mumbai
6 - 12 yrs
₹15L - ₹35L / yr
DevOps

Engineering Manager – FOY (FoyForYou.com)

Function: Software Engineering → Leadership, Product Engineering, Architecture

Skills: Team Leadership, System Design, Mobile + Web Architecture, DevOps, CI/CD, Backend (Node.js), Cloud (AWS/GCP), Data-driven Decision Making

About FOY

FOY (FoyForYou.com) is one of India’s fastest-growing beauty & wellness destinations. We offer customers a curated selection of 100% authentic products, trusted brands, and a seamless shopping experience. Our mission is to make beauty effortless, personal, and accessible for every Indian.

As we scale aggressively across mobile, web, logistics, personalization, and omnichannel commerce—our engineering team is the core engine behind FOY’s growth. We’re looking for a technical leader who is excited to build the future of beauty commerce.

Job Description

We’re hiring an Engineering Manager (7–12 years) who can lead a high-performing engineering team, drive product delivery, architect scalable systems, and cultivate a culture of excellence.

This role is ideal for someone who has been a strong individual contributor and is now passionate about leading people, driving execution, and shaping FOY’s technology roadmap.

Responsibilities

1. People & Team Leadership

  • Lead, mentor, and grow a team of backend, frontend, mobile, and DevOps engineers.
  • Drive high performance through code quality, engineering discipline, and structured reviews.
  • Build a culture of ownership, speed, and continuous improvement.
  • Hire top talent and help scale the engineering org.

2. Technical Leadership & Architecture

  • Work with product, design, data, and business to define the technical direction for FOY.
  • Architect robust and scalable systems for:
    • Mobile commerce
    • High-scale APIs
    • Personalization & recommendations
    • Product/catalog systems
    • Payments, checkout, and logistics
  • Ensure security, reliability, and high availability across all systems.

3. Execution & Delivery

  • Own end-to-end delivery of key product features and platform capabilities.
  • Drive sprint planning, execution, tracking, and timely delivery.
  • Build and improve engineering processes—CI/CD pipelines, testing automation, release cycles.
  • Reduce tech debt and ensure long-term maintainability.

4. Quality, Performance & Observability

  • Drive standards across:
    • Code quality
    • System performance
    • Monitoring & alerting
    • Incident management
  • Work closely with QA, mobile, backend, and data teams to ensure smooth releases.

5. Cross-Functional Collaboration

  • Partner with product managers to translate business goals into technical outcomes.
  • Work with designers, marketing, operations, and supply chain to deliver user-centric solutions.
  • Balance trade-offs between speed and engineering excellence.

Requirements

  • 7–12 years of hands-on engineering experience with at least 2–4 years in a leadership or tech lead role.
  • Strong experience in backend development (Node/Python preferred).
  • Good exposure to mobile or web engineering (Native Android/iOS).
  • Strong understanding of system design, APIs, microservices, caching, databases, and cloud infrastructure (AWS/GCP).
  • Experience running Agile/Scrum teams with measurable delivery outputs.
  • Ability to dive deep into code when required while enabling the team to succeed independently.
  • Strong communication and stakeholder management skills.
  • Passion for building user-focused products and solving meaningful problems.

Bonus Points

  • Experience in e-commerce, high-scale consumer apps, or marketplaces.
  • Prior experience in a fast-growing startup environment.
  • Strong DevOps/Cloud knowledge (CI/CD, Kubernetes, SRE principles).
  • Data-driven approach to engineering decisions.
  • A GitHub/portfolio showcasing real projects or open-source contributions.

Why Build Your Engineering Career at FOY?

At FOY, engineering drives business. We move fast, build big, and aim high.

We look for:

1. Rockstar Team Players

Your leadership directly impacts product, growth, and customer experience.

2. Owners With Passion

You’ll be trusted with large problem spaces and will operate with autonomy and accountability.

3. Big Dreamers

We’re scaling quickly. If you dream big and thrive in ambitious environments, you’ll feel at home at FOY.

Join Us

If building and scaling world-class engineering teams excites you, we’d love to meet you.

Apply now and help FOY redefine beauty commerce in India.








Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity (a minimal sketch of this flow follows this list).
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
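
As an illustration of the large-file handling and history-import concerns above, here is a minimal Python sketch under stated assumptions: it relies on the standard `git p4` bridge bundled with Git plus Git LFS, assumes P4 connection settings are already configured, and the depot path, target directory, remote URL, and helper names are hypothetical placeholders rather than anything from this posting.

```python
"""Hypothetical helper for a Perforce-to-GitHub migration dry run.

Assumptions (not taken from the job post): Git with the bundled `git p4`
bridge and `git lfs` are installed, and P4 connection settings (P4PORT,
P4USER, etc.) are already configured."""
import subprocess
from pathlib import Path

DEPOT_PATH = "//depot/example-project"      # placeholder Perforce depot path
TARGET_DIR = Path("example-project-git")    # local Git repo created by git p4
GITHUB_REMOTE = "git@github.com:example-org/example-project.git"  # placeholder
SIZE_LIMIT = 100 * 1024 * 1024              # GitHub's 100 MB per-file limit


def run(*cmd, cwd=None):
    """Run a command and fail loudly so migration problems are not skipped silently."""
    subprocess.run(cmd, cwd=cwd, check=True)


def clone_from_perforce():
    # Import the full depot history (@all) into a fresh Git repository.
    run("git", "p4", "clone", f"--destination={TARGET_DIR}", f"{DEPOT_PATH}@all")


def find_oversized_files():
    # Files above the limit must move to Git LFS before GitHub will accept a push.
    return [
        p for p in TARGET_DIR.rglob("*")
        if p.is_file() and ".git" not in p.parts and p.stat().st_size > SIZE_LIMIT
    ]


def move_to_lfs(paths):
    # Rewrite the imported history so oversized blobs become LFS pointers.
    patterns = sorted({p.relative_to(TARGET_DIR).as_posix() for p in paths})
    if not patterns:
        return
    run("git", "lfs", "install", cwd=TARGET_DIR)
    run("git", "lfs", "migrate", "import",
        f"--include={','.join(patterns)}", "--everything", cwd=TARGET_DIR)


def push_to_github():
    run("git", "remote", "add", "origin", GITHUB_REMOTE, cwd=TARGET_DIR)
    run("git", "push", "-u", "origin", "--all", cwd=TARGET_DIR)


if __name__ == "__main__":
    clone_from_perforce()
    move_to_lfs(find_oversized_files())
    push_to_github()
```

Because `git lfs migrate import` rewrites commits, a step like this is best run before the repository is pushed or shared for the first time.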

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram, Pune
3 - 5 yrs
₹15L - ₹25L / yr
Terraform
Splunk
DevOps
Windows Azure
SQL Azure
+12 more

JOB DETAILS:

* Job Title: Lead I - Azure, Terraform, GitLab CI 

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 3-5 years

* Location: Trivandrum/Pune

 

Job Description

Job Title: DevOps Engineer

Experience: 4–8 Years 

Location: Trivandrum & Pune 

Job Type: Full-Time

Mandatory skills: Azure, Terraform, GitLab CI, Splunk

 

Job Description

We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.

 

Key Responsibilities

  • Design, manage, and automate Azure cloud infrastructure using Terraform.
  • Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
  • Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
  • Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
  • Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
  • Analyze system logs and performance metrics to troubleshoot and optimize performance.
  • Ensure infrastructure security, compliance, and scalability best practices are followed.

 

Mandatory Skills

Candidates must have hands-on experience with the following technologies:

  • Azure – Cloud infrastructure management and deployment
  • Terraform – Infrastructure as Code for scalable provisioning
  • GitLab CI – Pipeline development, automation, and integration
  • Splunk – Monitoring, logging, and troubleshooting production systems

 

Preferred Skills

  • Experience with Harness (for CI/CD)
  • Familiarity with Azure Monitor and Dynatrace
  • Scripting proficiency in Python, Bash, or PowerShell
  • Understanding of DevOps best practices, containerization, and microservices architecture
  • Exposure to Agile and collaborative development environments

 

Skills Summary

Mandatory: Azure, Terraform, GitLab CI, Splunk. Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell

 

Skills: Azure, Splunk, Terraform, GitLab CI

 

******

Notice period - 0 to 15 days only

Job stability is mandatory

Location: Trivandrum/Pune

Digital transformation excellence provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
12 - 20 yrs
₹30L - ₹40L / yr
Product Management
Business-to-business
Analytics
Product engineering
Procurement management
+26 more

 JOB DETAILS:

* Job Title: Head of Engineering/Senior Product Manager

* Industry: Digital transformation excellence provider

* Salary: Best in Industry

* Experience: 12-20 years

* Location: Mumbai

 

Job Description

Role Overview

The VP / Head of Technology will lead the company's technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company's core verticals, including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.

This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.

 

Roles and Responsibilities:

Technology Execution & Architecture Leadership

·        Own and execute the technology roadmap aligned with business goals.

·        Build and maintain scalable architecture supporting multiple verticals.

·        Enforce engineering best practices, code quality, performance, and security.

·        Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.

 

Product & Engineering Delivery

·        Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.

·        Own the full SDLC — requirements, design, development, testing, deployment, support.

·        Implement Agile, DevOps, CI/CD for faster releases and improved reliability.

·        Oversee product/platform interoperability across all company systems.

 

Vertical-Specific Technology Leadership

Procurement Tech:

·        Lead architecture and enhancements of procurement and indirect spend platforms.

·        Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.

 

eCommerce:

·        Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.

 

Sustainability/ESG:

·        Support development of GHG tracking, reporting systems, and sustainability analytics platforms.

 

Business Services:

·        Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.

 

Data, Cloud, Security & Infrastructure

·        Own cloud infrastructure strategy (Azure/AWS/GCP).

·        Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).

·        Lead cybersecurity policies, monitoring, threat detection, and recovery planning.

·        Drive observability, cost optimization, and system scalability.

 

AI, Automation & Innovation

·        Integrate AI/ML, analytics, and automation into product platforms and service delivery.

·        Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.

·        Lead R&D for emerging tech aligned to business needs.

 

Leadership & Team Management

·        Lead and mentor engineering managers, architects, developers, QA, and DevOps.

·        Drive a culture of ownership, innovation, continuous learning, and performance accountability.

·        Build capability development frameworks and internal talent pipelines.

 

Stakeholder Collaboration

·        Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.

·        Ensure transparent reporting on project status, risks, and technology KPIs.

·        Manage vendor relationships, technology partnerships, and external consultants.

 

Education, Training, Skills, and Experience Requirements:

Experience & Background

·        16+ years in technology execution roles, including 5–7 years in senior leadership.

·        Strong background in multi-product engineering for B2B platforms or enterprise systems.

·        Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.

 

Technical Skills

·        Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.

·        Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.

·        Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.

·        Strong understanding of security, compliance, scalability, performance engineering.

 

Leadership Competencies

·        Execution-focused technology leadership.

·        Strong communication and stakeholder management skills.

·        Ability to lead distributed teams, manage complexity, and drive measurable outcomes.

·        Innovation mindset with practical implementation capability.

 

Education

·        Bachelor’s or Master’s in Computer Science/Engineering or equivalent.

·        Additional leadership education (MBA or similar) is a plus, not mandatory.

 

Travel Requirements

·        Occasional travel for client meetings, technology reviews, or global delivery coordination.

 

Must-Haves

·        10+ years of technology experience, with at least 6 years leading large (50-100+) multi-product engineering teams.

·        Must have worked on B2B Platforms. Experience in Procurement Tech or Supply Chain

·        Min. 10+ Years of Expertise in Cloud-Native Architecture, Expert-level design in Azure, AWS, or GCP using Microservices, Kubernetes (K8s), and Docker.

·        Min. 8+ Years of Expertise in Modern Engineering Practices, Advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).

·        Hands-on leadership experience in Security & Compliance.

·        Min. 3+ Years of Expertise in AI & Data Engineering, Practical implementation of LLMs, Predictive Analytics, or AI-driven automation

·        Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.

·        Min. 6+ years of expertise in B2B eCommerce: headless commerce architecture, marketplace integrations, and complex B2B catalog management.

·        Strong product management exposure

·        Proven experience in leading end-to-end team operations

·        Relevant experience in product-driven organizations or platforms

·        Strong Subject Matter Expertise (SME)

 

Education: Master's degree.

 

**************

Joining time / Notice Period: Immediate to 45 days.

Location: Andheri

5 days working (3 days from office, 2 days from home)

Hyderabad
8 - 12 yrs
₹29L - ₹38L / yr
AWS Cloud Engineering
DevOps
Cloud Operations
EC2
VPC
+4 more

Role & Responsibilities

We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.

Key Responsibilities-

  • Operate and support AWS production environments across multiple accounts
  • Manage infrastructure using Terraform and support CI/CD pipelines
  • Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
  • Build and manage Docker images and push to Amazon ECR
  • Monitor systems using CloudWatch and third-party tools; respond to incidents
  • Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
  • Assist with cost optimization, tagging, and governance standards
  • Automate operational tasks using Python, Lambda, and Systems Manager (see the sketch below)
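
As a flavour of the Python/Lambda automation mentioned above, here is a small hypothetical handler that reports running EC2 instances missing a required cost-allocation tag; the tag key, the DRY_RUN switch, and the stop action are illustrative choices rather than requirements from this posting.

```python
"""Hypothetical operational-automation example: a Lambda-style handler that
flags (and optionally stops) running EC2 instances missing a required tag."""
import boto3

REQUIRED_TAG = "CostCenter"   # hypothetical governance tag
DRY_RUN = True                # report only; flip to False to actually stop instances

ec2 = boto3.client("ec2")


def lambda_handler(event, context):
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])

    if untagged and not DRY_RUN:
        # Stop non-compliant instances; in practice this might notify owners first.
        ec2.stop_instances(InstanceIds=untagged)

    return {"untagged_running_instances": untagged, "stopped": not DRY_RUN}
```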

Ideal Candidate

  • Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Experience with Terraform and Git-based workflows
  • Hands-on experience with Kubernetes / EKS
  • Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Scripting experience in Python or Bash
  • Understanding of monitoring, incident management, and cloud security basics

Nice to Have-

  • AWS Associate-level certifications
  • Experience with Karpenter, Prometheus, New Relic
  • Exposure to FinOps and cost optimization practices
Remote only
5 - 15 yrs
₹12L - ₹24L / yr
Systems Development Life Cycle (SDLC)
Agile/Scrum
Waterfall
JIRA
DevOps
+5 more

Summary

We are seeking a skilled and experienced Business Analyst cum Project Manager to lead the delivery of digital solutions across diverse projects. This hybrid role requires a professional who can effectively bridge business needs with technical execution, manage project timelines, and ensure successful outcomes through structured methodologies.

Key Responsibilities

  • Lead end-to-end project delivery, from initiation to closure, ensuring alignment with business goals.
  • Gather, analyze, and document business requirements and translate them into clear user stories and functional specifications.
  • Manage project plans, timelines, risks, and deliverables using Agile, Waterfall, or hybrid methodologies.
  • Facilitate stakeholder meetings, workshops, sprint planning, and retrospectives.
  • Collaborate with cross-functional teams including developers, testers, and business stakeholders.
  • Conduct user acceptance testing (UAT) and ensure solutions meet business expectations.
  • Coordinate and support testing activities, including functional, integration, and compliance-related testing.
  • Maintain comprehensive documentation including business process models, user guides, and project reports.
  • Monitor project progress and provide regular updates to stakeholders.

Required Skills & Experience

  • Minimum 5 years of experience in a combined Business Analyst and Project Manager role.
  • Strong understanding of SDLC, Agile, Waterfall, and hybrid project delivery frameworks.
  • Proficiency in tools such as Jira, Azure DevOps, Microsoft Power Platform, Visio, PowerPoint, and Word.
  • Experience with UML modeling (Use Case, Sequence, Activity, and Class diagrams).
  • Hands-on experience in testing, including ISO standards–related projects and other compliance or regulatory-driven initiatives.
  • Strong analytical, problem-solving, and conceptual thinking skills.
  • Excellent communication and stakeholder management abilities.
  • Ability to manage multiple projects and priorities in a fast-paced environment.
  • Familiarity with UX principles and customer journey mapping concepts.

Preferred Qualifications

  • Bachelor’s degree in Business, Information Systems, Computer Science, or a related field.
  • Experience working with remote and cross-functional teams.
  • Exposure to data-driven decision-making and digital product development.
  • Prior involvement in quality assurance, audit preparation, or ISO certification support projects is an added advantage.

Note

This is an immediate requirement. The position will be initially offered as a contract role and is expected to be converted to a permanent position based on performance and business requirements.

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹38L - ₹50L / yr
Java
Spring Boot
CI/CD
Spring
Microservices
+16 more

Job Details

- Job Title: SDE-3

Industry: Technology

Domain - Information technology (IT)

Experience Required: 5-8 years

Employment Type: Full Time

Job Location: Bengaluru

CTC Range: Best in Industry

 

Role & Responsibilities

As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.

 

Key Responsibilities:

Technical Leadership-

  • Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
  • Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
  • Review code and ensure adherence to best practices, coding standards, and security guidelines.

System Architecture and Design-

  • Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
  • Own the architecture of core modules and contribute to overall platform scalability and reliability.
  • Advocate for and implement microservices architecture, ensuring modularity and reusability.

Problem Solving and Optimization-

  • Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
  • Optimize database queries and design scalable data storage solutions.
  • Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.

Innovation and Continuous Improvement-

  • Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
  • Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
  • Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.

Collaboration and Communication-

  • Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
  • Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.

 

Ideal Candidate

  • Strong Java Backend Engineer.
  • Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
  • Must have been SDE-2 for at least 2.5 years
  • Hands-on experience with RESTful APIs and microservices architecture
  • Strong understanding of distributed systems, multithreading, and async programming
  • Experience with relational and NoSQL databases
  • Exposure to Kafka/RabbitMQ and Redis/Memcached
  • Experience with AWS / GCP / Azure, Docker, and Kubernetes
  • Familiar with CI/CD pipelines and modern DevOps practices
  • Product companies (B2B SaaS preferred)
  • Has stayed for at least 2 years with each previous company
  • Education: B.Tech in Computer Science from Tier 1 or Tier 2 colleges


AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
7 - 12 yrs
₹40L - ₹80L / yr
Machine Learning (ML)
Apache Spark
Apache Airflow
Python
Amazon Web Services (AWS)
+23 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including at recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
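
For illustration, here is a minimal Apache Airflow DAG sketching the train, validate, and deploy flow implied above, assuming a recent Airflow 2.x / MWAA environment; the dag_id, schedule, and task bodies are hypothetical placeholders rather than details from this posting.

```python
"""Minimal Airflow 2.x DAG sketch of a train -> validate -> deploy ML pipeline.
All names and task bodies are placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(**context):
    # Placeholder: submit a Spark/EMR training job or call a training service.
    print("training model...")


def validate_model(**context):
    # Placeholder: compare new model metrics against the current production model.
    print("validating model...")


def deploy_model(**context):
    # Placeholder: push the approved model artifact/image to serving (e.g. EKS).
    print("deploying model...")


with DAG(
    dag_id="example_ml_training_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["mlops", "example"],
) as dag:
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    validate = PythonOperator(task_id="validate_model", python_callable=validate_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    train >> validate >> deploy
```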

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field. 


Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹5L / yr
DevOps
Windows Azure
CI/CD
MySQL
Python
+12 more

JOB DETAILS:

* Job Title: DevOps Engineer (Azure)

* Industry: Technology

* Salary: Best in Industry

* Experience: 2-5 years

* Location: Bengaluru, Koramangala

Review Criteria

  • Strong Azure DevOps Engineer Profiles.
  • Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
  • Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
  • Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
  • Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
  • Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis

 

Preferred

  • Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
  • Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
  • Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline

 

Role & Responsibilities

  • Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
  • Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
  • Implement Git branching strategies and automate release workflows.
  • Develop scripts using Bash, Python, or PowerShell for DevOps automation (see the sketch after this list).
  • Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
  • Collaborate with dev and QA teams in an Agile/Scrum environment.
  • Maintain documentation, runbooks, and participate in root cause analysis.
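
As one example of the kind of Python automation listed above, here is a hypothetical helper that queries the Azure DevOps Builds REST API for recent failed pipeline runs; the organization, project, and PAT environment variable are placeholders, not details from this posting.

```python
"""Hypothetical monitoring helper: list recent failed Azure DevOps pipeline runs.
Organization, project, and the PAT environment variable are placeholders."""
import os

import requests

ORGANIZATION = "example-org"        # placeholder Azure DevOps organization
PROJECT = "example-project"         # placeholder project
PAT = os.environ["AZDO_PAT"]        # personal access token (assumed to be set)


def recent_failed_builds(top: int = 10) -> list[dict]:
    """Return the most recent completed builds that failed."""
    url = f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/build/builds"
    response = requests.get(
        url,
        params={
            "statusFilter": "completed",
            "resultFilter": "failed",
            "$top": top,
            "api-version": "7.0",
        },
        auth=("", PAT),  # basic auth with an empty username and the PAT
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["value"]


if __name__ == "__main__":
    for build in recent_failed_builds():
        print(build["buildNumber"], build["definition"]["name"], build["result"])
```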

 

Ideal Candidate

  • 2–5 years of experience as an Azure DevOps Engineer.
  • Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
  • Experience with Microsoft Azure (OCI/AWS exposure is a plus).
  • Working knowledge of SQL Server, PostgreSQL, or Oracle.
  • Good scripting, troubleshooting, and communication skills.
  • Bonus: Docker, Kubernetes, Terraform, Ansible experience.
  • Comfortable with WFO (Koramangala, Bangalore).


Talent Pro
Posted by Mayank choudhary
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹6L / yr
DevOps
Azure

Strong Azure DevOps Engineer Profiles.

Mandatory (Experience 1) – Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).

Mandatory (Experience 2) – Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.

Mandatory (Experience 3) – Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell

Mandatory (Experience 4) – Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database

Mandatory (Experience 5) – Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis

Preferred

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Kochi (Cochin)
4 - 6 yrs
₹11L - ₹17L / yr
Windows Azure
Python
SQL Azure
databricks
PySpark
+15 more

JOB DETAILS:

* Job Title: Associate III - Azure Data Engineer 

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4 -6 years

* Location: Trivandrum, Kochi

Job Description: Azure Data Engineer (4–6 Years Experience)

Job Type: Full-time 

Locations: Kochi, Trivandrum

 

Must-Have Skills

Azure & Data Engineering

  • Azure Data Factory (ADF)
  • Azure Databricks (PySpark)
  • Azure Synapse Analytics
  • Azure Data Lake Storage Gen2
  • Azure SQL Database

 

Programming & Querying

  • Python (PySpark)
  • SQL / Spark SQL

 

Data Modelling

  • Star & Snowflake schema
  • Dimensional modelling

 

Source Systems

  • SQL Server
  • Oracle
  • SAP
  • REST APIs
  • Flat files (CSV, JSON, XML)

 

CI/CD & Version Control

  • Git
  • Azure DevOps / GitHub Actions

 

Monitoring & Scheduling

  • ADF triggers
  • Databricks jobs
  • Log Analytics

 

Security

  • Managed Identity
  • Azure Key Vault
  • Azure RBAC / Access Control

 

Soft Skills

  • Strong analytical & problem-solving skills
  • Good communication and collaboration
  • Ability to work in Agile/Scrum environments
  • Self-driven and proactive

 

Good-to-Have Skills

  • Power BI basics
  • Delta Live Tables
  • Synapse Pipelines
  • Real-time processing (Event Hub / Stream Analytics)
  • Infrastructure as Code (Terraform / ARM templates)
  • Data governance tools like Azure Purview
  • Azure Data Engineer Associate (DP-203) certification

 

Educational Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.

 

Skills: Azure Data Factory, Azure Databricks, Azure Synapse, Azure Data Lake Storage

 

Must-Haves

Azure Data Factory (4-6 years), Azure Databricks/PySpark (4-6 years), Azure Synapse Analytics (4-6 years), SQL/Spark SQL (4-6 years), Git/Azure DevOps (4-6 years)

Skills: Azure, Azure Data Factory, Python, PySpark, SQL, REST API, Azure DevOps

Relevant experience: 4-6 years

Python is mandatory

 

******

Notice period - 0 to 15 days only (Feb joiners’ profiles only)

Location: Kochi

F2F Interview 7th Feb

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
6 - 9 yrs
₹13L - ₹22L / yr
Web API
C#
.NET
Amazon Web Services (AWS)
Agile/Scrum
+19 more

JOB DETAILS:

* Job Title: Lead I - Web API, C# .NET, .NET Core, AWS (Mandatory)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 6–9 years

* Location: Hyderabad

Job Description

Role Overview

We are looking for a highly skilled Senior .NET Developer who has strong experience in building scalable, high‑performance backend services using .NET Core and C#, with hands‑on expertise in AWS cloud services. The ideal candidate should be capable of working in an Agile environment, collaborating with cross‑functional teams, and contributing to both design and development. Experience with React and Datadog monitoring tools will be an added advantage.
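The role itself is C#/.NET, but as a language-neutral illustration of calling two of the AWS services mentioned here (S3 and Lambda), the following Python/boto3 sketch shows the basic pattern. The bucket and function names are hypothetical, the local file is a placeholder, and credentials are assumed to come from the standard AWS credential chain.

```python
# Minimal sketch: upload an artifact to S3, then invoke a Lambda function synchronously.
import json
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Upload a local artifact (placeholder file) to a hypothetical bucket.
s3.upload_file("report.json", "my-example-bucket", "reports/report.json")

# Invoke a hypothetical Lambda function and read its response payload.
response = lam.invoke(
    FunctionName="my-example-function",
    InvocationType="RequestResponse",
    Payload=json.dumps({"reportKey": "reports/report.json"}).encode("utf-8"),
)
print(json.loads(response["Payload"].read()))
```

The equivalent calls exist in the AWS SDK for .NET, which is what this role would use day to day.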

 

Key Responsibilities

  • Design, develop, and maintain backend services and APIs using .NET Core and C#.
  • Work with AWS services (Lambda, S3, ECS/EKS, API Gateway, RDS, etc.) to build cloud‑native applications.
  • Collaborate with architects and senior engineers on solution design and implementation.
  • Write clean, scalable, and well‑documented code.
  • Use Postman to build and test RESTful APIs.
  • Participate in code reviews and provide technical guidance to junior developers.
  • Troubleshoot and optimize application performance.
  • Work closely with QA, DevOps, and Product teams in an Agile setup.
  • (Optional) Contribute to frontend development using React.
  • (Optional) Use Datadog for monitoring, logging, and performance metrics.

 

Required Skills & Experience

  • 6+ years of experience in backend development.
  • Strong proficiency in C# and .NET Core.
  • Experience building RESTful services and microservices.
  • Hands‑on experience with AWS cloud platform.
  • Solid understanding of API testing using Postman.
  • Knowledge of relational databases (SQL Server, PostgreSQL, etc.).
  • Strong problem‑solving and debugging skills.
  • Experience working in Agile/Scrum teams.

 

Good to Have

  • Experience with React for frontend development.
  • Exposure to Datadog for monitoring and logging.
  • Knowledge of CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline, etc.).
  • Containerization experience (Docker, Kubernetes).

 

Soft Skills

  • Strong communication and collaboration abilities.
  • Ability to work in a fast‑paced environment.
  • Ownership mindset with a focus on delivering high‑quality solutions.

 

Skills

.NET Core, C#, AWS, Postman

 

Notice period - 0 to 15 days only

Location: Hyderabad

Virtual Interview: 7th Feb 2026

First round will be Virtual

2nd round will be F2F

Read more
Global digital transformation solutions provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 10 yrs
₹8L - ₹20L / yr
Automated testing
Software Testing (QA)
Mobile App Testing (QA)
Web applications
JavaScript
+17 more

JOB DETAILS:

* Job Title: Tester III - Software Testing- Playwright + API testing

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4–10 years

* Location: Hyderabad

Job Description

Responsibilities:

  • Design, develop, and maintain automated test scripts for web applications using Playwright (a minimal sketch follows this list).
  • Perform API testing using industry-standard tools and frameworks.
  • Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
  • Analyze test results, identify defects, and track them to closure.
  • Participate in requirement reviews, test planning, and test strategy discussions.
  • Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
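To ground the Playwright responsibility above, here is a minimal sketch of a check written with the Playwright Python bindings (the posting equally allows JavaScript or TypeScript). The target URL and expected title are placeholders.

```python
# Minimal Playwright sketch: open a page headlessly and assert on its title.
from playwright.sync_api import sync_playwright

def test_homepage_title():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")   # hypothetical application under test
        assert "Example" in page.title()    # simple assertion on the page title
        browser.close()

if __name__ == "__main__":
    test_homepage_title()
    print("homepage title check passed")
```

In practice such checks would live in a pytest suite and run headlessly inside the CI/CD pipelines mentioned below.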

 

Required Experience:

  • Strong hands-on experience in Automation Testing for web-based applications.
  • Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
  • Solid experience in API testing (Postman, REST Assured, or similar tools).
  • Good understanding of software QA methodologies, tools, and processes.
  • Ability to write clear, concise test cases and automation scripts.
  • Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.

 

Good to Have:

  • Knowledge of cloud environments (AWS/Azure)
  • Experience with version control tools like Git
  • Familiarity with Agile/Scrum methodologies

 

Skills: automation testing, sql, api testing, soap ui testing, playwright

Read more
Global digital transformation solutions provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
7 - 9 yrs
₹16L - ₹20L / yr
React Native
Mobile App Development
Scalability
Architecture
Microservices
+7 more

JOB DETAILS:

- Job Title: Lead II - Software Engineering- React Native - React Native, Mobile App Architecture, Performance Optimization & Scalability

- Industry: Global digital transformation solutions provider

- Experience: 7-9 years

- Working Days: 5 days/week

- Job Location: Mumbai

- CTC Range: Best in Industry

 

Job Description

Job Title

Lead React Native Developer (6–8 Years Experience)

 

Position Overview

We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.

 

Must-Have Skills

  • 6–8 years of experience in mobile application development
  • Extensive hands-on experience leading React Native projects
  • Expert-level understanding of React Native architecture and internals
  • Strong knowledge of mobile app architecture patterns
  • Proven experience with performance optimization and scalability
  • Experience in technical leadership, team management, and mentorship
  • Strong problem-solving and analytical skills
  • Excellent communication and collaboration abilities
  • Proficiency in modern React Native development practices
  • Experience with Expo toolkit and libraries
  • Strong understanding of custom hooks development
  • Focus on writing clean, maintainable, and scalable code
  • Understanding of mobile app lifecycle
  • Knowledge of cross-platform design consistency

 

Good-to-Have Skills

  • Experience with microservices architecture
  • Knowledge of cloud platforms such as AWS, Firebase, etc.
  • Understanding of DevOps practices and CI/CD pipelines
  • Experience with A/B testing and feature flag implementation
  • Familiarity with machine learning integration in mobile applications
  • Exposure to innovation-driven technical decision-making

 

Skills: React native, mobile app development, devops, machine learning

 

******

Notice period - 0 to 15 days only (Need Feb Joiners)

Location: Navi Mumbai, Belapur

Read more
AsperAI

at AsperAI

Bisman Gill
Posted by Bisman Gill
BLR
3 - 6 yrs
Upto ₹33L / yr (Varies)
CI/CD
Kubernetes
Docker
Kubeflow
TensorFlow
+7 more

About the Role

We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps— helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.

The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
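As a concrete, minimal example of the pipeline orchestration described here, the sketch below defines a single-task Airflow DAG for a retraining step. It assumes Airflow 2.x; the dag_id, schedule, and task body are illustrative only.

```python
# Minimal Airflow 2.x DAG sketch for one step of an ML pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    # Placeholder for the actual training/validation step.
    print("retraining model...")

with DAG(
    dag_id="ml_retraining_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # newer Airflow versions prefer the `schedule=` argument
    catchup=False,
) as dag:
    retrain = PythonOperator(task_id="retrain_model", python_callable=retrain_model)
```

The same skeleton extends to multi-task ETL, ML, and agentic pipelines by chaining additional operators.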


Key Responsibilities

  • AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
  • Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
  • Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
  • Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
  • Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
  • Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
  • Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2+ years in AI/ML-focused operations.
  • Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
  • Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
  • Proficiency in Python and/or other scripting languages for automation.
  • Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
  • Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
  • Knowledge of data governance, model drift detection, and compliance in AI systems.
  • Excellent problem-solving, communication, and collaboration skills.

Nice-to-Have

  • Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
  • Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
  • Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
  • Contributions to open-source MLOps/AI Ops tools or platforms.
  • Exposure to Responsible AI practices, model fairness, and explainability frameworks

Why Join Us

  • Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
  • Work alongside leading data scientists and engineers on cutting-edge AI solutions.
  • Competitive compensation, benefits, and career growth opportunities.
Read more
MNC


Agency job
via rekha by Rekja Gorle
Mumbai
5 - 10 yrs
₹10L - ₹25L / yr
Windows Azure
DevOps
Microsoft Windows Azure
Kubernetes
Google Cloud Platform (GCP)
+6 more

We are hiring a Senior DevOps Engineer (5–10 years experience) with strong hands-on expertise in AWS, CI/CD, Docker, Kubernetes, and Linux. The role involves designing, automating, and managing scalable cloud infrastructure and deployment pipelines. Experience with Terraform/Ansible, monitoring tools, and security best practices is required. Immediate joiners preferred.

Read more
Impacto Digifin Technologies

at Impacto Digifin Technologies

Navitha Reddy
Posted by Navitha Reddy
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹7L / yr
Python
Automation
Test Automation (QA)
Object Oriented Programming (OOPs)
RESTful APIs
+10 more

Job Description: Python Automation Engineer

Location: Bangalore (Office-based)

Experience: 1–2 Years

Joining: Immediate to 30 Days


Role Overview

We are looking for a Python Automation Engineer who combines strong programming skills with hands-on automation expertise. This role involves developing automation scripts, designing automation frameworks, and contributing independently to automation solutions, with leads delegating tasks and solution directions. The ideal candidate is not a novice: they have solid real-world Python experience and are comfortable working across API automation, automation tooling, and CI/CD-driven environments.


Key Responsibilities

  • Design, develop, and maintain automation scripts and reusable automation frameworks using Python
  • Build and enhance API automation for REST-based services and common backend frameworks
  • Independently own automation tasks and deliver solutions with minimal supervision
  • Collaborate with leads and engineering teams to understand automation requirements
  • Maintain clean, modular, and scalable automation code
  • Occasionally review automation code written by other team members
  • Integrate automation suites with CI/CD pipelines
  • Package and ship automation tools/frameworks using containerization


Required Skills & Qualifications

Python (Core Requirement)

  • Strong, in-depth hands-on experience in Python, including:
  • Object-Oriented Programming (OOP) and modular design
  • Writing reusable libraries and frameworks
  • Exception handling, logging, and debugging
  • Asynchronous concepts, performance-aware coding
  • Unit testing and test automation practices
  • Code quality, readability, and maintainability

API Automation (a minimal sketch follows at the end of this description)

  • Strong experience automating REST APIs
  • Hands-on with common Python API libraries (e.g., requests, httpx, or equivalent)
  • Understanding of API request/response handling, validations, and workflows
  • Familiarity with different backend frameworks and FastAPI

DevOps & Engineering Practices (Must-Have)

  • Strong knowledge of Git
  • Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, or similar)
  • Ability to integrate automation suites into pipelines
  • Hands-on experience with Docker for shipping automation tools/frameworks

Good-to-Have Skills

  • UI automation using Selenium (Page Object Model, cross-browser testing, headless execution)
  • Exposure to Playwright for UI automation
  • Basic working knowledge of Java and/or JavaScript (reading, writing small scripts, debugging)
  • Understanding of API authentication, retries, mocking, and related best practices

Domain Exposure

  • Experience or interest in SaaS platforms
  • Exposure to AI / ML-based platforms is a plus

What We’re Looking For

  • A strong engineering mindset, not just tool usage
  • Someone who can build automation systems, not only execute test cases
  • Comfortable working independently while aligning with technical leads
  • Passion for clean code, scalable automation, and continuous improvement
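As referenced under the API Automation section above, here is a minimal sketch of a requests-based API check of the kind this role would automate. The base URL, endpoint, and expected fields are placeholders.

```python
# Minimal REST API automation sketch: validate status code and response contract.
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test

def test_get_user_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/users/1", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Validate the response contract, not just the status code.
    for field in ("id", "name", "email"):
        assert field in body, f"missing field: {field}"

if __name__ == "__main__":
    test_get_user_returns_expected_fields()
    print("user API contract check passed")
```

Written as a pytest-style test function, it can be dropped into a suite and wired into the CI/CD pipelines listed above.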

Read more
Mumbai, andheri
9 - 15 yrs
₹8L - ₹10L / yr
Manager - Talent Acquisition
Talent Acquisition Specialists
Sourcing
it recruiter
dot net
+12 more

Job Title: Manager - Talent Acquisition

Function: Human Resource Dept.

Reports to: HR Head

Span of control: Team – Talent Acquisition Specialists

Principal Purpose

  • Collaborate with department managers regularly and proactively identify future hiring needs. Attract and recruit candidates at the right cost, time, and quality. Explore and optimize all channels of sourcing, internal and external. Build a talent pipeline for future hiring needs.
  • Drive excellence, experience design and data-driven decision making.

Key Responsibilities

  • Identify talent needs and translate them into an agreed recruitment plan, aimed at the fulfilment of the needs within time, budget and quality constraints.
  • Develop an in-depth knowledge of the job specifications, including the experience, skills and behavioral competencies needed for success in each role.
  • Conduct in-depth vacancy intake discussions leading to agreement with the hiring manager on the proposed recruitment plan.
  • Partner with stakeholders to understand business requirements, educate them on market dynamics and constantly evolve the recruitment process.
  • Create a hiring plan with deliverables, timelines and a formal tracking process.
  • Coordinate, schedule and interview candidates within the framework of the position specification. Possess a strong ability to screen, interview and prepare a candidate slate within an appropriate and consistent timeline.
  • Conduct in-depth interviews of potential candidates, demonstrating the ability to anticipate hiring manager preferences.
  • Build and maintain a network of potential candidates through proactive sourcing/research and ongoing relationship management.
  • Recommend ideas and strategies related to recruitment that will contribute to the growth of the company; implement new processes and fine-tune standard processes for recruiting that fit within the organization's mission to deliver high-value results to our customers.
  • Participate in special projects/initiatives, including assessment of best practices in interviewing techniques, leveraging of internal sources of talent and identification of top performers for senior-level openings.
  • Build an “Employer Brand” in the talent market and drive improvements in the talent acquisition process.
  • Collaborate with marketing and communications teams for integrated branding campaigns.
  • Monitor and improve onboarding satisfaction scores and early attrition rates by tracking feedback from new recruits.
  • Coordinate with HR operations, IT, medical admin, and business functions to ensure Day 1 readiness (system access, ID cards, induction slotting, etc.).
  • Ensure fast TAT, high-quality selection, and seamless onboarding process management.
  • Develop KPI dashboards (time-to-fill, cost-per-hire, quality-of-hire, interview-to-offer ratio) and present insights to leadership.
  • Mentor and develop a high-performing recruitment team; manage performance and succession planning.

Desired Skills

  • Strategic thinker with an analytical mindset.
  • Change agent able to scale processes across multiple teams or geographies.
  • Project management and process optimization abilities.
  • Strong employer branding and candidate experience focus.

Desired Experience & Qualification

  • 10+ years of experience in HR with major exposure to Talent Acquisition, preferably in the IT industry.
  • Bachelor’s or Master’s degree in Human Resources (Mandatory)

Read more
Chennai
5 - 10 yrs
₹15L - ₹30L / yr
Python
FastAPI
React.js
DevOps

🚀 We’re Hiring: Senior Full Stack Developer (Python FastAPI & React.js)


📍 Location: Chennai, Tamil Nadu (On-site)

🧠 Experience: 5+ Years

🕒 Employment Type: Full-time


About Lumera Software Solutions


Lumera Software Solutions is a product development–focused organization building technology solutions for world-class supply chain leaders. Our products help global enterprises optimize, automate, and gain real-time visibility across complex supply chain operations.

We design and build scalable, high-performance software used by industry-leading supply chain organizations worldwide, collaborating closely with global stakeholders to solve real, high-impact problems.

Our teams work closely across engineering, design, and product to deliver reliable, well-architected solutions, while collaborating with a globally distributed team across regions and time zones.


🔥 Why This Role Is Different


This role is designed for senior engineers and architect-leaning developers who want to take technical ownership and influence how complex, enterprise-grade systems are built.

You will be building mission-critical product features used by world-class supply chain leaders, working on problems that demand strong system design, scalability, and long-term architectural thinking.

This is a core product development role, not a maintenance or support-driven position.


💼 What You’ll Do

  • Act as a senior individual contributor, owning complex, backend-heavy features end-to-end
  • Design and develop scalable, production-grade backend services and APIs using Python & FastAPI (a minimal sketch follows this list)
  • Build and maintain front-end components using React.js, with primary focus on backend-driven functionality
  • Participate in system design, architecture reviews, and technical decision-making
  • Apply AI-assisted development tools to improve development speed, testing, and code quality
  • Collaborate with a globally distributed engineering and product team while working from our Chennai office
  • Work closely with databases to design efficient schemas and optimize queries
  • Apply strong understanding of DevOps concepts including CI/CD, containerization, deployment, monitoring, and observability
  • Mentor engineers through code reviews and design discussions
  • Drive improvements in performance, reliability, security, and scalability
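As a small, hedged illustration of the FastAPI work referenced in this list, the sketch below exposes a single read endpoint with a Pydantic response model. The resource name, fields, and in-memory store are placeholders standing in for a real persistence layer.

```python
# Minimal FastAPI sketch: one typed read endpoint backed by an in-memory store.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="shipment-service")   # hypothetical service name

class Shipment(BaseModel):
    shipment_id: str
    status: str

# In-memory stand-in for a real database layer.
_SHIPMENTS = {"S1": Shipment(shipment_id="S1", status="IN_TRANSIT")}

@app.get("/shipments/{shipment_id}", response_model=Shipment)
def get_shipment(shipment_id: str) -> Shipment:
    shipment = _SHIPMENTS.get(shipment_id)
    if shipment is None:
        raise HTTPException(status_code=404, detail="shipment not found")
    return shipment
```

Locally this would be served with an ASGI server such as uvicorn (for example, `uvicorn main:app --reload`).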


🛠️ What We’re Looking For

  • 5+ years of hands-on experience in full stack roles with a backend-heavy focus
  • Strong proficiency in Python with FastAPI, including API design and performance considerations
  • Solid experience building and consuming RESTful APIs at scale
  • Good working experience with React.js, JavaScript (ES6+), HTML, and CSS
  • Strong understanding of backend architecture, system design, and data modeling
  • Good understanding of DevOps concepts such as CI/CD pipelines, Docker, cloud deployments, and monitoring
  • Ability to make sound technical trade-offs with long-term maintainability in mind
  • Experience mentoring developers or leading by technical example
  • Comfortable working in a focused, work-from-office (WFO) product development environment
  • Educational Qualification: B.E / B.Tech (Computer Science or related fields)


Nice to Have (Not Mandatory)

  • Experience with cloud platforms (AWS / Azure / GCP)
  • Familiarity with Docker and containerization
  • Experience setting up or working with CI/CD pipelines
  • Exposure to product development teams or building in-house products
  • Exposure to AI/ML concepts, data pipelines, or integrating AI services into products
  • Experience using AI developer tools to enhance productivity


🎯 What We Offer

  • Competitive salary based on skills and impact
  • High ownership and visibility of your work
  • Fast learning curve with real technical challenges
  • A collaborative, no-nonsense engineering culture
  • Long-term growth as the company scales


✨ Lumera Software Solutions is an equal opportunity employer. We value talent, ownership, and diversity.

Read more
Mphasis
Agency job
via Thomasmount Consulting by Shirin Shahana
Bengaluru (Bangalore)
10 - 18 yrs
₹25L - ₹33L / yr
technical program manager
Systems Development Life Cycle (SDLC)
Databases
DevOps

About the Role

We are looking for a Senior Program Manager (DevX) to lead enterprise-scale initiatives that improve developer productivity, engineering workflows, and platform efficiency. This role is central to our software transformation journey, ensuring developers have the right tools, platforms, and processes to deliver at scale.

What You’ll Do

  • Lead end-to-end DevX and Developer Productivity programs
  • Drive engineering efficiency across SDLC, CI/CD, and platform tooling
  • Partner with Engineering, Platform, and Product teams on tooling and transformation roadmaps
  • Standardize and optimize developer, database, and SDLC tools
  • Reduce friction from code → build → test → deploy
  • Establish program governance, metrics, and executive reporting
  • Manage risks, dependencies, and large-scale change initiatives

What We’re Looking For

  • 10+ years in Technical Program Management / Engineering Program Management
  • Proven experience driving DevX, Platform, or Engineering Transformation programs
  • Strong understanding of Agile, DevOps, and modern SDLC practices
  • Experience working with large, cross-functional engineering teams
  • Excellent stakeholder and executive communication skills

Tools & Platforms

  • Developer Tools: GitHub, VS Code, IntelliJ, Postman
  • CI/CD & SDLC: Azure DevOps, Jenkins, GitHub Actions, JIRA, Confluence, SonarQube
  • Platforms: Docker, Kubernetes
  • Database Tools: SSMS, DBeaver, DataGrip, Azure Data Studio, pgAdmin

Nice to Have

  • Platform or Cloud Engineering background
  • Experience in large-scale software or engineering transformations
  • Ex-Engineer or Technical background


Read more
MIC Global

at MIC Global

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 15 yrs
Upto ₹50L / yr (Varies)
Windows Azure
DevOps
Infrastructure

MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more - backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.


We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.


Role Overview

As Lead – Product Support & IT Infrastructure, you will oversee the technology backbone that supports MIC Global’s products, data operations, and global business continuity. You will manage all aspects of IT infrastructure, system uptime, cybersecurity, and support operations ensuring that MIC’s platforms remain reliable, secure, and scalable.


This is a pivotal, hands-on leadership role, blending strategic oversight with operational execution. The ideal candidate combines strong technical expertise with a proactive, service-oriented mindset to support both internal teams and external partners.


Key Responsibilities


Infrastructure & Operations

  • Oversee all IT infrastructure and operations, including database administration, hosting environments, and production systems.
  • Ensure system reliability, uptime, and performance across global deployments.
  • Align IT operations with Agile development cycles and product release plans.
  • Manage the IT service desk (MiTracker), ensuring timely and high-quality resolution of incidents.
  • Drive continuous improvement in monitoring, alerting, and automation processes.
  • Lead the development, testing, and maintenance of Disaster Recovery (DR) and Business Continuity Plans (BCP).
  • Manage vendor relationships, IT budgets, and monthly cost reporting.

Security & Compliance

  • Lead cybersecurity efforts across the organization, developing and implementing comprehensive information security strategies.
  • Monitor, respond to, and mitigate security incidents in a timely manner.
  • Maintain compliance with industry standards and data protection regulations (e.g., SOC 2, GDPR, ISO27001).
  • Prepare regular reports on security incidents, IT costs, and system performance for review with the Head of Technology.

Team & Process Management

  • Deliver exceptional customer service by ensuring internal and external technology users are supported effectively.
  • Implement strategies to ensure business continuity during absences — including defined backup responsibilities and robust process documentation.
  • Promote knowledge sharing and operational excellence across Product Support and IT teams.
  • Build and maintain a culture of accountability, responsiveness, and cross-team collaboration.

Required Qualifications

  • Azure administration experience and qualifications, such as Microsoft Certified: Azure Administrator Associate or Azure Solutions Architect Expert.
  • Strong SQL Server DBA capabilities and experience, including performance tuning, high availability configurations, and certifications like Microsoft Certified: Azure Database Administrator Associate.
  • 8+ years of experience in IT infrastructure management, DevOps, or IT operations; experience within product-focused companies (fintech, insurtech, or SaaS environments) is essential.
  • Proven experience leading service desk or technical support functions in a 24/7 uptime environment.
  • Deep understanding of cloud infrastructure (AWS/Azure/GCP), database administration, and monitoring tools (e.g., Grafana, Datadog, CloudWatch).
  • Hands-on experience with security frameworks, incident response, and business continuity planning.
  • Strong analytical, problem-solving, and communication skills, with the ability to work cross-functionally.
  • Demonstrated leadership in managing teams and implementing scalable IT systems and processes.

Benefits

  • 33 days of paid holiday
  • Competitive compensation well above market average
  • Work in a high-growth, high-impact environment with passionate, talented peers
  • Clear path for personal growth and leadership development.
Read more
Deqode

at Deqode

Samiksha Agrawal
Posted by Samiksha Agrawal
Pune
7 - 10 yrs
₹7L - ₹18L / yr
SRE
DevOps
Terraform
Kubernetes
Docker

Role: DevOps Engineer

Experience: 7+ Years

Location: Pune / Trivandrum

Work Mode: Hybrid

𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:

  • Drive CI/CD pipelines for microservices and cloud architectures
  • Design and operate cloud-native platforms (AWS/Azure)
  • Manage Kubernetes/OpenShift clusters and containerized applications (a minimal sketch follows this list)
  • Develop automated pipelines and infrastructure scripts
  • Collaborate with cross-functional teams on DevOps best practices
  • Mentor development teams on continuous delivery and reliability
  • Handle incident management, troubleshooting, and root cause analysis
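To ground the cluster-management bullet above, here is a minimal sketch using the official Kubernetes Python client to list pods that are not in a healthy phase, the kind of quick check that feeds troubleshooting and incident response. It assumes a reachable kubeconfig (or in-cluster configuration) and read access across namespaces.

```python
# Minimal sketch: report pods that are not Running or Succeeded across all namespaces.
from kubernetes import client, config

config.load_kube_config()          # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```

The same client underpins richer automation, such as collecting events or pod logs during root cause analysis.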

𝐌𝐚𝐧𝐝𝐚𝐭𝐨𝐫𝐲 𝐒𝐤𝐢𝐥𝐥𝐬:

  • 7+ years in DevOps/SRE roles
  • Strong experience with AWS or Azure
  • Hands-on with Docker, Kubernetes, and/or OpenShift
  • Proficiency in Jenkins, Git, Maven, JIRA
  • Strong scripting skills (Shell, Python, Perl, Ruby, JavaScript)
  • Solid networking knowledge and troubleshooting skills
  • Excellent communication and collaboration abilities

𝐏𝐫𝐞𝐟𝐞𝐫𝐫𝐞𝐝 𝐒𝐤𝐢𝐥𝐥𝐬:

  • Experience with Helm, monitoring tools (Splunk, Grafana, New Relic, Datadog)
  • Knowledge of Microservices and SOA architectures
  • Familiarity with database technologies


Read more
OpsTree Solutions

at OpsTree Solutions

Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
4yrs+
Upto ₹30L / yr (Varies)
Python
Amazon Web Services (AWS)
EKS
Kubernetes
DevOps
+3 more

Key Responsibilities:

  • Lead the architecture, design, and implementation of scalable, secure, and highly available AWS infrastructure leveraging services such as VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, and Secrets Manager.
  • Develop and maintain reusable, modular IaC frameworks using Terraform and Terragrunt, and mentor team members on IaC best practices.
  • Drive automation of infrastructure provisioning, deployment workflows, and routine operations through advanced Python scripting.
  • Take ownership of cost optimization strategy by analyzing usage patterns, identifying savings opportunities, and implementing guardrails across multiple AWS environments (a usage-analysis sketch follows this list).
  • Define and enforce infrastructure governance, including secure access controls, encryption policies, and secret management mechanisms.
  • Collaborate cross-functionally with development, QA, and operations teams to streamline and scale CI/CD pipelines for containerized microservices on Kubernetes (EKS).
  • Establish monitoring, alerting, and observability practices to ensure platform health, resilience, and performance.
  • Serve as a technical mentor and thought leader, guiding junior engineers and shaping cloud adoption and DevOps culture across the organization.
  • Evaluate emerging technologies and tools, recommending improvements to enhance system performance, reliability, and developer productivity.
  • Ensure infrastructure complies with security, regulatory, and operational standards, and drive initiatives around audit readiness and compliance.
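For the cost optimization responsibility flagged above, the following is a minimal sketch of the kind of usage analysis involved, pulling monthly spend per service from the AWS Cost Explorer API via boto3. The date range is illustrative, and Cost Explorer must be enabled on the account.

```python
# Minimal sketch: monthly unblended cost per AWS service via Cost Explorer.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},   # illustrative range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):.2f}")
```

Reports like this typically feed tagging guardrails, budgets, and rightsizing decisions rather than standing alone.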

Mandatory Skills & Experience:

  • AWS (Advanced Expertise): VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, Secrets Management
  • Infrastructure as Code: Extensive experience with Terraform and Terragrunt, including module design and IaC strategy
  • Strong hold in Kubernetes
  • Scripting & Automation: Proficient in Python, with a strong track record of building tools, automating workflows, and integrating cloud services
  • Cloud Cost Optimization: Proven ability to analyze cloud spend and implement sustainable cost control strategies
  • Leadership: Experience in leading DevOps/infrastructure teams or initiatives, mentoring engineers, and making architecture-level decisions

Nice to Have:

  • Experience designing or managing CI/CD pipelines for Kubernetes-based environments
  • Backend development background in Python (e.g., FastAPI, Flask)
  • Familiarity with monitoring/observability tools such as Prometheus, Grafana, CloudWatch
  • Understanding of system performance tuning, capacity planning, and scalability best practices
  • Exposure to compliance standards such as SOC 2, HIPAA, or ISO 27001
Read more
AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
AWS CloudFormation
DevOps
MLOps
+19 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus (a minimal sketch follows this list).
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
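As a hedged illustration of the ML observability bullet above, the sketch below publishes a model-drift style metric to CloudWatch so it can be alarmed on or graphed in Grafana. The namespace, metric name, and drift score are placeholders; the actual drift statistic (PSI, KS, or similar) would be computed upstream.

```python
# Minimal sketch: push a custom drift metric to CloudWatch for alerting/dashboards.
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_drift_score(model_name: str, drift_score: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="MLObservability",                        # hypothetical namespace
        MetricData=[{
            "MetricName": "FeatureDriftScore",
            "Dimensions": [{"Name": "ModelName", "Value": model_name}],
            "Value": drift_score,
            "Unit": "None",
        }],
    )

publish_drift_score("demand-forecaster", 0.12)  # drift statistic computed upstream
```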

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field. 
Read more
Marble X
Manpreet Kaur
Posted by Manpreet Kaur
Mumbai
3 - 10 yrs
₹3L - ₹22L / yr
Shell Scripting
Python
MLOps
Jenkins
Git
+4 more


Skills  - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance


Responsibilities:

1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models

2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes

3. Automate the training, testing and deployment processes for machine learning models

4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability

5. Implement best practices for version control, model reproducibility and governance (a tracking sketch follows this list)

6. Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness

7. Troubleshoot and resolve issues related to model deployment and performance

8. Ensure compliance with security and data privacy standards in all MLOps activities

9. Keep up to date with the latest MLOps tools, technologies and trends

10. Provide support and guidance to other team members on MLOps practices
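For the reproducibility and governance point above (item 5), here is a minimal MLflow tracking sketch showing how runs, parameters, metrics, and artifacts can be versioned. The tracking URI, experiment name, and values are placeholders, and the artifact file is assumed to exist locally.

```python
# Minimal MLflow tracking sketch: record a run with params, metrics, and an artifact.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")   # hypothetical tracking server
mlflow.set_experiment("credit-risk-model")               # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("auc", 0.87)
    mlflow.log_artifact("model_card.md")                  # assumes this file exists locally
```

Pairing run tracking like this with a model registry and Git-pinned training code is what makes a given model version reproducible later.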


Required skills and experience:

• 3-10 years of experience in MLOps, DevOps or a related field

• Bachelor’s degree in computer science, Data Science or a related field

• Strong understanding of machine learning principles and model lifecycle management

• Experience in Jenkins pipeline development

• Experience in automation scripting



Read more
Financial Services


Agency job
via WITS Innovation Lab by kanchan Tigga
The Capital, Bandra (East), Mumbai
2.5 - 10 yrs
₹5L - ₹22L / yr
Jenkins
Python
Shell Scripting
MLOps
DevOps
+9 more

Responsibilities


  • Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
  • Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
  • Automate the training, testing and deployment processes for machine learning models
  • Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability
  • Implement best practices for version control, model reproducibility and governance
  • Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
  • Troubleshoot and resolve issues related to model deployment and performance
  • Ensure compliance with security and data privacy standards in all MLOps activities
  • Keep up to date with the latest MLOps tools, technologies and trends
  • Provide support and guidance to other team members on MLOps practices


Required Skills And Experience


  • 3-10 years of experience in MLOps, DevOps or a related field
  • Bachelor’s degree in Computer Science, Data Science or a related field
  • Strong understanding of machine learning principles and model lifecycle management
  • Experience in Jenkins pipeline development
  • Experience in automation scripting


Read more
US based large Biotech company with WW operations.


Agency job
Remote only
5 - 10 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
CI/CD
DevSecOps
Terraform
Ansible
+9 more

Senior Cloud Engineer Job Description

Position Title: Senior Cloud Engineer -- AWS [LONG TERM-CONTRACT POSITION]

Location: Remote [REQUIRES WORKING IN CST TIME ZONE]


Position Overview

The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.


Key Responsibilities

  • Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
  • Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
  • Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
  • Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
  • Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
  • Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
  • Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
  • Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
  • Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
  • Stay current with emerging cloud technologies, trends, and best practices

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
  • 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
  • Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
  • Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
  • Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
  • Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
  • Experience with cloud security, governance, and compliance frameworks
  • Excellent analytical, troubleshooting, and root cause analysis skills
  • Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
  • Ability to work independently, manage multiple priorities, and lead complex projects to completion


Preferred Qualifications

  • Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
  • Experience with cloud cost optimization and FinOps practices
  • Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
  • Exposure to cloud database technologies (SQL, NoSQL, managed database services)
  • Knowledge of cloud migration strategies and hybrid cloud architectures


Read more
IT Services & Staffing Solutions Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
12 - 14 yrs
₹29L - ₹38L / yr
Amazon Web Services (AWS)
DevOps
Terraform
Troubleshooting
Amazon VPC
+16 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Hands-On AWS Cloud Engineering / DevOps Profile
  • Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
  • Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Mandatory (Infrastructure as a code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
  • Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
  • Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
  • Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
  • Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills


ROLE & RESPONSIBILITIES:

We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.


KEY RESPONSIBILITIES:

  • Operate and support AWS production environments across multiple accounts
  • Manage infrastructure using Terraform and support CI/CD pipelines
  • Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
  • Build and manage Docker images and push to Amazon ECR
  • Monitor systems using CloudWatch and third-party tools; respond to incidents
  • Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
  • Assist with cost optimization, tagging, and governance standards
  • Automate operational tasks using Python, Lambda, and Systems Manager
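Tying directly to the automation bullet above, here is a minimal sketch of a Python Lambda handler that reports EC2 instances missing a required governance tag. The tag key is a placeholder, the function assumes an execution role with ec2:DescribeInstances permission, and a production version would paginate results and publish findings (for example, to SNS) instead of printing them.

```python
# Minimal Lambda sketch: find EC2 instances missing a required governance tag.
import boto3

REQUIRED_TAG = "CostCenter"   # hypothetical governance tag key

def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    untagged = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                untagged.append(instance["InstanceId"])
    print(f"instances missing {REQUIRED_TAG}: {untagged}")
    return {"untagged_count": len(untagged)}
```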


IDEAL CANDIDATE:

  • Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Experience with Terraform and Git-based workflows
  • Hands-on experience with Kubernetes / EKS
  • Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Scripting experience in Python or Bash
  • Understanding of monitoring, incident management, and cloud security basics


NICE TO HAVE:

  • AWS Associate-level certifications
  • Experience with Karpenter, Prometheus, New Relic
  • Exposure to FinOps and cost optimization practices
Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
6 - 8 yrs
₹16L - ₹22L / yr
C#
.NET
ASP.NET
SQL
SQL server
+17 more

JOB DETAILS:

Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering

Industry: Global digital transformation solutions provider

Work Mode: Hybrid

Salary: Best in Industry

Experience: 6-8 years

Location: Hyderabad 


Job Description:

• Experience in Microsoft Web development technologies such as Web API and SOAP XML

• C#/.NET, .NET Core and ASP.NET Web application experience

• Cloud-based development experience in AWS or Azure

• Knowledge of cloud architecture and technologies

• Support/Incident management experience in a 24/7 environment

• SQL Server and SSIS experience

• DevOps experience with GitHub and Jenkins CI/CD pipelines or similar

• Windows Server 2016/2019+ and SQL Server 2019+ experience

• Experience of the full software development lifecycle

• You will write clean, scalable code, with a view towards design patterns and security best practices

• Understanding of Agile methodologies, working within the Scrum framework

• AWS knowledge


Must-Haves

C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)

.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)

Mandatory skills: .NET Core with Azure or AWS experience

Notice period - 0 to 15 days only

Location: Hyderabad

Virtual Drive - 17th Jan

Read more
Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹42L - ₹45L / yr
DevOps
Python
Shell Scripting
Infrastructure
Terraform
+16 more

JOB DETAILS:

- Job Title: Senior Devops Engineer 2

- Industry: Ride-hailing

- Experience: 5-7 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2.   Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Own end-to-end infrastructure right from non-prod to prod environment, including self-managed DBs

4.   Candidate must have experience in database migration from scratch 

5.   Must have a firm hold on the container orchestration tool Kubernetes

6.   Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

7.   Understanding programming languages like GO/Python, and Java

8.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

9.   Working experience on Cloud platform - AWS

10. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding programming languages like GO/Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as

● 150+ microservices with event-driven architecture across different tech stacks Golang/ java/ node

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Read more
Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
Python
Shell Scripting
Kubernetes
Terraform
+15 more

JOB DETAILS:

- Job Title: Lead DevOps Engineer

- Industry: Ride-hailing

- Experience: 6-9 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2.   Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)

4.   Own end-to-end infrastructure right from non-prod to prod environment, including self-managed DBs

5.   Candidate must have hands-on experience in database migration from scratch

6.   Must have a firm hold on the container orchestration tool Kubernetes

7.   Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

8.   Understanding programming languages like GO/Python, and Java

9.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

10.   Working experience on Cloud platform -AWS

11. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down the silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding programming languages like GO/Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as

● 150+ microservices with event-driven architecture across different tech stacks Golang/ java/ node

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Read more
Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹34L - ₹37L / yr
DevOps
Python
Shell Scripting
Kubernetes
Monitoring
+18 more

JOB DETAILS:

- Job Title: Senior Devops Engineer 1

- Industry: Ride-hailing

- Experience: 4-6 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1. Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.

2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).

3. Candidate must have solid experience with Kubernetes.

4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Add on- Prometheus & Grafana etc.

5. Candidate must be an individual contributor with strong ownership.

6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.

7. Candidate must have working knowledge of Go/Python and Java.

8. Candidate should have working experience on Cloud platform - AWS

9. Candidate should have a minimum of 1.5 years of stability per organization and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will work on building and operating infrastructure at scale, designing and implementing a variety of tools that enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this is the role for you.

 

Job Responsibilities:

- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.

- Understand the needs of stakeholders and convey them to developers.

- Work on ways to automate and improve development and release processes (see the sketch after this list).

- Identify technical problems and develop software updates and fixes.

- Work with software developers to ensure that development follows established processes and works as intended.

- Do what it takes to keep the uptime above 99.99%.

- Understand DevOps philosophy and evangelize the principles across the organization.

- Strong communication and collaboration skills to break down the silos
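
As a sketch of the release-automation work described above, the snippet below uses the official Kubernetes Python client to roll a new image out to an existing Deployment, the programmatic equivalent of kubectl set image. The deployment, container, and registry names are placeholders, and it assumes kubeconfig credentials are available:

    # pip install kubernetes
    from kubernetes import client, config

    def set_image(deployment: str, container: str, image: str, namespace: str = "default") -> None:
        """Patch a Deployment with a new container image and let Kubernetes roll it out."""
        config.load_kube_config()        # use load_incluster_config() when running inside the cluster
        apps = client.AppsV1Api()
        patch = {"spec": {"template": {"spec": {"containers": [{"name": container, "image": image}]}}}}
        apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

    if __name__ == "__main__":
        # Placeholder service and registry; in practice these would come from the CI pipeline.
        set_image("orders-api", "orders-api", "registry.example.com/orders-api:1.4.2")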

 

Job Requirements:

- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.

- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.

- Strong background in operating systems like Linux.

- Understands the container orchestration tool Kubernetes.

- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus, Grafana, etc. are an add-on.

- Problem-solving attitude, and ability to write scripts using any scripting language.

- Understanding programming languages like GO/Python, and Java.

- Basic understanding of databases and middleware like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

- Should be able to take ownership of tasks and be responsible.

- Good communication skills

 

Bengaluru (Bangalore)
4 - 6 yrs
₹30L - ₹37L / yr
DevOps

1. Candidate must be from a product-based company with experience handling large-scale production traffic.

2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).

3. Candidate must have solid experience with Kubernetes.

4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus, Grafana, etc. are an add-on.

5. Candidate must be an individual contributor with strong ownership.

6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana (see the query sketch after this list).

7. Candidate must have working knowledge of Go/Python and Java.

8. Candidate should have working experience on Cloud platform - AWS

9. Candidate should have a minimum of 1.5 years of stability per organization and a clear reason for relocation.
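
For the observability criterion above, here is a minimal query sketch against the Prometheus HTTP API. It assumes a conventional http_requests_total counter and a placeholder Prometheus address; the same PromQL expression could back a Grafana panel or an alert rule:

    # pip install requests
    import requests

    PROM_URL = "http://prometheus:9090"      # placeholder address

    def error_ratio(window: str = "5m") -> float:
        """Fraction of 5xx responses over the window, computed by Prometheus."""
        query = (
            'sum(rate(http_requests_total{status=~"5.."}[%s])) / '
            'sum(rate(http_requests_total[%s]))' % (window, window)
        )
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else 0.0

    if __name__ == "__main__":
        print(f"5xx error ratio over 5m: {error_ratio():.4%}")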

Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps

1. Candidate must be from a product-based company with experience handling large-scale production traffic.


2. Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant


3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)


4. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs


5. Candidate must have hands-on experience performing a database migration from scratch


6. Must have a firm hold on the container orchestration tool Kubernetes


7. Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet


8. Understanding programming languages like GO/Python, and Java


9. Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.


10. Working experience on Cloud platform -AWS


11. Candidate should have a minimum of 1.5 years of stability per organization and a clear reason for relocation.

Zolvit (formerly Vakilsearch)

at Zolvit (formerly Vakilsearch)

Lakshmi J
Posted by Lakshmi J
Chennai
2 - 4 yrs
₹10L - ₹16L / yr
DevOps
Linux administration
Unix administration
Shell Scripting
CI/CD
+5 more

We are looking for a passionate DevOps Engineer who can support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and should be able to clearly articulate how they work. Knowledge of shell scripting and security is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications built on Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.



Responsibilities and Accountabilities:

  • As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infra components of VakilSearch’s product, which are hosted in cloud services and an on-prem facility
  • Design and build tools and frameworks that support deploying and managing our platform, and explore new tools, technologies, and processes to improve speed, efficiency, and scalability
  • Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
  • Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
  • Resolve incidents as escalated from Monitoring tools and Business Development Team
  • Implement and follow security guidelines, both policy and technology to protect our data
  • Identify root cause for issues and develop long-term solutions to fix recurring issues and Document it
  • Comfortable performing production operations activities, even at night, if required
  • Ability to automate recurring tasks with scripts to increase velocity and quality (see the sketch after this list)
  • Ability to manage and deliver multiple project phases at the same time
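
As an illustration of automating recurring tasks such as the weekly sanity checks mentioned later in this listing, here is a minimal Python sketch. The endpoint URLs and the disk threshold are placeholders; real checks would cover whatever the runbook defines:

    #!/usr/bin/env python3
    """Ping service health endpoints and flag low disk space; exit non-zero on any problem."""
    import shutil
    import requests                          # pip install requests

    ENDPOINTS = {                            # placeholder URLs
        "app": "https://app.example.com/healthz",
        "api": "https://api.example.com/healthz",
    }
    DISK_THRESHOLD = 0.85                    # warn when a mount is more than 85% full

    def check_http() -> list:
        failures = []
        for name, url in ENDPOINTS.items():
            try:
                if requests.get(url, timeout=5).status_code != 200:
                    failures.append(f"{name}: non-200 from {url}")
            except requests.RequestException as exc:
                failures.append(f"{name}: {exc}")
        return failures

    def check_disk(path: str = "/") -> list:
        usage = shutil.disk_usage(path)
        used = usage.used / usage.total
        return [f"{path} is {used:.0%} full"] if used > DISK_THRESHOLD else []

    if __name__ == "__main__":
        problems = check_http() + check_disk()
        for p in problems:
            print("WARN:", p)
        raise SystemExit(1 if problems else 0)

Scheduled from cron or a CI job, a script like this turns a manual checklist into a repeatable, auditable task.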

I Qualification(s): 

  • Experience in working with Linux Server, DevOps tools, and Orchestration tools 
  • Linux, AWS, GCP, Azure, CompTIA+, and any other certification are a value-add 

II Experience Required in DevOps Aspects:

  • Length of Experience: Minimum 1-4 years of experience
  • Nature of Experience: 
  • Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
  • Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
  • Hands-on experience in any of the configuration management IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet is a value add ]
  • Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
  • Experience with containerization and orchestration tools like Docker and Kubernetes [ Docker Swarm is a value add ]
  • Good scripting skills in at least one interpreted language - Shell/Bash scripting or Ruby/Python/Perl
  • Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
  • Good at Version Control & source code management systems like GitHub, GIT
  • Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
  • Experience in Web Server Nginx, and Apache
  • Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
  • Knowledge in Puma, Unicorn, Gunicorn & Yarn
  • Hands-on VMWare ESXi/Xencenter deployments is a value add
  • Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
  • Deploying, Configuring, and Maintaining Linux server systems ON premises and off-premises
  • Code Quality like SonarQube is a value-add
  • Test Automation like Selenium, JMeter, and JUnit is a value-add
  • Experience in Heroku and OpenStack is a value-add 
  • Experience in Identifying Inbound and Outbound Threats and resolving it
  • Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages  
  • Documenting the Security fix for future use
  • Establish cross-team collaboration with security built into the software development lifecycle 
  • Forensics and Root Cause Analysis skills are mandatory 
  • Weekly Sanity Checks of the on-prem and off-prem environment 

 

III Skill Set & Personality Traits required:

  • An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
  • Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers


IV Age Group: 21 – 36 Years


V Cost to the Company: As per industry standards


Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹50L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconPython
skill iconJava
Data engineering
+10 more

Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)

Experience : 5 to 10 Years

Location : Bengaluru, India

Employment Type : Full-Time | Onsite


Role Overview :

We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.

In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.


Mandatory Skills :

Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).


Key Responsibilities :

  • Architect, design, and develop scalable full-stack applications for data and AI-driven products.
  • Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
  • Deploy, integrate, and scale ML/AI models in production environments (see the sketch after this list).
  • Drive system design, architecture discussions, and API/interface standards.
  • Ensure engineering best practices across code quality, testing, performance, and security.
  • Mentor and guide junior developers through reviews and technical decision-making.
  • Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
  • Monitor, diagnose, and optimize performance issues across the application stack.
  • Maintain comprehensive technical documentation for scalability and knowledge-sharing.
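
As a hedged illustration of serving a trained model in production (one of the responsibilities above), here is a minimal FastAPI sketch. It assumes a scikit-learn-style model saved with joblib; the model path, feature shape, and endpoint name are placeholders:

    # pip install fastapi uvicorn joblib scikit-learn
    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model.joblib")           # placeholder path to a trained model

    class Features(BaseModel):
        values: list[float]                       # flat feature vector; the shape is an assumption

    @app.post("/predict")
    def predict(features: Features) -> dict:
        prediction = model.predict([features.values])[0]
        return {"prediction": float(prediction)}

    # Run with: uvicorn serve:app --host 0.0.0.0 --port 8000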

Required Skills & Experience :

  • Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
  • Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
  • Full Stack Proficiency :
  • Front-end : React / Angular / Vue.js
  • Back-end : Node.js / Python / Java
  • Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
  • AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
  • Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
  • Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
  • Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).

Soft Skills :

  • Excellent communication and cross-functional collaboration skills.
  • Strong analytical mindset with structured problem-solving ability.
  • Self-driven with ownership mentality and adaptability in fast-paced environments.

Preferred Qualifications (Bonus) :

  • Experience deploying distributed, large-scale ML or data-driven platforms.
  • Understanding of data governance, privacy, and security compliance.
  • Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
  • Experience working in Agile environments (Scrum/Kanban).
  • Active open-source contributions or a strong GitHub technical portfolio.
Bengaluru (Bangalore)
8 - 9 yrs
₹20L - ₹25L / yr
DevOps

Job Title: Senior DevOps Engineer

Experience: 8+ Years

Joining: Immediate Joiner

Location: Bangalore (Onsite/Hybrid – as applicable)

Job Description:

We are looking for a highly experienced Senior DevOps Engineer with 8+ years of hands-on experience to join our team immediately. The ideal candidate will be responsible for designing, implementing, and managing scalable, secure, and highly available infrastructure.

Key Responsibilities:

  • Design, build, and maintain CI/CD pipelines for application deployment
  • Manage cloud infrastructure (AWS/Azure/GCP) and optimize cost and performance
  • Automate infrastructure using Infrastructure as Code (Terraform/CloudFormation)
  • Manage containerized applications using Docker and Kubernetes
  • Monitor system performance, availability, and security (see the alarm sketch after this list)
  • Collaborate closely with development, QA, and security teams
  • Troubleshoot production issues and perform root cause analysis
  • Ensure high availability, disaster recovery, and backup strategies
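
As a sketch of the monitoring responsibility above, the snippet below creates a CloudWatch alarm with boto3. The region, alarm name, Auto Scaling group, and SNS topic ARN are placeholders:

    # pip install boto3
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")     # placeholder region

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-web-asg",                                     # placeholder name
        AlarmDescription="CPU above 80% for 10 minutes on the web ASG",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder SNS topic
    )

In practice an alarm like this would usually be defined in Terraform or CloudFormation rather than ad hoc, but the API call shows the moving parts: metric, threshold, evaluation window, and notification target.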

Required Skills:

  • 8+ years of experience in DevOps / Site Reliability Engineering
  • Strong expertise in Linux/Unix administration
  • Hands-on experience with AWS / Azure / GCP
  • CI/CD tools: Jenkins, GitLab CI, GitHub Actions, Azure DevOps
  • Containers & orchestration: Docker, Kubernetes
  • Infrastructure as Code: Terraform, CloudFormation, Ansible
  • Monitoring tools: Prometheus, Grafana, ELK, CloudWatch
  • Strong scripting skills (Bash, Python)
  • Experience with security best practices and compliance

Good to Have:

  • Experience with microservices architecture
  • Knowledge of DevSecOps practices
  • Cloud certifications (AWS/Azure/GCP)
  • Experience in high-traffic production environments

Why Join Us:

  • Opportunity to work on scalable, enterprise-grade systems
  • Collaborative and growth-oriented work environment
  • Competitive compensation and benefits
  • Immediate joiners preferred.
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹15L / yr
Google Cloud Platform (GCP)
DevOps

Job Description

We are seeking a DevOps Engineer with 3+ years of experience and strong expertise in Google Cloud Platform (GCP) to design, automate, and manage scalable cloud infrastructure. The role involves building CI/CD pipelines, implementing Infrastructure as Code, and ensuring high availability, security, and performance of cloud-native applications.

Key Responsibilities

  • Design, deploy, and manage GCP infrastructure using best practices
  • Implement Infrastructure as Code (IaC) using Terraform
  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
  • Manage containerized workloads using Docker and Kubernetes (GKE)
  • Configure and manage GCP networking (VPCs, Subnets, VPN, Cloud Interconnect, Load Balancers, Firewall rules)
  • Implement monitoring and logging using Cloud Monitoring and Cloud Logging (see the sketch after this list)
  • Ensure high availability, scalability, and security of applications
  • Troubleshoot production issues and perform root cause analysis
  • Collaborate with development and product teams to improve deployment and reliability
  • Optimize cloud cost, performance, and security
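
As a minimal sketch of the Cloud Logging part of the responsibility above, the snippet below writes a structured deployment event with the google-cloud-logging client. It assumes Application Default Credentials; the log name and payload fields are placeholders:

    # pip install google-cloud-logging
    from google.cloud import logging as gcp_logging

    client = gcp_logging.Client()                 # uses Application Default Credentials
    logger = client.logger("deploy-events")       # placeholder log name

    logger.log_struct(
        {"event": "deployment", "service": "checkout", "version": "1.8.0", "status": "success"},
        severity="INFO",
    )
    print("Structured deployment event written to Cloud Logging.")

Structured entries like this can then be filtered in Cloud Logging or turned into log-based metrics and alerts in Cloud Monitoring.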

Required Skills & Qualifications

  • 3+ years of experience as a DevOps / Cloud Engineer
  • Strong hands-on experience with Google Cloud Platform (GCP)
  • Experience with Terraform for GCP resource provisioning
  • CI/CD experience with Jenkins / GitHub Actions
  • Hands-on experience with Docker and Kubernetes (GKE)
  • Good understanding of Linux and shell scripting
  • Knowledge of cloud networking concepts (TCP/IP, DNS, Load Balancers)
  • Experience with monitoring, logging, and alerting

Good to Have

  • Experience with Hybrid or Multi-cloud architectures
  • Knowledge of DevSecOps practices
  • Experience with SRE concepts
  • GCP certifications (Associate Cloud Engineer / Professional DevOps Engineer)

Why Join Us

  • Work on modern GCP-based cloud infrastructure
  • Opportunity to design and own end-to-end DevOps pipelines
  • Learning and growth opportunities in cloud and automation


AI Powered Software Development (Product Company)

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Hyderabad, Pune, Mumbai, India
3 - 8 yrs
₹15L - ₹30L / yr
DevOps
Reliability engineering
CloudOps
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
+20 more

🚀 RECRUITING BOND HIRING


Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)


⚡ THIS IS NOT A MONITORING ROLE


THIS IS A COMMAND ROLE

You don’t watch dashboards.

You control outcomes.


You don’t react to incidents.

You eliminate them before they escalate.


This role powers an AI-driven SaaS + IoT platform where:

---> Uptime is non-negotiable

---> Latency is hunted

---> Failures are never allowed to repeat


Incidents don’t grow.

Problems don’t hide.

Uptime is enforced.


🧠 WHAT YOU’LL OWN

(Real Work. Real Impact.)


🔍 Total Observability

---> Real-time visibility across cloud, application, database & infrastructure

---> High-signal dashboards (Grafana + cloud-native tools)

---> Performance trends tracked before growth breaks systems

🚨 Smart Alerting (No Noise)

---> Alerts that fire only when action is required

---> Zero false positives. Zero alert fatigue

Right signal → right person → right time


⚙ Automation as a Weapon

---> End-to-end automation of operational tasks

---> Standardized logging, metrics & alerting

---> Systems that scale without human friction


🧯 Incident Command & Reliability

---> First responder for critical incidents (on-call rotation)

---> Root cause analysis across network, app, DB & storage

Fix fast — then harden so it never breaks the same way again

📘 Operational Excellence

---> Battle-tested runbooks

---> Documentation that actually works under pressure

Every incident → a stronger platform


🛠️ TECHNOLOGIES YOU’LL MASTER

☁ Cloud: AWS | Azure | Google Cloud

📊 Monitoring: Grafana | Metrics | Traces | Logs

📡 Alerting: Production-grade alerting systems

🌐 Networking: DNS | Routing | Load Balancers | Security

🗄 Databases: Production systems under real pressure

⚙ DevOps: Automation | Reliability Engineering


🎯 WHO WE’RE LOOKING FOR

Engineers who take uptime personally.


You bring:

---> 3+ years in Cloud Ops / DevOps / SRE

---> Live production SaaS experience

---> Deep AWS / Azure / GCP expertise

---> Strong monitoring & alerting experience

---> Solid networking fundamentals

---> Calm, methodical incident response

---> Bonus (Highly Preferred):

---> B2B SaaS + IoT / hybrid platforms

---> Strong automation mindset

---> Engineers who think in systems, not tickets


💼 JOB DETAILS

📍 Bengaluru

🏢 Hybrid (WFH)

💰 (Final CTC depends on experience & interviews)


🌟 WHY THIS ROLE?

Most cloud teams manage uptime. We weaponize it.

Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.


📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?

AI Powered - Software Development (Product Company)

Agency job
via Recruiting Bond by Pavan Kumar
India, Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Mumbai, Pune
3 - 7 yrs
₹15L - ₹30L / yr
DevOps
Reliability engineering
CloudOps
Cloud Operations
Monitoring
+25 more

🚀 RECRUITING BOND HIRING


Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)


⚡ THIS IS NOT A MONITORING ROLE


THIS IS A COMMAND ROLE

You don’t watch dashboards.

You control outcomes.


You don’t react to incidents.

You eliminate them before they escalate.


This role powers an AI-driven SaaS + IoT platform where:

---> Uptime is non-negotiable

---> Latency is hunted

---> Failures are never allowed to repeat


Incidents don’t grow.

Problems don’t hide.

Uptime is enforced.


🧠 WHAT YOU’LL OWN

(Real Work. Real Impact.)


🔍 Total Observability

---> Real-time visibility across cloud, application, database & infrastructure

---> High-signal dashboards (Grafana + cloud-native tools)

---> Performance trends tracked before growth breaks systems

🚨 Smart Alerting (No Noise)

---> Alerts that fire only when action is required

---> Zero false positives. Zero alert fatigue

Right signal → right person → right time


⚙ Automation as a Weapon

---> End-to-end automation of operational tasks

---> Standardized logging, metrics & alerting

---> Systems that scale without human friction


🧯 Incident Command & Reliability

---> First responder for critical incidents (on-call rotation)

---> Root cause analysis across network, app, DB & storage

Fix fast — then harden so it never breaks the same way again

📘 Operational Excellence

---> Battle-tested runbooks

---> Documentation that actually works under pressure

Every incident → a stronger platform


🛠️ TECHNOLOGIES YOU’LL MASTER

☁ Cloud: AWS | Azure | Google Cloud

📊 Monitoring: Grafana | Metrics | Traces | Logs

📡 Alerting: Production-grade alerting systems

🌐 Networking: DNS | Routing | Load Balancers | Security

🗄 Databases: Production systems under real pressure

⚙ DevOps: Automation | Reliability Engineering


🎯 WHO WE’RE LOOKING FOR

Engineers who take uptime personally.


You bring:

---> 3+ years in Cloud Ops / DevOps / SRE

---> Live production SaaS experience

---> Deep AWS / Azure / GCP expertise

---> Strong monitoring & alerting experience

---> Solid networking fundamentals

---> Calm, methodical incident response

---> Bonus (Highly Preferred):

---> B2B SaaS + IoT / hybrid platforms

---> Strong automation mindset

---> Engineers who think in systems, not tickets


💼 JOB DETAILS

📍 Bengaluru

🏢 Hybrid (WFH)

💰 (Final CTC depends on experience & interviews)


🌟 WHY THIS ROLE?

Most cloud teams manage uptime. We weaponize it.

Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.


📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?

NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Gurugram
5 - 10 yrs
₹15L - ₹20L / yr
DevOps
Bare metal
Physical server
Onpremises
AWS
+2 more

Job Title: Senior DevOps Engineer

Location: Gurgaon – Sector 39

Work Mode: 5 Days Onsite

Experience: 5+ Years

About the Role

We are looking for an experienced Senior DevOps Engineer to build, manage, and maintain highly reliable, scalable, and secure infrastructure. The role involves deploying product updates, handling production issues, implementing customer integrations, and leading DevOps best practices across teams.

Key Responsibilities

  • Manage and maintain production-grade infrastructure ensuring high availability and performance.
  • Deploy application updates, patches, and bug fixes across environments.
  • Handle Level-2 support and resolve escalated production issues.
  • Perform root cause analysis and implement preventive solutions.
  • Build automation tools and scripts to improve system reliability and efficiency.
  • Develop monitoring, logging, alerting, and reporting systems.
  • Ensure secure deployments following data encryption and cybersecurity best practices.
  • Collaborate with development, product, and QA teams for smooth releases.
  • Lead and mentor a small DevOps team (3–4 engineers).

Core Focus Areas

Server Setup & Management (60%)

  • Hands-on management of bare-metal servers.
  • Server provisioning, configuration, and lifecycle management.
  • Network configuration including redundancy, bonding, and performance tuning.

Queue Systems – Kafka / RabbitMQ (15%)

  • Implementation and management of message queues for distributed systems (see the producer sketch below).
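
As a hedged sketch of the producer side of such a queue, the snippet below uses the kafka-python client. The broker addresses, topic name, and telecom-style payload are placeholders and purely illustrative:

    # pip install kafka-python
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=["broker-1:9092", "broker-2:9092"],     # placeholder brokers
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",                                               # wait for the full ISR to acknowledge
        retries=3,
    )

    producer.send("sms.outbound", {"msisdn": "+91XXXXXXXXXX", "text": "OTP 123456"})  # placeholder topic and message
    producer.flush()                                              # block until buffered records are delivered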

Storage Systems – SAN / NAS (15%)

  • Setup and management of enterprise storage systems.
  • Ensure backup, recovery, and data availability.

Database Knowledge (5%)

  • Working experience with Redis, MySQL/PostgreSQL, MongoDB, Elasticsearch.
  • Basic database administration and performance tuning.

Telecom Exposure (Good to Have – 5%)

  • Experience with SMS, voice systems, or real-time data processing environments.

Technical Skills Required

  • Linux administration & Shell scripting
  • CI/CD tools – Jenkins
  • Git (GitHub / SVN) and branching strategies
  • Docker & Kubernetes
  • AWS cloud services
  • Ansible for configuration management
  • Databases: MySQL, MariaDB, MongoDB
  • Web servers: Apache, Tomcat
  • Load balancing & HA: HAProxy, Keepalived
  • Monitoring tools: Nagios and related observability stacks


Ekloud INC
ashwini rathod
Posted by ashwini rathod
India
6 - 15 yrs
₹15L - ₹25L / yr
DevOps
API
Meta-data management
CI/CD
CI/CD Version
+12 more

Salesforce DevOps Engineer


Responsibilities

  • Support the design and implementation of the DevOps strategy. This includes, but is not limited to, the CI/CD workflow (version control and automated deployments), sandbox management, documenting DevOps releases, overseeing the developer workflow, and ensuring code reviews take place.
  • Work closely with QA, Tech Leads, Senior Devs and Architects to ensure the smooth delivery of build artefacts into Salesforce environments.
  • Implement scripts utilising the Salesforce Metadata API and SFDX (see the sketch after this list)
  • Refine technical user stories as required, articulate clearly the technical solution required to meet a specific DevOps requirement.
  • Support the Tech Lead with ensuring best practices are adhered to, providing feedback as required.
  • Maintain the development workflow, guide and effectively communicate the workflow to Development teams
  • Design, implement, and maintain CI/CD pipelines.
  • Automate infrastructure provisioning and configuration management.
  • Monitor system performance and troubleshoot issues.
  • Ensure security and compliance across all environments.
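
As a hedged sketch of scripting around SFDX from CI, the snippet below shells out to the CLI from Python. The flags assume the legacy sfdx force:source:deploy command (newer CLIs use sf project deploy start), and the org alias and manifest path are placeholders:

    """Drive a Salesforce deployment from CI by shelling out to the sfdx CLI."""
    import subprocess

    def deploy(org_alias: str, manifest: str = "manifest/package.xml", check_only: bool = True) -> None:
        cmd = [
            "sfdx", "force:source:deploy",
            "--manifest", manifest,
            "--targetusername", org_alias,
            "--testlevel", "RunLocalTests",
            "--wait", "30",
        ]
        if check_only:
            cmd.append("--checkonly")       # validation-only deploy; nothing is committed to the org
        subprocess.run(cmd, check=True)     # raises CalledProcessError if the deploy fails

    if __name__ == "__main__":
        deploy("uat-sandbox")               # placeholder org alias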


Required Skills & Experience

  • Proficiency in CI/CD tools such as GitHub Actions.
  • 5+ years in Salesforce Development
  • Strong experience with CI/CD technologies, Git (version control), the Salesforce Metadata API, and SFDX
  • Expertise in large-scale integration using SOAP, REST, Streaming (including Lightning Events), and Metadata APIs, facilitating the seamless connection of Salesforce with other systems.
  • Excellent technical documentation skills
  • Excellent communication skills


Desired Skills

  • Comfortable and effective in leading developers, ensuring project success and team cohesion
  • Financial Services industry experience

  • Experience working in both agile and waterfall methodologies.



AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
Apache Spark
Apache Airflow
skill iconMachine Learning (ML)
Pipeline management
+13 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including in recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled out the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); see the DAG sketch after this list.
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
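
As a minimal sketch of such a pipeline, the DAG below wires extract, retrain, and drift-check steps together in Airflow 2.x. The dag_id, schedule, and task bodies are placeholders; on MWAA the file would simply be placed under the configured DAGs prefix in S3:

    # Minimal Airflow 2.x sketch; task bodies are placeholders.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_features(**_):
        print("pull training data from S3 / Athena")             # placeholder

    def retrain_model(**_):
        print("launch the Spark / training job on EMR or Glue")  # placeholder

    def check_drift(**_):
        print("compare live vs. training distributions and emit a drift metric")  # placeholder

    with DAG(
        dag_id="model_retrain_pipeline",          # placeholder name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                        # Airflow 2.4+ style; older versions use schedule_interval
        catchup=False,
        tags=["mlops"],
    ) as dag:
        extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
        retrain = PythonOperator(task_id="retrain_model", python_callable=retrain_model)
        drift = PythonOperator(task_id="check_drift", python_callable=check_drift)

        extract >> retrain >> drift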

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.