50+ DevOps Jobs in Bangalore (Bengaluru) | DevOps Job openings in Bangalore (Bengaluru)
Albert's mission is to Digitalize the World of Chemistry, using data and machine learning to drastically accelerate the invention of new materials. We are building a world-class engineering organization to power this vision. If you are passionate about building platforms from the ground up, growing high-impact engineering teams, and shaping the technical foundation of a rapidly maturing SaaS product, we want you on our team.
About the Role
We are seeking a Head of Backend Platform Engineering to consolidate and grow a newly unified platform organization — bringing together our Backend Platform Services, Data, and Search teams into a cohesive, high-functioning engineering group. This is a formative leadership role: the org exists, the talent is here, but the shared identity, operating model, and strategic direction need to be built.
You will define what this platform org becomes — setting strategy, establishing engineering standards, creating the conditions for strong technical execution, and growing the people and teams in your charge. You will be a credible technical partner to your engineers, capable of shaping architecture decisions and rolling up your sleeves when it matters, while primarily leading through others.
A key dimension of this role is your partnership with Enterprise Engineering — the team responsible for customer integrations, migrations, and connections. As Albert matures, the boundary between bespoke customer solutions and core product capabilities is an active frontier. You will work closely with Enterprise Engineering to pull proven patterns and emerging integration needs into the platform, ensuring that what gets built for one customer becomes repeatable infrastructure for all.
This is not a steady-state management role. It requires someone who can build organizational clarity out of ambiguity, earn technical credibility with senior engineers, and execute at the intersection of platform strategy and day-to-day delivery.
Responsibilities:
Platform Organization Building
- Unite the Backend Platform Services, Data, and Search teams under a shared operating model with clear ownership boundaries, team identities, and collaboration norms.
- Establish the platform org's mission, roadmap, and success metrics — defining what “platform health” means at Albert and making it visible across engineering.
- Build a culture of technical rigor, psychological safety, and continuous improvement across your teams.
- Grow your teams: hire strong engineers and managers, develop existing talent, and build a leadership pipeline below you.
- Own resourcing, capacity planning, and prioritization across the platform organization.
Technical Leadership & Architectural Direction
- Serve as a credible technical voice for the platform — capable of engaging deeply on architecture trade-offs, reviewing designs, and guiding engineers toward sound decisions.
- Set architectural guardrails and standards that enable secure, scalable, resilient, and event-driven systems across product and platform teams.
- Guide the platform’s evolution across cloud infrastructure (AWS), shared backend services, data pipelines, search capabilities, and asynchronous processing patterns.
- Identify and eliminate architectural inconsistencies, duplication, and toil across the platform surface area.
- Get hands-on when needed — in design reviews, incident response, or critical technical decisions — without becoming a bottleneck.
Enterprise Engineering Partnership
- Build a close working relationship with the Head of Enterprise Engineering to ensure that customer integration patterns, migration workflows, and connection frameworks feed back into the platform as first-class capabilities.
- Define the process by which bespoke customer solutions get evaluated and productized — distinguishing one-off implementations from patterns worth standardizing.
- Align platform roadmap investments with the integration and deployment needs surfaced by Enterprise Engineering and Solutions teams.
- Ensure the platform provides the primitives — APIs, connectors, data ingress/egress, event systems — that Enterprise Engineering needs to move fast with enterprise customers.
Platform Strategy & Cloud Architecture
- Own and evolve the long-term platform strategy, with AWS as the primary environment.
- Drive the transition from bespoke implementations to repeatable, productized platform capabilities that scale across Albert’s enterprise customer base.
- Ensure platform investments directly support business priorities and customer outcomes, in close partnership with Product and Engineering leadership.
- Guide platform decisions with a pragmatic view of cost efficiency, build vs. buy trade-offs, and long-term maintainability.
Engineering Management & Delivery Excellence
- Lead multiple platform teams through senior managers and technical leads, establishing clear delivery plans, milestones, and accountability.
- Ensure large, cross-team initiatives are delivered on time, with high quality, and with transparent communication of progress, risks, and trade-offs.
- Balance short-term delivery needs with long-term platform health and technical sustainability.
Developer Experience & Operational Enablement
- Set direction for developer experience, DevOps, and internal platform tooling that accelerates product teams while reducing operational friction.
- Ensure best practices for infrastructure as code, CI/CD, observability, and cloud cost management are consistently applied.
- Embed operational excellence into platform design, including availability targets, incident management, and performance monitoring.
Security & Compliance
- Partner with Security and Compliance leadership to ensure security and privacy requirements are designed into the platform by default.
- Oversee platform alignment with SOC 2 requirements as a live obligation, and GDPR as a data handling standard.
- Drive consistent implementation of security architectures, policies, and controls across teams.
Requirements:
- Demonstrated track record of building and scaling platform or infrastructure engineering organizations — ideally from an early or fragmented state — including hiring, developing managers, and establishing operating models.
- Proven ability to lead multi-team engineering orgs through senior managers or technical leads, with a bias toward distributed ownership over centralized control.
- Strong architectural command of cloud-native, distributed, and event-driven systems, with hands-on AWS depth. You can review a design, identify the flaw, and articulate a better path — not just approve what’s put in front of you.
- Experience operating in SaaS environments with enterprise customers, including exposure to the integration, deployment, and extensibility challenges they introduce.
- Track record of translating ambiguous technical or business challenges into clear platform strategy and executable plans.
- Demonstrated ability to build credibility with senior technical individual contributors — engineers respect your judgment, not just your title.
Good to Have:
- Experience working closely with a customer-facing or enterprise integration engineering team, and systematically converting customer-specific solutions into platform capabilities.
- Background in chemistry, materials science, life sciences, or scientific computing — or demonstrated curiosity about domain-specific platform requirements in technical industries.
- Exposure to multi-cloud environments (Azure, GCP) in contexts where it was a real architectural requirement, not incidental.
- Experience with GxP, ISO 27001, or other regulated-industry compliance frameworks beyond SOC 2.
- Familiarity with audit-ready system design and working with compliance or security teams to build scalable controls.
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
As Head of Backend Platform Engineering, you will shape the foundation that powers Albert's future — not inherit a finished org, but build one. You will have the scope to define how platform engineering works at Albert: how teams collaborate, how architecture decisions get made, how the platform grows from a collection of services into a coherent, productized capability. This is a rare opportunity to have genuine organizational and technical impact at a company where the platform is still being invented.
Role & Responsibilities
- Develop and deliver automation software to build and improve platform functionality
- Ensure reliability, availability, and manageability of applications and cloud platforms
- Champion adoption of Infrastructure as Code (IaC) practices
- Design and build self-service, self-healing, monitoring, and alerting platforms
- Automate development and testing workflows through CI/CD pipelines (Git, Jenkins, SonarQube, Artifactory, Docker containers)
- Build and manage container hosting platforms using Kubernetes
Requirements
- Strong experience deploying and maintaining GCP cloud infrastructure
- Well-versed in service-oriented and cloud-based architecture design patterns
- Knowledge of cloud services including compute, storage, networking, messaging, and automation tools (e.g., CloudFormation/Terraform equivalents)
- Experience with relational and NoSQL databases (Postgres, Cassandra)
- Hands-on experience with automation/configuration tools (Puppet, Chef, Ansible, Terraform)
Additional Skills
- Strong Linux system administration and troubleshooting skills
- Programming/scripting exposure (Bash, Python, Core Java, or Scala)
- CI/CD pipeline experience (Jenkins, Git, Maven, etc.)
- Experience integrating solutions in multi-region environments
- Familiarity with Agile/Scrum/DevOps methodologies
Criteria
Mandatory
Strong Senior / Staff DevOps Engineer Profile
Mandatory (Experience 1): Must have 9+ years in DevOps / SRE / Infrastructure roles with hands-on experience (clear scale signals like traffic, uptime, latency, infra size should be mentioned)
Mandatory (Experience 2): Must have worked in Staff / Lead DevOps / SRE / Platform Engineer role OR demonstrated ownership of infra/platform across teams (not just execution role)
Mandatory (Experience 3): Must have B2B SaaS company experience with multi-tenant architecture OR multiple production stacks (multi-env / multi-client systems)
Mandatory (Tech Skills 1 - Cloud & Infra): AWS (VPC, EKS, EC2, RDS, networking), Kubernetes (EKS) at scale, Designing high availability, multi-region systems
Mandatory (Tech Skills 2 - Automation & IaC): Terraform (must-have), Helm / GitOps, Strong scripting (Python / Go / Bash)
Mandatory (Tech Skills 3 - CI/CD & Release): Scalable CI/CD pipelines (GitHub Actions / Jenkins), Zero/low downtime deployments
Mandatory (Tech Skills 4 - Reliability & Observability): SRE principles (SLOs, SLIs, error budgets), Monitoring tools (Prometheus, Grafana, Datadog), Alerting, on-call, incident management
Mandatory (Education): BTech in Computer Science or related fields
Mandatory (Company): Strong B2B SaaS product companies only
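The SRE criteria above call for SLOs, SLIs, and error budgets. As a minimal sketch of the underlying arithmetic (the SLO figure and window are illustrative, not tied to any employer), a 99.9% monthly availability target translates into an error budget like this:

```python
# Error-budget arithmetic for a monthly availability SLO.
# Numbers are illustrative.

def allowed_downtime_minutes(slo: float, days: int = 30) -> float:
    """Minutes of downtime a given SLO permits over the window."""
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_consumed(slo: float, downtime_minutes: float, days: int = 30) -> float:
    """Fraction of the error budget already spent (may exceed 1.0)."""
    return downtime_minutes / allowed_downtime_minutes(slo, days)

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
print(round(allowed_downtime_minutes(0.999), 1))   # 43.2
# 20 minutes of outages so far consume ~46% of the budget.
print(round(budget_consumed(0.999, 20.0), 2))      # 0.46
```

In interviews for roles like this, being able to walk through this calculation from SLA to budget is a common screen for genuine SRE exposure.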
Job description:
Job Description – Full Stack Developer (.NET + React + AI)
Job Title: Full Stack Developer
Experience: 8–12 Years
Location: Chennai / Bangalore / Hyderabad / Pune / Gurgaon (Hybrid)
Employment Type: Contract – 1 Year
Job Summary
We are hiring a Full Stack Developer with strong expertise in Microsoft technologies and modern frontend frameworks. The role involves end-to-end ownership of application development, leading teams, and delivering scalable enterprise solutions integrating Microservices, MFE, and Agentic AI ecosystems.
Key Responsibilities
- Lead end-to-end full stack development using .NET Core/.NET 8, React, Azure
- Drive technical architecture, design reviews, and implementation
- Build scalable Microservices & Micro-Frontend (MFE) architectures
- Develop AI-powered solutions using Agentic AI, Copilot, AutoGen, CrewAI, LangGraph
- Manage Scrum ceremonies and Agile delivery processes
- Own project tracking: dashboards, velocity, burn-down, reporting
- Ensure adherence to SDLC, SOLID principles, and best practices
- Collaborate with cross-functional teams (Architects, BAs, Product Owners, UX)
- Lead and mentor engineering teams; resolve technical/non-technical issues
- Implement CI/CD pipelines and DevOps practices
- Ensure system performance, observability, and reliability
Key Skills Required
Core Stack
- .NET Core / .NET 8, C#
- React, Redux, TypeScript
- REST APIs, Microservices
Cloud & DevOps
- Azure (App Services, AKS, Functions, Service Bus)
- Docker, Kubernetes
- CI/CD (GitHub, Jenkins, TFS)
AI & Emerging Tech
- Agentic AI
- Microsoft Copilot / Copilot Studio
- OpenAI, AutoGen, CrewAI, LangGraph
Database
- SQL Server, PostgreSQL
- Database design & optimization
Other Skills
- Micro Frontends (MFE)
- Agile (Scrum/Kanban)
- Supply Chain domain knowledge (preferred)
Qualifications
- 8–12 years of full stack development experience
- Strong experience in Microsoft technology ecosystem
- Proven experience in product/application development
Here is what you will do:
- Develop and deploy scalable serverless applications using Azure Functions
- Work with Azure Queue Storage for asynchronous processing
- Integrate solutions with SharePoint / SPFx
- Design and build robust APIs using .NET (preferred)
- Manage Azure services such as Storage, Application Insights, Key Vault, and Azure AD (App Registration)
- Implement CI/CD pipelines for deploying Function Apps
- Contribute to end-to-end solution design and architecture decisions
Key Skills to Succeed in This Role:
- 3–8 years of relevant experience
- Strong proficiency in .NET (preferred)
- Hands-on experience with Azure services (Functions, Storage, Key Vault, Application Insights)
- Experience with Azure AD (App Registration)
- Familiarity with Git and CI/CD pipelines
- Ability to design and implement scalable, end-to-end solutions
Job Title: Azure .NET Developer
Job Overview
We are looking for a DevOps Engineer with hands-on experience in GCP (Google Cloud Platform) to join our growing team. The ideal candidate should have experience in cloud infrastructure, CI/CD pipelines, and containerization, along with a strong understanding of DevOps best practices.
🚀 Key Responsibilities
- Design, implement, and manage cloud infrastructure on Google Cloud Platform (GCP)
- Build and maintain CI/CD pipelines for fast, reliable deployments
- Work with containerization and orchestration tools like Docker & Kubernetes
- Automate infrastructure provisioning using Terraform / Infrastructure as Code (IaC)
- Monitor system performance and ensure high availability and scalability
- Collaborate with development and QA teams for smooth release cycles
- Troubleshoot production issues and optimize system reliability
🛠️ Required Skills
- Hands-on experience with GCP services (Compute Engine, GKE, Cloud Storage, IAM)
- Strong knowledge of:
- Docker & Kubernetes
- CI/CD tools (Jenkins / GitHub Actions / GitLab CI)
- Terraform or similar IaC tools
- Scripting knowledge (Python / Bash)
- Familiarity with monitoring tools (Prometheus, Grafana, etc.)
- Good understanding of Linux systems
🎯 Preferred Candidate Profile
- 2+ years of experience in DevOps / Cloud Engineering
- Immediate joiners or candidates serving notice period preferred
- Strong problem-solving and communication skills
Location: Bangalore
Experience: 2–5 years
Type: Full-time | On-site
Open Roles: 1
Start: Immediate
Why this role exists
Most engineering teams choose between speed and stability.
We need both.
Today:
- Deployments carry risk
- Cloud costs are higher than they should be
- Compliance is reactive, not built-in
This role exists to build a platform where:
- We can deploy fast without breaking production
- We can scale without runaway cost
- We can pass enterprise InfoSec reviews without firefighting
What you’ll do
You will not just manage infrastructure.
You will build the platform that engineering runs on.
1. Drive cloud cost efficiency
- Reduce Azure compute spend by 40%
- Implement:
- Reserved Instances / savings plans
- Right-sizing of workloads
- Scheduling for non-critical workloads
- Continuously monitor and optimize cost vs performance
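Right-sizing decisions like those above usually start from utilization data. A toy sketch of the heuristic, with an invented SKU table and headroom factor rather than real Azure sizing or pricing data:

```python
# Toy right-sizing heuristic: recommend a smaller SKU when peak CPU
# utilization stays well below capacity. SKU lineup and the 30% headroom
# factor are illustrative assumptions, not Azure data.

SKUS = [  # (name, vCPUs) ordered small -> large; hypothetical lineup
    ("D2s_v5", 2), ("D4s_v5", 4), ("D8s_v5", 8), ("D16s_v5", 16),
]

def recommend_sku(current_vcpus: int, peak_cpu_pct: float,
                  headroom: float = 1.3) -> str:
    """Pick the smallest SKU whose capacity covers observed peak plus headroom."""
    needed_vcpus = current_vcpus * (peak_cpu_pct / 100.0) * headroom
    for name, vcpus in SKUS:
        if vcpus >= needed_vcpus:
            return name
    return SKUS[-1][0]  # already at the largest size

# An 8-vCPU VM peaking at 30% CPU fits comfortably on 4 vCPUs.
print(recommend_sku(8, 30.0))  # D4s_v5
```

In practice the same logic would be fed from monitoring exports and combined with reserved-instance commitments before any resize is actioned.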
2. Build zero-downtime deployment systems
- Ship a deployment pipeline that supports:
- 5+ production deployments per week
- Zero customer-visible downtime
- Implement:
- Blue-green / canary deployments
- Automated health checks
- Safe rollout strategies
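A zero-downtime canary rollout ultimately reduces to a promote-or-rollback decision on observed health. A minimal sketch of such a gate, with illustrative thresholds; a real pipeline would pull these numbers from its monitoring stack:

```python
# Minimal canary-promotion gate: compare error rates between the canary
# and the stable fleet and decide promote vs. roll back. Threshold and
# minimum-traffic values are illustrative.

def canary_decision(canary_errors: int, canary_requests: int,
                    stable_errors: int, stable_requests: int,
                    max_ratio: float = 2.0,
                    min_requests: int = 500) -> str:
    """Return 'wait', 'promote', or 'rollback'."""
    if canary_requests < min_requests:
        return "wait"  # not enough traffic to judge yet
    canary_rate = canary_errors / canary_requests
    stable_rate = max(stable_errors / stable_requests, 1e-6)
    if canary_rate > max_ratio * stable_rate:
        return "rollback"  # canary clearly worse than stable
    return "promote"

print(canary_decision(3, 1000, 40, 20000))   # promote (0.3% vs 0.2%)
print(canary_decision(30, 1000, 40, 20000))  # rollback (3% vs 0.2%)
```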
3. Enable fast and safe releases
- Reduce time-to-launch significantly
- Ensure:
- High reliability in every release
- Ability to rollback instantly if something breaks
- Create systems where:
- Scaling up is seamless when things go right
- Failures are contained when they don’t
4. Build disaster recovery and compliance readiness
- Create DR/BCP systems that pass enterprise audits from:
- HDFC Life, SBI Life
- Ensure:
- Backup and recovery processes are defined and tested
- Failover strategies are documented and executable
- Build compliance as part of the system, not an afterthought
5. Embed security into the pipeline
- Integrate:
- SAST (Static Application Security Testing)
- DAST (Dynamic Application Security Testing)
- SCA (Software Composition Analysis)
- Secret scanning
- Container scanning
- IaC scanning
- Ensure vulnerabilities are caught before deployment
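A scanning stage like the one above typically ends with a gate that blocks the build on serious findings. A sketch assuming a made-up JSON report shape; real SAST/SCA tools each emit their own format, so the parsing would differ per tool:

```python
# Pipeline gate that fails the build when scanner output contains
# findings at or above a severity cut-off. The JSON shape is a
# hypothetical stand-in for a real scanner's report.

import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json: str, fail_at: str = "high") -> bool:
    """Return True if the build may proceed."""
    findings = json.loads(report_json)["findings"]
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    for f in blocking:
        print(f"BLOCKED: {f['id']} ({f['severity']})")
    return not blocking

report = json.dumps({"findings": [
    {"id": "CVE-2024-0001", "severity": "medium"},
    {"id": "CVE-2024-0002", "severity": "critical"},
]})
print(gate(report))  # False -> fail the pipeline stage
```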
6. Enforce policy-as-code
- Implement:
- OPA / Gatekeeper
- Azure Policy
- Prevent non-compliant infrastructure from being deployed
- Ensure consistency across environments
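Gatekeeper policies are actually written in Rego; the Python analogue below only illustrates the policy-as-code idea, that declarative rules reject non-compliant manifests before they deploy. The rules and manifest shape are invented for illustration:

```python
# Illustrative policy-as-code check (the real thing would be an OPA/
# Gatekeeper Rego policy or an Azure Policy definition): evaluate a
# deployment manifest against rules and refuse non-compliant infra.

def check_deployment(manifest: dict) -> list:
    """Return a list of policy violations (empty means compliant)."""
    violations = []
    # Rule 1: container images must be pinned to an explicit tag.
    for c in manifest.get("containers", []):
        image = c.get("image", "")
        if image.endswith(":latest") or ":" not in image:
            violations.append(f"{c['name']}: image tag must be pinned")
    # Rule 2: required cost-attribution labels must be present.
    required = {"team", "cost-center"}
    missing = required - manifest.get("labels", {}).keys()
    for label in sorted(missing):
        violations.append(f"missing required label: {label}")
    return violations

bad = {"containers": [{"name": "api", "image": "api:latest"}],
       "labels": {"team": "platform"}}
print(check_deployment(bad))
# ['api: image tag must be pinned', 'missing required label: cost-center']
```

Wiring the same checks into both CI and the cluster admission path is what keeps environments consistent.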
7. Build a scalable platform layer
- Create systems that:
- Support increasing deployment frequency
- Maintain reliability under scale
- Work closely with backend and SRE teams to:
- Improve system stability
- Reduce operational overhead
What success looks like
- Cloud costs reduce by ≥ 40%
- Deployments are:
- Frequent
- Safe
- Invisible to customers
- Rollbacks are instant and reliable
- DR/BCP passes enterprise audits in the first attempt
- Security is embedded in the pipeline, not patched later
- Engineering teams ship faster with confidence
Who you are
- You have 2-5 years of experience in DevOps / Platform Engineering
- You have worked with:
- Cloud platforms (Azure preferred)
- CI/CD systems
- Infrastructure as Code
- You think in:
- Systems
- Trade-offs (speed vs reliability vs cost)
- You are comfortable owning:
- Production infrastructure
- Deployment systems
What will make you stand out
- Experience with:
- High-frequency deployment systems
- Cost optimization at scale
- Security-first pipelines
- Strong understanding of:
- Kubernetes / container orchestration
- Monitoring and observability
- Distributed system reliability
- Experience passing enterprise security/compliance audits
Why join
- You will define how engineering ships and scales
- Your work directly impacts:
- Reliability
- Cost
- Deployment velocity
- You will build a platform that moves from:
- Fragile → predictable and scalable
What this role is not
- Not manual infra management
- Not reactive firefighting
- Not limited to CI/CD maintenance
What this role is
- A builder of deployment systems
- A driver of cost efficiency
- A guardian of reliability and compliance
One question to self-evaluate
Can you build a platform where we deploy faster, spend less, and never break production?

Location: Bangalore
Experience: 2–5 years
Type: Full-time | On-site
Start: Immediate
Why this role exists
Most systems don’t fail because of one big outage.
They fail because reliability is treated as an afterthought.
Right now, uptime depends too much on individual heroics.
That doesn’t scale.
This role exists to build a reliability system where:
- Uptime is predictable
- Failures are contained
- Escalations don’t depend on leadership
What you’ll do
You will not just monitor systems.
You will own reliability as a product.
1. Drive uptime to production-grade reliability
- Improve system uptime to 99.9% customer-facing SLA within 4 months
- Define and track:
- SLAs / SLOs / error budgets
- Ensure reliability is measured from the customer’s perspective, not internal metrics
2. Build incident response as a system
- Set up a 24/7 incident response rotation across 3 engineers
- Eliminate dependency on leadership (no single escalation point)
- Define:
- Incident severity levels
- Response playbooks
- Escalation protocols
- Ensure fast detection → containment → resolution
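Severity levels and escalation protocols can be encoded so that paging never defaults to leadership. A sketch with invented thresholds and rotation names:

```python
# Sketch of severity classification and escalation routing, so incidents
# page the on-call rotation rather than a single leader. Thresholds and
# rotation names are hypothetical.

def classify(customer_facing: bool, pct_users_affected: float) -> str:
    """Map incident impact to a severity level."""
    if customer_facing and pct_users_affected >= 10:
        return "SEV1"
    if customer_facing:
        return "SEV2"
    return "SEV3"

ESCALATION = {  # severity -> ordered paging targets (hypothetical rota)
    "SEV1": ["primary-oncall", "secondary-oncall", "incident-commander"],
    "SEV2": ["primary-oncall", "secondary-oncall"],
    "SEV3": ["primary-oncall"],
}

sev = classify(customer_facing=True, pct_users_affected=25.0)
print(sev, "->", ESCALATION[sev])
# SEV1 -> ['primary-oncall', 'secondary-oncall', 'incident-commander']
```

The point of encoding this as data is that the 24/7 rotation, not an org chart, decides who gets paged next.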
3. Contain and fix erratic system behavior
- Identify and resolve:
- Latency spikes
- Downtime incidents
- Integration failures
- Build guardrails to prevent recurrence
- Focus on root cause elimination, not temporary fixes
4. Create continuous reliability feedback loops
- Work closely with engineering teams to:
- Surface recurring failure patterns
- Improve build quality
- Reduce production bugs
- Ensure learnings from incidents directly improve future releases
5. Improve observability and monitoring
- Build dashboards and alerts for:
- System health
- Performance metrics
- Failure signals
- Ensure issues are detected before customers report them
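Detecting issues before customers report them is commonly done by alerting on error-budget burn rate. A sketch of the familiar fast-burn/slow-burn pattern; the SLO and threshold numbers are illustrative:

```python
# Burn-rate alerting sketch: a burn rate of 1.0 spends the error budget
# exactly over the SLO window; sustained rates far above 1.0 should page.
# Thresholds follow the common fast-burn/slow-burn pattern, with
# illustrative numbers.

def burn_rate(error_ratio: float, slo: float) -> float:
    """How fast the current error ratio consumes a (1 - slo) budget."""
    return error_ratio / (1.0 - slo)

def alert_level(short_window_rate: float, long_window_rate: float) -> str:
    """Require both windows to fire, which filters out brief blips."""
    if short_window_rate > 14 and long_window_rate > 14:
        return "page"    # fast burn: budget gone within hours
    if short_window_rate > 3 and long_window_rate > 3:
        return "ticket"  # slow burn: investigate during work hours
    return "ok"

# 2% errors against a 99.9% SLO is a 20x burn rate -> page.
r = burn_rate(0.02, 0.999)
print(round(r, 1), alert_level(r, r))  # 20.0 page
```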
6. Reduce operational fragility
- Remove single points of failure (people, systems, workflows)
- Improve system resilience across:
- Deployments
- Integrations
- Runtime environments
What success looks like
- Uptime reaches 99.9%+ reliably
- Incidents are:
- Detected early
- Contained quickly
- Resolved permanently
- No dependency on a single individual for escalation
- System behavior becomes predictable and stable
- Engineering teams ship with higher reliability confidence
Who you are
- You have 2-5 years of experience in SRE / DevOps / backend systems
- You have worked on production systems with real uptime expectations
- You think in:
- Systems
- Failure modes
- Trade-offs
- You are comfortable debugging live, high-pressure environments
What will make you stand out
- Experience with:
- Distributed systems
- Cloud infrastructure (AWS / Azure / GCP)
- Monitoring & alerting tools
- Have built or improved:
- Incident response systems
- Reliability frameworks
- Strong debugging skills across:
- Infra
- Application
- Integrations
Compensation
₹60,000/month (fixed)
(Aligned with role scope and impact expectations)
Why join
- You will define reliability standards for a production AI platform
- Your work directly impacts:
- Customer trust
- Product performance
- Enterprise readiness
- You will move the system from reactive → predictable
What this role is not
- Not just monitoring dashboards
- Not limited to handling tickets
- Not dependent on escalation to leadership
What this role is
- A builder of reliability systems
- A guardian of uptime and performance
- A multiplier of engineering quality
One question to self-evaluate
Can you build a system where downtime is rare, predictable, and never dependent on a single person?
Job Title : DevOps Engineer
Experience : 3+ Years
Location : Indiranagar, Bengaluru (Work From Office – 5 Days)
Employment Type : Full-Time
Work Timings : 11:00 AM to 7:00 PM IST
Notice Period : Immediate Joiners Preferred
Role Overview :
We are seeking a skilled DevOps Engineer with 3+ years of experience in building and managing scalable cloud-native infrastructure.
The ideal candidate will have strong expertise in Kubernetes and Helm, along with hands-on experience in deploying and maintaining production-grade systems on cloud platforms.
This role offers an opportunity to work in a high-growth startup environment, contributing to both existing systems and new infrastructure development.
Key Responsibilities :
- Design, deploy, and manage scalable infrastructure using Kubernetes.
- Build and maintain CI/CD pipelines for efficient and automated deployments.
- Manage and optimize cloud environments (preferably GCP).
- Implement Infrastructure as Code using Helm/Terraform.
- Monitor system performance and ensure high availability and reliability.
- Handle bug fixes, system improvements, and performance optimization.
- Collaborate with engineering teams to design scalable microservices architecture.
- Implement logging, monitoring, and alerting solutions.
- Ensure security best practices including IAM, secrets management, and network policies.
Mandatory Skills :
- Strong hands-on experience with Kubernetes.
- Expertise in Helm Charts.
- Experience with Google Cloud Platform (GCP).
- Hands-on experience with ArgoCD or similar CI/CD tools.
- Knowledge of CI/CD tools like Jenkins, GitHub Actions, GitLab CI.
- Experience in database hosting and scaling.
Nice to Have :
- Exposure to other cloud platforms (AWS/Azure).
- Experience with modern DevOps and automation tools.
- Ability to quickly learn and adapt to new technologies.
Team & Work Scope :
- No dedicated DevOps team currently – high ownership role.
- Work on both existing systems (maintenance & improvements) and new system builds (greenfield projects).
- Opportunity to shape DevOps practices and infrastructure from scratch.
Preferred Candidate Profile :
- 3+ years of relevant DevOps experience.
- Strong problem-solving and debugging skills.
- Experience working in fast-paced startup environments.
- Understanding of scalability, security, and performance optimization.
- Good communication and collaboration skills.
Hiring Process :
- Profile Screening
- GT Assessment
- Technical Interview – Round 1
- Technical Interview – Round 2
- Final Round (if required with US team)
Key Responsibilities:
• Work on highly distributed and scalable system architecture
• Design, develop, test, and maintain high-quality software solutions
• Ensure performance, security, and maintainability of applications
• Collaborate with cross-functional teams and stakeholders
• Perform system testing and resolve technical issues
Required Skills:
• Strong experience in ASP.NET, C#, .NET Core, MVC
• Hands-on experience with SQL Server / PostgreSQL
• Experience in Angular / React (Frontend technologies)
• Knowledge of microservices architecture & RESTful APIs
• Familiarity with CQRS pattern
• Exposure to AWS / Docker / Kubernetes
• Experience with CI/CD pipelines (Azure DevOps, Jenkins)
• Knowledge of Node.js is an added advantage
• Understanding of Agile methodology
• Good exposure to cybersecurity and compliance
Technology Stack:
• Microsoft .NET technologies (primary)
• Cloud platforms: AWS (SaaS/PaaS/IaaS)
• Databases: MSSQL, MongoDB, PostgreSQL
• Caching: Redis, Memcached
• Messaging queues: RabbitMQ, Kafka, SQS
Lead Cloud Reliability Engineer
Job Responsibilities
● Lead and manage the Cloud Reliability teams to provide strong Managed Services support to end-customers.
● Isolate, troubleshoot and resolve issues reported by CMS clients in their cloud environment
● Drive the communication with the customer providing details about the issue, current steps, next plan of action, ETA
● Gather clients' requirements related to use of specific cloud services and provide assistance in setting them up and resolving issues
● Create SOPs and knowledge articles for use by the L1 teams to resolve common issues
● Identify recurring issues, perform root cause analysis and propose/implement preventive actions
● Follow change management procedure to identify, record and implement changes
● Plan and deploy OS, security patches in Windows/Linux environment and upgrade k8s clusters
● Identify the recurring manual activities and contribute to automation
● Provide technical guidance and educate team members on development and operations. Monitor metrics and develop ways to improve.
● System troubleshooting and problem-solving across platform and application domains. Ability to use a wide variety of open-source technologies and cloud services.
● Build, maintain, and monitor configuration standards.
● Ensuring critical system security through using best-in-class cloud security solutions.
Qualifications
● 4–7 years of experience in cloud infrastructure and operations, with IT operational experience preferably in a global enterprise environment.
● Specialize in one or two cloud deployment platforms: AWS, GCP
● Hands on experience with AWS/GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine)
● Understanding of one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
● Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
● Knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef
● Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
● Good analytical, communication, problem solving, and learning skills.
● Knowledge of programming against cloud platforms such as Google Cloud Platform and lean development methodologies.
● Strong service attitude and a commitment to quality.
● Willingness to work in shifts.
What are we looking for?
- You have a good understanding and work experience in AKS, Kubernetes, and EKS.
- You are able to manage multi-region clusters for disaster recovery.
- You have a good understanding of the AWS stack.
- You have production-level experience with Kubernetes.
- You are comfortable coding/programming and can do so whenever required.
- You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
- You love automating things, even things that seem impossible to automate — for example, one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook whenever something new has to be installed.
- You don’t throw around words such as “high availability” or “resilient systems” without understanding at least their basics, because you know such terms are easy to say, while building such systems in practice takes a fair amount of work.
- You love coaching people - about the 12-factor apps or the latest tool that reduced your time of doing a task by X times and so on. You lead by example when it comes to technical work and community.
- You understand the areas you have worked on very well but, you are curious about many systems that you may not have worked on and want to fiddle with them.
- You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.
What will you be learning and doing?
- You will be working with customers transforming their applications and adopting cloud-native technologies. The technologies used will be Kubernetes, Prometheus, service meshes, distributed tracing, and public cloud technologies or on-premise infrastructure.
- The problems and solutions in this space are continuously evolving, but fundamentally you will be solving them with the simplest, most scalable automation.
- You will be building open-source tools for problems that you think are common across customers and the industry. No one ever benefited from re-inventing the wheel, did they?
- You will be hacking on open-source projects, understanding their capabilities and limitations, and applying the right tool for the right job.
- You will be educating customers, from their operations engineers to their developers, on scalable ways to build and operate applications on modern cloud-native infrastructure.

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing DevOps practices by promoting automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have 20+ years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product.
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
Required Skills
- 8+ years of DevOps / Cloud Engineering experience
- Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, etc.)
- Expertise in Kubernetes (deployment, scaling, cluster management)
- Strong experience in PostgreSQL and AWS RDS administration
- Proficiency in Terraform for infrastructure automation
- Experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.)
- Strong knowledge of Java (mandatory) and application deployment lifecycle
- Experience with Docker and containerization
- Solid understanding of networking, security, and system architecture
- Strong troubleshooting and problem-solving skills
📌 Job Title: Portfolio Analyst – ITOF Platform Services
📍 Location: Bangalore
📄 Type: Contract (6 months / extendable)
🕒 Experience: 5+ Years
🔍 Role Summary
We are hiring a Portfolio Analyst to support IT Operations & Foundation portfolio governance, performance tracking, and data-driven decision-making. This role focuses on portfolio analytics, reporting, and strategic alignment across initiatives.
🛠 Key Responsibilities
- Support portfolio governance, planning, and performance tracking
- Prioritize initiatives based on strategy & resource capacity
- Analyze portfolio, delivery & financial data for insights and risks
- Build dashboards and reports using Power BI
- Ensure data quality across tools like ADO & TargetProcess
- Track dependencies, risks, and performance metrics
- Collaborate with stakeholders (Portfolio Managers, Product Leaders, EPMO)
- Translate complex data into business insights
✅ Must-Have Skills
- Strong experience in Portfolio Management / Governance / Analytics
- Hands-on with:
- Azure DevOps (ADO)
- TargetProcess
- Power BI (Dashboards, Data Modeling)
- Strong data analysis & reporting skills
- Stakeholder communication & problem-solving
⭐ Good to Have
- SQL / Advanced Excel
- Financial / capacity planning exposure
- Scrum / Kanban knowledge
- Executive-level reporting experience
Job Title: Cloud Development & Linux Debugging Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Job Summary
We are looking for an experienced Cloud Development & Linux Debugging Engineer with strong expertise in Linux internals, system-level programming, and cloud technologies. The ideal candidate will have hands-on experience in developing, debugging, and optimizing Linux-based systems along with exposure to DevOps tools and containerized environments.
Key Responsibilities
- Develop and debug software at the Linux system level (kernel/user space).
- Work on Linux internals, low-level system components, and performance optimization.
- Design, develop, and maintain applications using Python and C/C++.
- Troubleshoot complex issues in Linux and cloud-based environments.
- Collaborate with cross-functional teams in an Agile/Scrum environment.
- Contribute to automation and infrastructure using DevOps tools.
- Work with containerized and cloud platforms such as Kubernetes and OpenStack.
Required Skills
- Strong experience in Linux software development (Linux internals, system-level programming).
- Proficiency in Python and C/C++.
- Solid debugging and analytical skills.
- Hands-on experience with Ansible, Puppet, and DevOps practices.
- Experience working with OpenStack and Kubernetes.
- Good understanding of Agile/Scrum methodologies.
- Excellent communication and teamwork skills.
Preferred Skills (Good to Have)
- Experience with Go / Golang and Go templating.
- Knowledge of Kubernetes Operators and Helm.
- Exposure to containerization technologies (Docker, Kubernetes).
- Contributions to open-source projects.
- Experience with cloud-native architectures.
Qualifications
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field.
- Self-driven individual with a strong learning mindset.
- Ability to work independently and in collaborative team environments.
Job Details
- Job Title: Senior Backend Engineer
- Industry: SAAS
- Function – Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· Minimum of 5 years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
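Sharding and partitioning strategies such as those listed above often rely on consistent hashing, so that adding or removing a shard only remaps a small fraction of keys. A minimal sketch, with all names (shard labels, virtual-node count) purely illustrative:

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Consistent-hash ring: resizing the shard set remaps only ~1/N of keys."""

    def __init__(self, shards, vnodes=100):
        # Each shard appears at `vnodes` points on the ring for smoother balance.
        self.ring = sorted(
            (self._hash(f"{s}#{v}"), s) for s in shards for v in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key: str) -> str:
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

ring = ConsistentHashRing(["db-0", "db-1", "db-2"])
print(ring.shard_for("user:42"))  # deterministic shard assignment
```

Production systems typically combine this with replication (each key stored on the next R distinct shards clockwise), which the sketch omits.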
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
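Rate limiting and throttling, mentioned in the list above, are commonly built on a token-bucket scheme; below is a minimal sketch (class and parameter names are illustrative, not from any specific product):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: holds up to `capacity` tokens,
    refilled continuously at `refill_rate` tokens per second."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=5, refill_rate=1.0)  # burst of 5, then 1 req/sec
results = [bucket.allow() for _ in range(10)]
print(results.count(True))  # the initial burst passes, the rest are throttled
```

The same shape extends naturally to backpressure: instead of rejecting when the bucket is empty, a caller can compute the wait time until enough tokens accrue.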
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
Job Description:
We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems.
This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production.
You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.
Key Responsibilities
AI & Agentic Infrastructure
- Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows.
- Build scalable runtime environments for LLM orchestration frameworks.
- Enable deployment of AI copilots, assistants, and autonomous decision systems.
Common frameworks may include:
- LangChain
- LlamaIndex
- AutoGPT
LLMOps & AI Model Lifecycle
Design and manage LLMOps pipelines for the full lifecycle of large language models:
- Model deployment
- Prompt management
- Versioning
- Evaluation and testing
- Model monitoring
Integrate with AI platforms such as:
- Azure Machine Learning
- Amazon SageMaker
- Vertex AI
Retrieval-Augmented Generation (RAG) Infrastructure
Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs.
Responsibilities include:
- Document ingestion pipelines
- Embedding generation workflows
- Knowledge indexing
- Query orchestration
- Retrieval optimization
- Support scalable semantic search architectures.
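At its core, the retrieval step of a RAG pipeline ranks documents by embedding similarity to the query. A toy sketch with hand-made vectors (a real pipeline would use an embedding model and one of the vector databases listed below, not an in-memory dict):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy corpus with made-up 3-dimensional "embeddings".
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.3],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the top-k documents ranked by similarity to the query vector."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # -> ['refund policy', 'shipping times']
```

The retrieved documents would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.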
Vector Database & Knowledge Infrastructure
Deploy and manage vector databases used for AI applications and semantic retrieval.
Common technologies include:
- Pinecone
- Weaviate
- Milvus
- FAISS
Responsibilities include:
- Index optimization
- Query latency tuning
- Scalable embedding storage
- Hybrid search architecture
Multi-Cloud AI Infrastructure
Design and maintain AI-ready infrastructure across:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
Key responsibilities include:
- GPU infrastructure management
- Distributed training environments
- Hybrid cloud integrations with on-prem data centers
- Infrastructure scaling for AI workloads
Data Platforms & Integration
- Support deployment and optimization of data lakes, data warehouses, and streaming platforms.
- Work with data engineering teams to ensure secure and scalable data infrastructure.
Cloud Architecture & Infrastructure
- Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud.
- Build hybrid cloud architectures integrating on-premise environments with cloud platforms.
- Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads.
DevOps, Platform Engineering & Automation
Build automated cloud infrastructure using modern DevOps practices.
Tools may include:
- Terraform
- Docker
- Kubernetes
- GitHub Actions
Responsibilities include:
- Infrastructure as Code (IaC)
- Automated deployments
- CI/CD pipelines for AI models and services
- Platform reliability and scalability
AI Observability & Monitoring
Implement observability frameworks to monitor AI systems in production.
This includes:
- Model performance monitoring
- Prompt evaluation
- Hallucination detection
- Latency and throughput analysis
- Cost monitoring for LLM usage
Tools may include:
- Arize AI
- WhyLabs
- Weights & Biases
Security, Governance & Responsible AI
Ensure AI systems follow strong governance and security practices.
Responsibilities include:
- Data privacy and compliance
- Model governance frameworks
- Secure model deployment
- Monitoring model bias and drift
- AI risk management
Support enterprise frameworks for Responsible AI and AI compliance.
Data & Security
- Experience with data lake architectures, distributed storage, and ETL pipelines
- Knowledge of data security, encryption, IAM, and compliance frameworks
- Familiarity with AI governance and responsible AI practices
Required Skills
Cloud & Infrastructure
- Strong experience in Azure (must have), AWS or GCP
- Hybrid and multi-cloud architecture
- GPU infrastructure management
DevOps & Automation
- Kubernetes
- Docker
- Terraform
- CI/CD pipelines
AI / ML Platforms
- MLOps pipelines
- Model deployment
- Model monitoring
AI Application Infrastructure
- Vector databases
- RAG pipelines
- LLM orchestration frameworks
Programming
Experience in one or more languages:
- Python
- Go
- Java
- TypeScript
Preferred Qualifications
- Experience building AI copilots or autonomous agents
- Knowledge of GPU infrastructure and distributed model training
- Familiarity with AI evaluation frameworks
- Familiarity with model monitoring, drift detection, and AI observability
- Experience building enterprise AI platforms
Education & Experience
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 4–8+ years of experience in cloud infrastructure, DevOps, or platform engineering
- Experience working in data-driven or AI-focused environments
What Success Looks Like
- Reliable ML model deployment pipelines and infrastructure for LLMs and AI agents
- Scalable RAG knowledge platforms
- Efficient multi-cloud infrastructure management
- Fast deployment cycles for AI products
- Secure and scalable AI-ready cloud platforms
- Strong automation and governance across cloud and AI systems
Job Description:
We are looking for a skilled Ethical Hacker (Penetration Tester) who will be responsible for identifying vulnerabilities in systems, networks, and applications before malicious hackers can exploit them. The role involves conducting security assessments, penetration testing, and recommending security improvements to strengthen the organization’s cybersecurity posture.
Key Responsibilities
· Conduct penetration testing on web applications, mobile applications, APIs, and networks.
· Identify security vulnerabilities and weaknesses in systems and infrastructure.
· Perform vulnerability assessments using automated tools and manual techniques.
· Simulate cyberattacks to evaluate the effectiveness of existing security measures.
· Prepare detailed security reports highlighting risks, vulnerabilities, and remediation strategies.
· Collaborate with development, DevOps, and IT teams to fix security gaps.
· Ensure compliance with security standards and frameworks such as OWASP, ISO 27001, and NIST.
· Conduct security audits and risk assessments across digital platforms.
· Stay updated on the latest hacking techniques, security vulnerabilities, and cyber threats.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Cybersecurity, Information Technology, or related field.
- 4+ years of experience in ethical hacking, penetration testing, or cybersecurity.
- Strong knowledge of network security, system security, and application security.
- Experience with security tools such as:
- Burp Suite
- Metasploit
- Nmap
- Wireshark
- Kali Linux
- Knowledge of OWASP Top 10 vulnerabilities.
- Understanding of Linux, Windows, and cloud security environments.
- Strong analytical and problem-solving skills.
Preferred Certifications
- CEH (Certified Ethical Hacker)
- OSCP (Offensive Security Certified Professional)
- CompTIA Security+
- CISSP (optional but valuable)
Key Competencies
- Cybersecurity risk assessment
- Vulnerability management
- Penetration testing methodologies
- Incident response awareness
- Strong documentation and reporting skills
Nice to Have
- Experience in cloud security (AWS, Azure, GCP)
JOB DESCRIPTION – FULL STACK DEVELOPER
Location: Bangalore
Key Responsibilities:
Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications
Manage stakeholders with effective communication & collaborate with cross-functional teams to address issues and maintain business continuity.
Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech like Agentic AI & GitHub Copilot.
Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups
Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team
Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows
Assist Software Designer/Implementers with the creation of detailed software design specifications
Participate in the system specification review process to ensure system requirements can be translated into valid software architecture
Integrate internal and external product designs into a cohesive user experience
Identify and keep track of metrics that indicate how software is performing
Handle technical and non-technical queries from the development team and stakeholders
Ensure that all development practices follow best practices and any relevant policies / procedures
Other Duties
Maintain project reporting, including dashboards, status reports, roadmaps, burn-down, velocity, and resource utilization.
Own the technical solution and ensure all technical aspects are implemented as designed.
Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability
Flexibility to work in rotational shifts
Required Qualification
Previous experience leading full stack technology projects with scrum teams and stakeholder management
BTech or MTech in computer science, or a related field
3-5 years of experience.
Required Knowledge, Skills and Abilities:
Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material, Bootstrap, TypeScript, SCSS, Microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, GitHub Copilot
Azure DevOps, Design Systems, Micro front ends, Data Science
Stakeholder management & excellent communication skills.
Must have skills
React - 3 years
React Native - 3 years
Redux - 1 year
Material UI - 1 year
Typescript - 1 year
Bootstrap - 1 year
Microservices - 2 years
SQL - 1 year
Azure - 1 year
Nice to have skills
.NET Core - 3 years
.NET 8 - 3 years
AWS - 1 year
LINQ - 1 year
Strong Lead DevOps / Infrastructure Engineer Profiles.
Mandatory (Experience 1) – Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
Mandatory (Experience 2) – Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
Mandatory (Experience 3) – Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
Mandatory (Experience 4) – Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
Mandatory (Experience 5) – Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
Mandatory (Experience 6) – Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
Mandatory (Experience 7) – Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, or Elasticsearch.
Mandatory (Experience 8) – Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
Mandatory (Company) – Candidates must be from Good / Well Funded / Early Stage Product-based companies.
Mandatory (Education) – B.E / B.Tech
Job opportunity: Developer – Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview – F2F (in-person) – Technical
2nd round of interview – F2F / Virtual Interview – Technical
3rd round of interview – Virtual Interview – Technical + HR
Job Title / Designation: Developer -Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer – Python Full Stack
We are looking for a Python full stack expert with proven 5+ years of experience developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, with excellent knowledge of DB handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
- Experienced in establishing an optimized CI / CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
Nice to have
- Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
- Knowledge of 93K Semiconductor test platforms
- Good working knowledge of agile management tools like Jira and Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
NOW HIRING · WORLD-CLASS TALENT Backend Tech Lead (Senior Level Engineering Leadership)
Placed by Recruiting Bond on behalf of a Confidential Digital Platform Leader
📍Location: Bengaluru, India (Hybrid / On-Site)
🏢Sector: Technology, Information & Media
👥Company Size: 500 – 1,000 Employees
💼Employment: Full-Time, Permanent
🎯Experience: 6 – 9 Years (Backend Engineering)
🚀 Level: Tech Lead
ABOUT THIS MANDATE
Recruiting Bond has been exclusively retained by one of India's most well-established digital platform organisations — a company operating at the intersection of Technology, Information, and Media — to identify and place a world-class Backend Tech Lead who can drive a transformational engineering agenda at scale.
This is not an ordinary role. The organisation is executing a high-stakes, large-scale modernisation of its backend infrastructure — migrating from legacy monolithic systems to resilient, cloud-native, AI-augmented distributed architectures that serve millions of concurrent users. The person in this seat will be a core pillar of that transformation.
We are looking exclusively for the top 1% — engineers who think in systems, own outcomes, and lead by example.
THE OPPORTUNITY AT A GLANCE
🏗️ Architecture Ownership
Drive system design decisions across the entire backend platform. Shape the future of distributed, fault-tolerant architecture.
🤖 AI-Augmented Engineering
Embed GenAI and LLM tooling directly into the SDLC. Champion automation-first development practices across squads.
🎓 Engineering Leadership
Mentor and grow the next generation of backend engineers. Lead hiring, reviews, and cross-functional technical alignment.
KEY RESPONSIBILITIES
1. Architecture & Platform Modernisation
- Lead the full migration of legacy monolithic systems to a scalable, cloud-native microservices architecture
- Design and own distributed, fault-tolerant backend systems with sub-millisecond SLO targets
- Architect API-first and event-driven platforms using async messaging patterns (Kafka, Pub/Sub, SQS)
- Resolve systemic performance bottlenecks, concurrency conflicts, and scalability ceilings
- Establish backend design standards, coding guidelines, and architectural review processes
2. Distributed Systems Engineering (Production-Grade)
- Design and implement Webhook reliability frameworks with intelligent retry and exponential backoff strategies
- Build idempotent, versioned APIs with enterprise-grade rate limiting and throttling controls
- Implement circuit breakers, bulkheads, and resilience patterns using Resilience4j / Hystrix or equivalents
- Engineer Dead-Letter Queue (DLQ) strategies and event reprocessing pipelines with guaranteed delivery semantics
- Apply Saga orchestration and choreography patterns for distributed transaction integrity
- Execute zero-downtime deployments and canary release strategies with rollback capability
- Design and enforce multi-region disaster recovery and business continuity protocols
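Retry with capped exponential backoff and jitter, named in the webhook-reliability item above, can be sketched as follows (the `send` callable stands in for a hypothetical webhook delivery; function names and defaults are illustrative):

```python
import random

def backoff_delays(base=0.5, cap=30.0, attempts=5, rng=random.random):
    """Capped exponential backoff with full jitter: the i-th delay is
    drawn uniformly from [0, min(cap, base * 2**i))."""
    return [rng() * min(cap, base * (2 ** i)) for i in range(attempts)]

def deliver_with_retries(send, attempts=5):
    """Call `send()` until it succeeds or attempts are exhausted.
    Returns the successful attempt number, or None on final failure
    (at which point a real system would route the event to a DLQ)."""
    delays = backoff_delays(attempts=attempts)
    for attempt, delay in enumerate(delays, start=1):
        if send():
            return attempt
        # In production this would be time.sleep(delay) or a scheduled retry.
    return None

# Simulated endpoint that fails twice, then succeeds.
calls = iter([False, False, True])
print(deliver_with_retries(lambda: next(calls)))  # -> 3
```

Full jitter (randomizing over the whole window rather than adding a small offset) spreads retries out and avoids synchronized thundering herds when many deliveries fail at once.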
3. AI-Driven Engineering Practices
- Champion LLM and GenAI adoption as first-class tooling across the software development lifecycle
- Apply prompt engineering techniques for automated code generation, review, and documentation workflows
- Utilise AI-assisted debugging, root cause analysis, and predictive performance optimisation
- Build automation-first pipelines that reduce toil and accelerate delivery velocity
- Evaluate and integrate emerging AI developer tools into the engineering ecosystem
4. Engineering Leadership & Culture
- Own backend platforms end-to-end with full accountability across development, stability, and performance
- Actively mentor, coach, and elevate engineers at all levels (L3–L6) through structured 1:1s and code reviews
- Drive and lead technical hiring — from designing assessments to final hire decisions
- Partner with Product, Data, DevOps, and Security stakeholders to align engineering with business objectives
- Represent the engineering org in cross-functional roadmap planning and architecture decision reviews
- Foster a culture of technical excellence, psychological safety, and high-velocity delivery
TECHNOLOGY STACK (HANDS-ON PROFICIENCY REQUIRED)
Languages: Java (primary) · Go · Python · Node.js · PHP · Rust
Cloud: AWS · GCP · Azure (Multi-cloud exposure preferred)
Containers: Docker · Kubernetes · Helm · Service Mesh (Istio / Linkerd)
Databases: PostgreSQL · MySQL · MongoDB · Cassandra · Redis · Elasticsearch
Messaging: Apache Kafka · RabbitMQ · AWS SQS/SNS · Google Pub/Sub
Observability: Datadog · Prometheus · Grafana · OpenTelemetry · Jaeger · ELK Stack
CI/CD & IaC: GitHub Actions · Jenkins · ArgoCD · Terraform · Ansible
AI & GenAI: OpenAI / Claude APIs · LangChain · RAG Pipelines · GitHub Copilot · Cursor
QUALIFICATIONS & CANDIDATE PROFILE
Education
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution — CS, IS, ECE, AI/ML streams strongly preferred
- Exceptional real-world engineering track record may be considered in lieu of institution pedigree
Experience
- 6 to 9 years of progressive backend engineering experience with demonstrable ownership and impact
- Proven track record of shipping and scaling production SaaS / Product systems at significant user load
- Exposure to and success within start-up, mid-size, and large-scale product organisations — the full spectrum
- Strong computer science fundamentals: algorithms, data structures, distributed systems theory, OS internals
- Demonstrated career stability — minimum 2 years average tenure per organisation
The Ideal Candidate Exemplifies
- System-level thinking with an ability to hold context across code, architecture, product, and business
- An ownership mindset — no task is 'not my job'; outcomes and quality are personal commitments
- Strong written and verbal communication skills for asynchronous, cross-functional collaboration
- Intellectual curiosity: actively follows engineering trends, contributes to the community (OSS, blogs, talks)
- Bias for automation, observability, and engineering efficiency at every level
- A mentor's instinct — genuine desire to grow others and raise the capability of the team around them
WHY THIS ROLE STANDS APART
✅ Transformational Scope
Lead platform modernisation at scale. Your architectural choices will define systems serving millions of users for years.
✅ AI-Forward Engineering Culture
Be at the forefront of AI-augmented development. This org invests in tools and practices that make great engineers exceptional.
✅ Established, Stable Platform
Join a company with 500–1,000 employees, proven product-market fit, and the resources to execute on a serious technical vision.
✅ Career-Defining Leadership
Operate with strategic influence, direct access to senior leadership, and a clear path toward Principal / Staff / VP Engineering.
HOW TO APPLY
This search is being managed exclusively by Recruiting Bond
Submit your application with an updated resume
Only shortlisted candidates will be contacted. All applications are treated with the strictest confidentiality.
⚡ We move fast — qualified candidates can expect a response within 48–72 business hours.
Recruiting Bond | Bengaluru, Karnataka, India | 2026
Hiring: AWS DevOps Developer
📍 Location: Bangalore
🧑‍💻 Experience: 4–7 Years
📌 Job Summary
We are looking for a skilled AWS DevOps Developer with strong experience in AWS cloud infrastructure, CI/CD automation, containerization, and Infrastructure as Code. The ideal candidate should have hands-on experience building scalable and secure cloud environments.
🛠 Required Technical Skills
☁️ AWS Services
- Amazon EC2
- Amazon S3
- IAM
- VPC
- Amazon EKS
- RDS
- Route 53
- CloudWatch
- AWS Lambda
🔄 DevOps & CI/CD
- Jenkins (Pipelines, Shared Libraries)
- Git / GitHub
- Maven / Build tools
- CI/CD pipeline design & implementation
🐳 Containers & Orchestration
- Docker
- Kubernetes (EKS preferred)
- Helm
🏗 Infrastructure as Code
- Terraform
- Ansible
📊 Monitoring & Logging
- CloudWatch
- Prometheus
- Grafana
📋 Roles & Responsibilities
- Design and implement scalable AWS infrastructure
- Build and maintain CI/CD pipelines
- Deploy containerized applications using Docker & Kubernetes
- Automate infrastructure provisioning using Terraform
- Implement monitoring and alerting solutions
- Ensure security, compliance, and cost optimization
- Troubleshoot production issues and improve system reliability
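To make the "monitoring and alerting" responsibility above concrete, here is a minimal sketch of how a CloudWatch CPU alarm might be assembled for boto3's `put_metric_alarm`; the alarm name, threshold, and period are illustrative choices, not values from this posting:

```python
def cpu_alarm_params(instance_id, threshold=80.0, sns_topic_arn=None):
    """Build kwargs for CloudWatch put_metric_alarm: fire when an EC2
    instance averages above `threshold`% CPU for two 5-minute periods.
    Names and defaults here are illustrative, not mandated values."""
    params = {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
    if sns_topic_arn:  # route the alert to an SNS topic if one is supplied
        params["AlarmActions"] = [sns_topic_arn]
    return params

# With AWS credentials configured, this dict feeds straight into boto3:
#   boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_params("i-0abc123"))
```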
➕ Good to Have
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience with Microservices architecture
- Knowledge of DevSecOps practices
- Experience in Agile methodology

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Lead DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 7-10 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong Lead DevOps / Infrastructure Engineer Profiles.
- Must have 7+ years of hands-on experience working as a DevOps / Infrastructure Engineer.
- Candidate’s current title must be Lead DevOps Engineer (or equivalent Lead role) in the current organization
- Must have minimum 2+ years of team management / technical leadership experience, including mentoring engineers, driving infrastructure decisions, or leading DevOps initiatives.
- Must have strong hands-on experience with Kubernetes (container orchestration) including deployment, scaling, and cluster management.
- Must have experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong scripting and automation experience using Python, Go, Bash, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Must have strong hands-on experience in Observability & Monitoring, CI/CD architecture, and Networking concepts in production environments.
- (Company) – Must be from B2C Product Companies only.
- (Education) – B.E/ B.Tech
Preferred
- Experience working in microservices architecture and event-driven systems.
- Exposure to cloud infrastructure, scalability, reliability, and cost optimization practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- (Environment) – Experience working in high-growth startup or large-scale production environments.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos

Consumer Internet, Technology & Travel and Tourism Platform
Job Details
- Job Title: Senior DevOps Engineer
- Industry: Consumer Internet, Technology & Travel and Tourism Platform
- Function - IT
- Experience Required: 4-7 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Criteria:
- Strong DevOps / Infrastructure Engineer Profiles.
- Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
- Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
- Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
- Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
- Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
- Candidate must demonstrate strong expertise in at least one of the following areas: Databases / Distributed Data Systems, Observability & Monitoring, CI/CD Pipelines, Networking Concepts, Kubernetes / Container Platforms
- Candidates must be from B2C Product-based companies only.
- (Education) – BE / B.Tech or equivalent
Preferred
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- (Skills) – Understanding of programming languages such as Go, Python, or Java.
- Preferred (Environment) – Experience working in high-scale production or fast-growing product startups.
Job Description
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify our infrastructure
- Do what it takes to keep the uptime above 99.99%
- Understand the bigger picture and sail through the ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Strong communication and collaboration skills to break down the silos
Role & Responsibilities
As a DevOps Engineer, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
- Codify infrastructure
- Ensure uptime above 99.99%
- Understand the bigger picture and navigate through ambiguities
- Scale technology considering cost and observability and manage end-to-end processes
- Understand DevOps philosophy and evangelize the principles across the organization
- Demonstrate strong communication and collaboration skills to break down silos
Ideal Candidate
Strong DevOps / Infrastructure Engineer Profiles
Mandatory Requirements:
Experience 1:
Must have 4+ years of hands-on experience working as a DevOps Engineer / Infrastructure Engineer / SRE / DevOps Consultant.
Experience 2:
Must have hands-on experience with Kubernetes and Docker, including deployment, scaling, or containerized application management.
Experience 3:
Must have experience with Infrastructure as Code (IaC) or configuration management tools such as Terraform, Ansible, Chef, or Puppet.
Experience 4:
Must have strong automation and scripting experience using Python, Go, Bash, Shell, or similar scripting languages.
Experience 5:
Must have working experience with distributed databases or data systems such as MongoDB, Redis, Cassandra, Elasticsearch, or Kafka.
Experience 6:
Must demonstrate strong expertise in at least one of the following areas:
- Databases / Distributed Data Systems
- Observability & Monitoring
- CI/CD Pipelines
- Networking Concepts
- Kubernetes / Container Platforms
Company Background:
Candidates must be from B2C product-based companies only.
Education:
BE / B.Tech or equivalent.
Preferred:
- Experience working with microservices or event-driven architectures.
- Exposure to cloud infrastructure, monitoring, reliability, and scalability practices.
- Understanding of programming languages such as Go, Python, or Java.
- Experience working in high-scale production or fast-growing product startups.
Perks, Benefits and Work Culture
We take our work seriously and are proud of the associations we have built along the way. But we also know how to have fun. With a seamless communication structure and a “no cubicle culture,” the people here are extremely approachable. You will have several opportunities to exercise your potential. We break the regular office monotony and believe in a free-flowing work culture. It’s a great place to be, and we are confident you will enjoy working here.

Global Digital Transformation Solutions Provider
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Job Details
- Job Title: SDE-3
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.
Key Responsibilities:
Technical Leadership-
- Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
- Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
- Review code and ensure adherence to best practices, coding standards, and security guidelines.
System Architecture and Design-
- Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
- Own the architecture of core modules and contribute to overall platform scalability and reliability.
- Advocate for and implement microservices architecture, ensuring modularity and reusability.
Problem Solving and Optimization-
- Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
- Optimize database queries and design scalable data storage solutions.
- Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.
Innovation and Continuous Improvement-
- Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
- Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
- Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.
Collaboration and Communication-
- Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
- Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.
Ideal Candidate
- Strong Java Backend Engineer.
- Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
- Must have been SDE-2 for at least 2.5 years
- Hands-on experience with RESTful APIs and microservices architecture
- Strong understanding of distributed systems, multithreading, and async programming
- Experience with relational and NoSQL databases
- Exposure to Kafka/RabbitMQ and Redis/Memcached
- Experience with AWS / GCP / Azure, Docker, and Kubernetes
- Familiar with CI/CD pipelines and modern DevOps practices
- Must be from product companies (B2B SaaS preferred)
- Must have stayed at least 2 years with each previous company
- (Education): B.Tech in Computer Science from Tier 1 / Tier 2 colleges
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
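As a small, hedged example of the scripting responsibility above (the helper is an illustrative sketch, not a prescribed tool), a release-automation script often needs a semantic version-bump step before tagging:

```python
def bump_version(version, part="patch"):
    """Return `version` ("major.minor.patch") with one part incremented.
    A tiny building block for automated release/tagging scripts; the
    function name and interface are our own illustration."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")
```

In an Azure DevOps YAML pipeline, a script step like this would compute the next tag, which later stages use for artifact naming and release notes.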
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
Job Description: Python Automation Engineer
- Location: Bangalore (Office-based)
- Experience: 1–2 Years
- Joining: Immediate to 30 Days
Role Overview
We are looking for a Python Automation Engineer who combines strong programming skills with hands-on automation expertise. This role involves developing automation scripts, designing automation frameworks, and contributing independently to automation solutions, with leads delegating tasks and solution direction. The ideal candidate is not a novice: they have solid real-world Python experience and are comfortable working across API automation, automation tooling, and CI/CD-driven environments.
Key Responsibilities
- Design, develop, and maintain automation scripts and reusable automation frameworks using Python
- Build and enhance API automation for REST-based services and common backend frameworks
- Independently own automation tasks and deliver solutions with minimal supervision
- Collaborate with leads and engineering teams to understand automation requirements
- Maintain clean, modular, and scalable automation code
- Occasionally review automation code written by other team members
- Integrate automation suites with CI/CD pipelines
- Package and ship automation tools/frameworks using containerization
Required Skills & Qualifications
Python (Core Requirement) – strong, in-depth hands-on experience, including:
- Object-Oriented Programming (OOP) and modular design
- Writing reusable libraries and frameworks
- Exception handling, logging, and debugging
- Asynchronous concepts and performance-aware coding
- Unit testing and test automation practices
- Code quality, readability, and maintainability
API Automation
- Strong experience automating REST APIs
- Hands-on with common Python API libraries (e.g., requests, httpx, or equivalent)
- Understanding of API request/response handling, validations, and workflows
- Familiarity with different backend frameworks (e.g., FastAPI)
DevOps & Engineering Practices (Must-Have)
- Strong knowledge of Git
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, or similar)
- Ability to integrate automation suites into pipelines
- Hands-on experience with Docker for shipping automation tools/frameworks
Good-to-Have Skills
- UI automation using Selenium (Page Object Model, cross-browser testing, headless execution)
- Exposure to Playwright for UI automation
- Basic working knowledge of Java and/or JavaScript (reading, writing small scripts, debugging)
- Understanding of API authentication, retries, mocking, and related best practices
Domain Exposure
- Experience or interest in SaaS platforms
- Exposure to AI / ML-based platforms is a plus
What We’re Looking For
- A strong engineering mindset, not just tool usage
- Someone who can build automation systems, not only execute test cases
- Comfortable working independently while aligning with technical leads
- Passion for clean code, scalable automation, and continuous improvement
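As a hedged illustration of the API-automation skills this role asks for (the helper and its names are our own sketch, usable with any `requests`/`httpx`-style response object, not a framework this employer prescribes):

```python
def check_json_response(resp, expected_status=200, required_keys=()):
    """Validate a response object the way a small API-automation framework
    might: assert the status code, then assert required JSON keys exist.
    `resp` is anything exposing .status_code and .json(), e.g. a
    `requests.Response`. Returns the parsed body for chained checks."""
    assert resp.status_code == expected_status, (
        f"expected status {expected_status}, got {resp.status_code}")
    body = resp.json()
    missing = [k for k in required_keys if k not in body]
    assert not missing, f"missing keys: {missing}"
    return body

# Typical use against a live endpoint (URL is a placeholder):
#   resp = requests.get("https://api.example.com/users/1", timeout=10)
#   body = check_json_response(resp, required_keys=("id", "name"))
```

Keeping validation in one reusable helper like this is what separates an automation framework from one-off test scripts.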
JOB DETAILS:
- Job Title: Senior Devops Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based company or a scalable app-based start-up, with experience handling large-scale production traffic.
2. Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
3. Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
4. Candidate must have experience in database migration from scratch
5. Must have a firm hold on the container orchestration tool Kubernetes
6. Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
7. Understanding of programming languages like Go, Python, and Java
8. Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
9. Working experience on Cloud platform - AWS
10. Candidate should have a minimum of 1.5 years' stability per organization, and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
What’s there for you?
Company’s team handles everything – infra, tooling, and a bunch of self-managed databases:
● 150+ microservices with event-driven architecture across different tech stacks (Golang / Java / Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployments to production
● Self-managed infra supporting
● 100% OSS
JOB DETAILS:
- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based company or a scalable app-based start-up, with experience handling large-scale production traffic.
2. Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant
3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)
4. Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
5. Candidate must have first-hand experience in database migration from scratch
6. Must have a firm hold on the container orchestration tool Kubernetes
7. Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
8. Understanding of programming languages like Go, Python, and Java
9. Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
10. Working experience on Cloud platform -AWS
11. Candidate should have a minimum of 1.5 years' stability per organization, and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Understanding of programming languages like Go, Python, and Java
● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
What’s there for you?
Company’s team handles everything – infra, tooling, and a bunch of self-managed databases:
● 150+ microservices with event-driven architecture across different tech stacks (Golang / Java / Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployment to production
● Self-managed infra supporting
● 100% OSS
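For scale intuition, the headline figures above imply the following daily volumes (rough back-of-envelope arithmetic, assuming the rates are sustained around the clock):

```python
# Back-of-envelope scale math from the figures above
# (assumes sustained rates; real traffic is bursty).
events_per_sec = 20_000
requests_per_sec = 100_000

events_per_day = events_per_sec * 86_400      # seconds in a day
requests_per_day = requests_per_sec * 86_400
print(f"{events_per_day:,} events/day, {requests_per_day:,} requests/day")
```

That is on the order of 1.7 billion Kafka events and 8.6 billion gateway requests per day, which frames why observability and cost-aware scaling appear in the responsibilities.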
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based company or a scalable app-based startup, with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet. Add-on: Prometheus, Grafana, etc.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on a cloud platform - AWS.
9. Candidate should have a minimum of 1.5 years' tenure per organization, and a clear reason for relocation.
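The scripting requirement above (Python/Go/Shell) typically means small operational tools like log triage. A minimal stdlib-only sketch, with a purely hypothetical log format chosen for illustration:

```python
# Minimal log-triage sketch of the kind the scripting requirement implies.
# The log format and service names here are invented for illustration.
import re
from collections import Counter

LOG_LINES = [
    "2024-05-01T10:00:01 ERROR payment-svc timeout",
    "2024-05-01T10:00:02 INFO payment-svc ok",
    "2024-05-01T10:00:03 ERROR search-svc 502",
    "2024-05-01T10:00:04 ERROR payment-svc timeout",
]

def errors_by_service(lines):
    """Count ERROR lines per service (the field right after the level)."""
    counts = Counter()
    for line in lines:
        m = re.match(r"\S+\s+ERROR\s+(\S+)", line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(errors_by_service(LOG_LINES))  # Counter({'payment-svc': 2, 'search-svc': 1})
```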
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is a fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Communicate and collaborate across teams to break down silos
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understands the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet. Add-on: Prometheus, Grafana, etc.
- Problem-solving attitude, and ability to write scripts using any scripting language.
- Understanding of programming languages like Go, Python, and Java.
- Basic understanding of databases and middleware like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
- Should be able to take ownership of tasks, and must be responsible.
- Good communication skills
1. Candidate must be from a product-based company with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet. Add-on: Prometheus, Grafana, etc.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on a cloud platform - AWS.
9. Candidate should have a minimum of 1.5 years' tenure per organization, and a clear reason for relocation.
1. Candidate must be from a product-based company with experience handling large-scale production traffic.
2. Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant
3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members).
4. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs.
5. Candidate must have hands-on experience with database migrations from scratch.
6. Must have a firm hold on the container orchestration tool Kubernetes
7. Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
8. Understanding of programming languages like Go, Python, and Java.
9. Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
10. Working experience on a cloud platform - AWS.
11. Candidate should have a minimum of 1.5 years' tenure per organization, and a clear reason for relocation.
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
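The ingestion and pipeline responsibilities above can be pictured as composable stages. A toy, generator-based pipeline sketch (stdlib only; the record shape is invented for illustration, whereas production pipelines in this role would sit on Spark/Kafka as the JD describes):

```python
# Toy generator-based pipeline: ingest -> clean -> aggregate.
# Record fields ("category", "amount") are invented for illustration.
from collections import defaultdict

def ingest(rows):
    """Yield raw records one at a time, simulating streaming ingestion."""
    yield from rows

def clean(records):
    """Drop records missing an amount; normalize category case."""
    for r in records:
        if r.get("amount") is None:
            continue
        yield {**r, "category": r["category"].lower()}

def aggregate(records):
    """Sum amounts per category."""
    totals = defaultdict(float)
    for r in records:
        totals[r["category"]] += r["amount"]
    return dict(totals)

raw = [
    {"category": "Books", "amount": 12.5},
    {"category": "books", "amount": 7.5},
    {"category": "toys", "amount": None},
]
print(aggregate(clean(ingest(raw))))  # {'books': 20.0}
```

The design point is that each stage is independently testable and streams records lazily, which is the same property that frameworks like Spark formalize at scale.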
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Job Title: Senior DevOps Engineer
Experience: 8+ Years
Joining: Immediate Joiner
Location: Bangalore (Onsite/Hybrid – as applicable)
Job Description:
We are looking for a highly experienced Senior DevOps Engineer with 8+ years of hands-on experience to join our team immediately. The ideal candidate will be responsible for designing, implementing, and managing scalable, secure, and highly available infrastructure.
Key Responsibilities:
- Design, build, and maintain CI/CD pipelines for application deployment
- Manage cloud infrastructure (AWS/Azure/GCP) and optimize cost and performance
- Automate infrastructure using Infrastructure as Code (Terraform/CloudFormation)
- Manage containerized applications using Docker and Kubernetes
- Monitor system performance, availability, and security
- Collaborate closely with development, QA, and security teams
- Troubleshoot production issues and perform root cause analysis
- Ensure high availability, disaster recovery, and backup strategies
Required Skills:
- 8+ years of experience in DevOps / Site Reliability Engineering
- Strong expertise in Linux/Unix administration
- Hands-on experience with AWS / Azure / GCP
- CI/CD tools: Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Containers & orchestration: Docker, Kubernetes
- Infrastructure as Code: Terraform, CloudFormation, Ansible
- Monitoring tools: Prometheus, Grafana, ELK, CloudWatch
- Strong scripting skills (Bash, Python)
- Experience with security best practices and compliance
Good to Have:
- Experience with microservices architecture
- Knowledge of DevSecOps practices
- Cloud certifications (AWS/Azure/GCP)
- Experience in high-traffic production environments
Why Join Us:
- Opportunity to work on scalable, enterprise-grade systems
- Collaborative and growth-oriented work environment
- Competitive compensation and benefits
- Immediate joiners preferred.

Global digital transformation solutions provider.
Job Details
- Job Title: DevOps and SRE - Technical Project Manager
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 12-15 years
- Employment Type: Full Time
- Job Location: Bangalore, Chennai, Coimbatore, Hosur & Hyderabad
- CTC Range: Best in Industry
Job Description
Company’s DevOps Practice is seeking a highly skilled DevOps and SRE Technical Project Manager to lead large-scale transformation programs for enterprise customers. The ideal candidate will bring deep expertise in DevOps and Site Reliability Engineering (SRE), combined with strong program management, stakeholder leadership, and the ability to drive end-to-end execution of complex initiatives.
Key Responsibilities
- Lead the planning, execution, and successful delivery of DevOps and SRE transformation programs for enterprise clients, including full oversight of project budgets, financials, and margins.
- Partner with senior stakeholders to define program objectives, roadmaps, milestones, and success metrics aligned with business and technology goals.
- Develop and implement actionable strategies to optimize development, deployment, release management, observability, and operational workflows across client environments.
- Provide technical leadership and strategic guidance to cross-functional engineering teams, ensuring alignment with industry standards, best practices, and company delivery methodologies.
- Identify risks, dependencies, and blockers across programs, and proactively implement mitigation and contingency plans.
- Monitor program performance, KPIs, and financial health; drive corrective actions and margin optimization where necessary.
- Facilitate strong communication, collaboration, and transparency across engineering, product, architecture, and leadership teams.
- Deliver periodic program updates to internal and client stakeholders, highlighting progress, risks, challenges, and improvement opportunities.
- Champion a culture of continuous improvement, operational excellence, and innovation by encouraging adoption of emerging DevOps, SRE, automation, and cloud-native practices.
- Support GitHub migration initiatives, including planning, execution, troubleshooting, and governance setup for repository and workflow migrations.
Requirements
- Bachelor’s degree in Computer Science, Engineering, Business Administration, or a related technical discipline.
- 15+ years of IT experience, including at least 5 years in a managerial or program leadership role.
- Proven experience leading large-scale DevOps and SRE transformation programs with measurable business impact.
- Strong program management expertise, including planning, execution oversight, risk management, and financial governance.
- Solid understanding of Agile methodologies (Scrum, Kanban) and modern software development practices.
- Deep hands-on knowledge of DevOps principles, CI/CD pipelines, automation frameworks, Infrastructure as Code (IaC), and cloud-native tooling.
- Familiarity with SRE practices such as service reliability, observability, SLIs/SLOs, incident management, and performance optimization.
- Experience with GitHub migration projects—including repository analysis, migration planning, tooling adoption, and workflow modernization.
- Excellent communication, stakeholder management, and interpersonal skills with the ability to influence and lead cross-functional teams.
- Strong analytical, organizational, and problem-solving skills with a results-oriented mindset.
- Preferred certifications: PMP, PgMP, ITIL, Agile/Scrum Master, or relevant technical certifications.
Skills: DevOps Tools, Cloud Infrastructure, Team Management
Must-Haves
DevOps principles (5+ years), SRE practices (5+ years), GitHub migration (3+ years), CI/CD pipelines (5+ years), Agile methodologies (5+ years)
Notice period - 0 to 15 days only
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Exp: 7-10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
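The p95 and p99 monitoring mentioned above boils down to percentile math over latency samples. A stdlib-only sketch using the nearest-rank method (illustrative; real stacks derive these from histograms in tools like Prometheus):

```python
# Illustrative p95/p99 computation over latency samples (milliseconds).
# Nearest-rank percentile: smallest value with at least p% of samples <= it.

def percentile(samples, p):
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(len * p / 100)
    return ordered[int(rank) - 1]

latencies = [12, 15, 11, 90, 14, 13, 250, 16, 12, 13]
print("p50:", percentile(latencies, 50))
print("p95:", percentile(latencies, 95))
```

Note how a single 250 ms outlier dominates p95 while leaving p50 untouched, which is exactly why SRE practice tracks tail percentiles rather than averages.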
If interested, kindly share your updated resume at 82008 31681.
Job Description :
We are looking for an experienced DevOps Engineer with strong expertise in Azure DevOps, CI/CD pipelines, and PowerShell scripting, who has worked extensively with .NET-based applications in a Windows environment.
Mandatory Skills
- Strong hands-on experience with Azure DevOps
- GIT version control
- CI/CD pipelines (Classic & YAML)
- Excellent experience in PowerShell scripting
- Experience working with .NET-based applications
- Understanding of Solutions, Project files, MSBuild
- Experience using Visual Studio / MSBuild tools
- Strong experience in Windows environment
- End-to-end experience in build, release, and deployment pipelines
Good to Have Skills
- Terraform (optional / good to have)
- Experience with JFrog Artifactory
- SonarQube integration knowledge
JD :
• Master’s degree in Computer Science, Computational Sciences, Data Science, Machine Learning, Statistics, Mathematics, or any quantitative field
• Expertise with object-oriented programming (Python, C++)
• Strong expertise in Python libraries like NumPy, Pandas, PyTorch, TensorFlow, and Scikit-learn
• Proven experience in designing and deploying ML systems on cloud platforms (AWS, GCP, or Azure).
• Hands-on experience with MLOps frameworks, model deployment pipelines, and model monitoring tools.
• Track record of scaling machine learning solutions from prototype to production.
• Experience building scalable ML systems in fast-paced, collaborative environments.
• Working knowledge of adversarial machine learning techniques and their mitigation
• Agile and Waterfall methodologies.
• Personally invested in continuous improvement and innovation.
• Motivated, self-directed individual who works well with minimal supervision.

Global digital transformation solutions provider.
Role Proficiency:
Leverage expertise in a technology area (e.g., Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.
Knowledge Examples:
- Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
- Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g., Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, libraries, and best practices for one technology stack, configuration parameters for successful deployment, and configuration parameters for high performance within one technology stack
- Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
- Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g., SOA, N-Tier, EDA), and perspectives (e.g., TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, source and external systems, non-functional requirements, data architecture, deployment architecture, and architecture governance
- Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documentation of design using tools like EA
- Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and Traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
- Project Management Tools and Techniques: Demonstrates working knowledge of the project management process (such as project scoping, requirements management, change management, risk management, quality assurance, disaster management, etc.) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.)
- Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics like utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
- Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g., TCP estimation model) and company-specific estimation templates
- Knowledge Management: Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
- Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
- Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for non-functional requirements, analysis for functional and non-functional requirements, analysis tools (such as functional flow diagrams, activity diagrams, blueprints, storyboards), techniques (business analysis, process mapping, etc.), and requirements management tools (e.g., MS Excel), plus basic knowledge of functional requirements gathering. Specifically, identify architectural concerns and document them as part of IT requirements, including NFRs
- Solution Structuring: Demonstrates working knowledge of service offering and products
Additional Comments:
Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:
• Excellent technical background and end-to-end architecture skills to design and implement scalable, maintainable, and high-performing systems, integrating front-end technologies with back-end services.
• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.
• Expertise in cloud-based applications on Azure, leveraging key Azure services.
• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.
• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.
• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.
• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.
• Excellent communication skills
• Mentor team members providing guidance on technical challenges and helping them grow their skill set.
• Good to have experience in GCP and retail domain.
Skills: DevOps, Azure, Java
Must-Haves
Java (12+ years), React, Azure, DevOps, Cloud Architecture
Strong Java architecture and design experience.
Expertise in Azure cloud services.
Hands-on experience with React and front-end integration.
Proven track record in DevOps practices (CI/CD, automation).
Notice period - 0 to 15 days only
Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum
Excellent communication and leadership skills.
ROLES AND RESPONSIBILITIES:
Standardization and Governance:
- Establishing and maintaining project management standards, processes, and methodologies.
- Ensuring consistent application of project management policies and procedures.
- Implementing and managing project governance processes.
Resource Management:
- Facilitating the sharing of resources, tools, and methodologies across projects.
- Planning and allocating resources effectively.
- Managing resource capacity and forecasting future needs.
Communication and Reporting:
- Ensuring effective communication and information flow among project teams and stakeholders.
- Monitoring project progress and reporting on performance.
- Communicating strategic work progress, including risks and benefits.
Project Portfolio Management:
- Supporting strategic decision-making by aligning projects with organizational goals.
- Selecting and prioritizing projects based on business objectives.
- Managing project portfolios and ensuring efficient resource allocation across projects.
Process Improvement:
- Identifying and implementing industry best practices into workflows.
- Improving project management processes and methodologies.
- Optimizing project delivery and resource utilization.
Training and Support:
- Providing training and support to project managers and team members.
- Offering project management tools, best practices, and reporting templates.
Other Responsibilities:
- Managing documentation of project history for future reference.
- Coaching project teams on implementing project management steps.
- Analysing financial data and managing project costs.
- Interfacing with functional units (Domain, Delivery, Support, DevOps, HR, etc.).
- Advising and supporting senior management.
IDEAL CANDIDATE:
- 3+ years of proven experience in Project Management roles with strong exposure to PMO processes, standards, and governance frameworks.
- Demonstrated ability to manage project status tracking, risk assessments, budgeting, variance analysis, and defect tracking across multiple projects.
- Proficient in Project Planning and Scheduling using tools like MS Project and Advanced Excel (e.g., Gantt charts, pivot tables, macros).
- Experienced in developing project dashboards, reports, and executive summaries for senior management and stakeholders.
- Active participant in Agile environments, attending and contributing to Scrum calls, sprint planning, and retrospectives.
- Holds a Bachelor’s degree in a relevant field (e.g., Engineering, Business, IT, etc.).
- Preferably familiar with Jira, Azure DevOps, and Power BI for tracking and visualization of project data.
- Exposure to working in product-based companies or fast-paced, innovation-driven environments is a strong advantage.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
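A minimal example of the kind of system-check automation the bullets above describe; the path and threshold are illustrative defaults, not project standards:

```python
# Minimal disk-usage check of the kind described above.
# Path "/" and the 90% threshold are illustrative assumptions.
import shutil

def disk_usage_percent(path: str = "/") -> float:
    """Return used disk space at `path` as a percentage of capacity."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disk(path: str = "/", threshold: float = 90.0) -> str:
    """Return an OK/ALERT status line suitable for a monitoring pipeline."""
    pct = disk_usage_percent(path)
    status = "OK" if pct < threshold else "ALERT"
    return f"{status}: {path} at {pct:.1f}% (threshold {threshold:.0f}%)"

if __name__ == "__main__":
    print(check_disk("/"))
```

In practice a script like this would be scheduled (cron, systemd timer) or wrapped as a monitoring probe, with the status line feeding an alerting system.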
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics.
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications such as AWS Certified Solutions Architect, Certified Kubernetes Administrator (CKA), or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can contact us directly: Nine three one six one two zero one three two
Job Description:
1. Cloud experience (any cloud is acceptable, though AWS is preferred; for a non-AWS cloud, the experience should reflect familiarity with that cloud's common services)
2. Good grasp of scripting (Linux shells such as bash/sh/zsh required; Windows scripting nice to have)
3. Basic knowledge of Python, Java, or JavaScript (Python preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CI/CD
8. Docker/containers (Kubernetes and Terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. Database experience with basic queries
12. Understanding of application performance analysis
13. Familiarity with data pipelines (nice to have)
14. Snowflake querying knowledge (nice to have)
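The "basic DB queries" requirement typically means being able to write simple filtered aggregations. A self-contained Python sketch using the standard-library sqlite3 module is below; the table, columns, and data are invented for the example and are not from this posting.

```python
# Illustrative sketch of basic SQL querying using Python's built-in
# sqlite3 module. The deployments table and its contents are
# hypothetical example data.
import sqlite3

def count_failed_deployments(rows):
    """Load (service, status) rows into an in-memory DB and count failures per service."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE deployments (service TEXT, status TEXT)")
    cur.executemany("INSERT INTO deployments VALUES (?, ?)", rows)
    cur.execute(
        "SELECT service, COUNT(*) FROM deployments "
        "WHERE status = 'failed' GROUP BY service ORDER BY service"
    )
    result = cur.fetchall()
    conn.close()
    return result

if __name__ == "__main__":
    rows = [("api", "success"), ("worker", "failed"), ("api", "failed")]
    print(count_failed_deployments(rows))  # [('api', 1), ('worker', 1)]
```

The same WHERE / GROUP BY pattern carries over to Postgres, MySQL, or Snowflake with minor dialect differences.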
The person should be able to:
Monitor systems for issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed
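"Implement automated systems to troubleshoot and resolve issues" usually means wiring a health check to a remediation action with retries and backoff. A hedged Python sketch follows; check() and remediate() are hypothetical stand-ins for real probes (an HTTP health endpoint, a service restart, etc.), not anything specified in this posting.

```python
# Sketch of an automated detect-and-remediate loop with exponential
# backoff. check() and remediate() are hypothetical callables standing
# in for a real health probe and a real recovery action.
import time

def auto_remediate(check, remediate, attempts=3, delay=0.01):
    """Run check(); on failure, call remediate() and retry with backoff."""
    for attempt in range(attempts):
        if check():
            return True
        remediate()
        time.sleep(delay * (2 ** attempt))  # exponential backoff
    return check()  # final verification after the last remediation

if __name__ == "__main__":
    state = {"healthy": False}
    result = auto_remediate(
        check=lambda: state["healthy"],
        remediate=lambda: state.update(healthy=True),
    )
    print(result)  # True once remediation succeeds
```

Production versions add alerting on exhaustion and guard against remediation loops (e.g., circuit breakers), but the detect-act-verify cycle is the core.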