
We're Hiring: DevOps Tech Lead with 7-9 Years of Experience!
Are you a seasoned DevOps professional with a passion for cloud technologies and automation? We have an exciting opportunity for a DevOps Tech Lead to join our dynamic team at our Gurgaon office.
ZoomOps Technology Solutions Private Limited
Location: Gurgaon
Full-time position
Key Skills & Requirements:
• 7-9 years of hands-on experience in DevOps roles
• Proficiency in cloud platforms like AWS, GCP, and Azure
• Strong background in solution architecture
• Expertise in writing automation scripts using Python and Bash
• Ability to manage IaC and configuration management tools like Terraform, Ansible, Pulumi, etc.
Responsibilities:
• Lead and mentor the DevOps team, driving innovation and best practices
• Design and implement robust CI/CD pipelines for seamless software delivery
• Architect and optimize cloud infrastructure for scalability and efficiency
• Automate manual processes to enhance system reliability and performance
• Collaborate with cross-functional teams to drive continuous improvement
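As an illustrative aside on the "automate manual processes" responsibility (not part of the role description itself), a retry helper with exponential backoff is the kind of small Python building block such automation scripts lean on; `flaky_step` below is a hypothetical stand-in for any transient operational call:

```python
import time

def retry(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry fn with exponential backoff; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))  # wait 1s, 2s, 4s, ... between tries

# Hypothetical usage: wrap a flaky deployment or API step.
# result = retry(flaky_step, attempts=5)
```

The `sleep` parameter is injected so tests can run without real delays; production callers simply use the default.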
Join us to work on exciting projects and make a significant impact in the tech space!
Apply now and take the next step in your DevOps career!

Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Kindly apply at https://wohlig.keka.com/careers/jobdetails/54566
Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub, GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters on AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement, and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in the on-call rotation and provide support for production issues.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 3+ years of experience in DevOps engineering with a focus on Linux, GitHub, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
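As an illustrative aside on the monitoring tools listed above (Datadog/ELK), a first-pass health signal is often computed from raw logs in plain Python before anything reaches a dashboard. This sketch assumes a simple "LEVEL message" log format, which is an assumption for illustration, not the team's actual schema:

```python
def error_rate(log_lines):
    """Fraction of log lines whose leading level token is ERROR ('LEVEL message')."""
    total = 0
    errors = 0
    for line in log_lines:
        total += 1
        if line.split(" ", 1)[0] == "ERROR":
            errors += 1
    return errors / total if total else 0.0
```

In practice a metric like this would be shipped as a gauge to the monitoring backend and alerted on when it crosses a threshold.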
Job Title: AWS DevOps Engineer – Manager Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
Key Deliverables (Essential functions & responsibilities of the job):
• Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
• Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
• Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
• Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
• Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
• Enforce cloud security best practices: IAM, VPC setups, encryption, certificate management, and compliance controls.
• Work closely with development teams to improve application reliability, scalability, and performance.
• Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
• Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
• Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
Knowledge, Skills and Abilities:
• 7+ years of hands-on AWS DevOps experience, especially with middleware services.
• Strong expertise in MongoDB Atlas or other cloud MongoDB services.
• Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
• Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
• Excellent scripting skills in Python, Bash, or PowerShell.
• Experience in containerization and orchestration: Docker, EKS, ECS.
• Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
• Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
• Ability to solve complex problems and thrive in a fast-paced environment.
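To give a flavour of the MongoDB sharding duties mentioned above, the toy sketch below mimics hashed sharding to sanity-check how evenly a candidate shard key spreads documents across shards. MongoDB's real hash function differs; this is only a back-of-the-envelope check, not how Atlas itself routes documents:

```python
import hashlib

def shard_for(key, n_shards):
    """Map a shard-key value to a shard index via a stable hash (hashed-sharding sketch)."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_shards

def distribution(keys, n_shards):
    """Count how many keys land on each shard."""
    counts = [0] * n_shards
    for key in keys:
        counts[shard_for(key, n_shards)] += 1
    return counts
```

A roughly uniform `distribution` over sample keys is the property a good hashed shard key should exhibit; a monotonically increasing key hashed this way avoids the hot-shard problem that range sharding would have.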
Preferred Qualifications
• AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
• MongoDB Certified DBA or Developer.
• Experience with serverless services like AWS Lambda and Step Functions.
• Exposure to multi-cloud or hybrid cloud environments.
Mail an updated resume with current salary to:
Email: jobs[at]glansolutions[dot]com
Satish; 88O 27 49 743
Google search: Glan management consultancy
• Good understanding of how the web works
• Experience with at least one language like Java, Python, etc.
• Good with shell scripting
• Experience with *nix-based operating systems
• Experience with Kubernetes (k8s) and containers
• Fairly good understanding of AWS/GCP/Azure
• Troubleshoot and fix outages and performance issues in the infrastructure stack
• Identify gaps and design automation tools for all feasible functions in the infrastructure
• Good verbal and written communication skills
• Drive the team's SLAs/SLOs
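The SLA/SLO item above can be illustrated with a small calculation: an availability SLO translates directly into an error budget of allowed downtime per period. A minimal sketch (the 30-day period is an assumed default, not something the posting specifies):

```python
def error_budget_minutes(slo, period_minutes=30 * 24 * 60):
    """Downtime (minutes) allowed per period for an availability SLO, e.g. 0.999."""
    return period_minutes * (1 - slo)

def budget_remaining(slo, downtime_minutes, period_minutes=30 * 24 * 60):
    """Error budget left after the downtime already spent this period."""
    return error_budget_minutes(slo, period_minutes) - downtime_minutes
```

For a 99.9% SLO over 30 days the budget works out to about 43.2 minutes; once `budget_remaining` nears zero, teams typically slow feature rollouts in favour of reliability work.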
Benefits
This is an opportunity to work on a fairly complex set of systems and improve them. You will get a chance to learn things like "how to think about code simplicity", "how to write for maintainability", and several other things.
• Comprehensive health insurance policy.
• Flexible working hours and a very friendly work environment.
• Flexibility to work either in the office (post-Covid) or remotely.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment, and IaC setups.
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/mentor other infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits.
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points: hands-on experience with Nginx, Postgres, Postfix, Redis, or Mongo systems.
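As a hedged illustration of the security-hardening requirement above, one routine audit step is verifying that HTTP responses carry common hardening headers. The required set below is a generic example, not HappyFox's actual policy:

```python
# Example hardening headers often checked in security audits (illustrative set).
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(headers):
    """Return the required hardening headers absent from a response, case-insensitively."""
    present = {name.title() for name in headers}
    return sorted(h for h in REQUIRED_HEADERS if h not in present)
```

A script like this can run against staging after each deploy and fail the pipeline when the returned list is non-empty.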
Position: Cloud and Infrastructure Automation Consultant
Location: India (Pan India) - Work from Home
The position:
This exciting role in Ashnik's consulting team brings a great opportunity to design and deploy automation solutions for Ashnik's enterprise customers spread across SEA and India. This role takes the lead in consulting with customers on the automation of cloud and datacentre-based resources. You will work hands-on with your team, focusing on infrastructure solutions and automating infrastructure deployments that are secure and compliant. You will provide implementation oversight of solutions to overcome technology and business challenges.
Responsibilities:
• Lead consultative discussions to identify customers' challenges and suggest the right-fit open source tools
• Independently determine the needs of the customer and create solution frameworks
• Design and develop moderately complex software solutions to meet needs
• Use a process-driven approach in designing and developing solutions
• Create consulting work packages and detailed SOWs, and assist the sales team in positioning them to enterprise customers
• Be responsible for implementation of automation recipes (Ansible/Chef) and scripts (Ruby, PowerShell, Python) as part of an automated installation/deployment process
Experience and skills required :
• 8 to 10 years of experience in IT infrastructure
• Proven technical skill in designing and delivering enterprise-level solutions involving the integration of complex technologies
• 6+ years of experience with RHEL/Windows system automation
• 4+ years of experience using Python and/or Bash scripting to solve and automate common system tasks
• Strong understanding and knowledge of networking architecture
• Experience with Sentinel Policy as Code
• Strong understanding of AWS and Azure infrastructure
• Experience deploying and utilizing automation tools such as Terraform, CloudFormation, CI/CD pipelines, Jenkins, GitHub Actions
• Experience with HashiCorp Configuration Language (HCL) for module & policy development
• Knowledge of cloud tools including CloudFormation, CloudWatch, Control Tower, CloudTrail, and IAM is desirable
This role requires a high degree of self-initiative, working with diversified teams, and working with customers spread across the Southeast Asia and India region. It also requires you to be proactive in communicating with customers and internal teams about industry trends and technology developments, and in creating thought leadership.
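To give a flavour of the automation recipes mentioned above, configuration management tools like Ansible and Chef are built around idempotent primitives (Ansible's lineinfile module is a classic example). A minimal Python sketch of the same idea, returning whether anything actually changed:

```python
def ensure_line(text, line):
    """Idempotently ensure `line` is present in `text`, like Ansible's lineinfile.

    Returns (new_text, changed) so a caller can report whether state changed."""
    lines = text.splitlines()
    if line in lines:
        return text, False
    lines.append(line)
    return "\n".join(lines) + "\n", True
```

The `changed` flag mirrors how configuration management reports "changed" vs "ok": running the recipe twice must leave the system in the same state.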
About Us
Ashnik is a leading enterprise open-source solutions company in Southeast Asia and India, enabling organizations to adopt open source for their digital transformation goals. Founded in 2009, it offers a full-fledged Open-Source Marketplace, Solutions, and Services: Consulting, Managed, Technical, and Training. Over 200 leading enterprises so far have leveraged Ashnik's offerings in the space of database platforms, DevOps & microservices, Kubernetes, cloud, and analytics.
As a team culture, Ashnik is a family for its team members. Each member brings a different perspective, new ideas, and a diverse background. Yet together we all strive for one goal: to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration. Through an open platform of idea exchange, we create a vibrant environment for growth and excellence.
Package: up to 20L
Experience: 8 years
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads
- Will be the client point of contact for high-priority technical issues and new requirements
- Should act as tech lead, guiding and mentoring the junior members of the team
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, deploy, and manage production workloads, including applications on EC2 instances, APIs on Lambda functions, and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
Qualifications
- Prefer at least 5+ years of IT experience implementing enterprise applications
- Should be AWS Solutions Architect Associate certified
- Must have at least 3+ years of experience working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route 53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances and types, and of deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience in on-prem to AWS cloud workload migration
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimisation
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and also guide the team wherever needed
- Creating backups and managing disaster recovery
- Experience in infrastructure-as-code automation using scripts & tools like CloudFormation and Terraform
- Any exposure to creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK, and deploying applications on AWS
- Good understanding of containerisation technologies like Docker, Kubernetes, etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
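The backup and disaster-recovery item above usually involves a retention policy. The sketch below implements a simplified grandfather-father-son style rotation (keep recent dailies plus one backup per ISO week); the daily and weekly windows are illustrative assumptions, not this role's actual policy:

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, daily=7, weekly=4):
    """Pick backups to retain: the last `daily` days, plus the newest backup of
    each ISO week for up to `weekly` weeks (simplified GFS rotation sketch)."""
    keep = set()
    for d in sorted(backup_dates, reverse=True):
        if (today - d).days < daily:
            keep.add(d)
    weeks_seen = set()
    for d in sorted(backup_dates, reverse=True):
        week = d.isocalendar()[:2]  # (ISO year, ISO week)
        if week not in weeks_seen and len(weeks_seen) < weekly:
            weeks_seen.add(week)
            keep.add(d)
    return keep
```

Everything not returned by `backups_to_keep` is a candidate for pruning; real policies add monthly/yearly tiers on top of this.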
About RaRa Delivery
Not just a delivery company…
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on "one-to-one" deliveries, the company has developed proprietary, real-time batching tech to do "many-to-many" deliveries within a few hours. RaRa is already in partnership with some of the top e-commerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan, and many more.
We are a distributed team, with the company headquartered in Singapore, core operations in Indonesia, and the technology team based out of India.
Future of eCommerce Logistics.
- Data-driven logistics company that is bringing a same-day delivery revolution to Indonesia
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
- Build and maintain CI/CD tools and pipelines.
- Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
- Continuously improve code quality, product execution, and customer delight.
- Communicate, collaborate and work effectively across distributed teams in a global environment.
- Strengthen teams across the product by sharing your knowledge base
- Contribute to improving team relatedness, and help build a culture of camaraderie.
- Continuously refactor applications to ensure high-quality design
- Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
- Excellent Bash and scripting fundamentals, and hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
- Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Basic understanding of cluster orchestrators and schedulers (Kubernetes)
- Deep knowledge of Linux as a production environment, container technologies (e.g. Docker), Infrastructure as Code such as Terraform, and K8s administration at large scale.
- Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
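As a small illustration of the load-balancing knowledge asked for above, the simplest scheduling policy is round-robin over a backend pool. A minimal sketch (real balancers add health checks, weights, and connection awareness on top of this):

```python
import itertools

class RoundRobin:
    """Minimal round-robin balancer over a fixed backend pool (illustrative)."""

    def __init__(self, backends):
        if not backends:
            raise ValueError("need at least one backend")
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the next backend in rotation."""
        return next(self._cycle)
```

Round-robin assumes roughly equal backend capacity; when that assumption fails, weighted or least-connections policies are the usual next step.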
What you will do:
- Handling configuration management, web services architectures, DevOps implementation, build & release management, database management, backups, and monitoring
- Logging, metrics, and alerting management
- Writing Dockerfiles
- Performing root cause analysis for production errors
What you need to have:
- 12+ years of experience in Software Development/ QA/ Software Deployment with 5+ years of experience in managing high performing teams
- Proficiency in VMware, AWS & cloud applications development, deployment
- Good knowledge in Java, Node.js
- Experience working with RESTful APIs, JSON, etc.
- Experience with unit/functional automation is a plus
- Experience with MySQL, MongoDB, Redis, RabbitMQ
- Proficiency in Jenkins, Ansible, Terraform/Chef/Ant
- Proficiency in Linux-based operating systems
- Proficiency in cloud infrastructure tools like Docker, Kubernetes
- Strong problem solving and analytical skills
- Good written and oral communication skills
- Sound understanding in areas of Computer Science such as algorithms, data structures, object oriented design, databases
- Proficiency in monitoring and observability
As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform using modern infrastructure engineering practices.
You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi cloud platform. You'll be joining a team that follows best practices in infrastructure as code.
Your Key Responsibilities
- Build out infrastructure components like API Gateway, Service Mesh, and Service Discovery, and container orchestration platforms like Kubernetes.
- Developing reusable Infrastructure code and testing frameworks
- Build meaningful abstractions to hide the complexities of provisioning modern infrastructure components
- Design a scalable Centralized Logging and Metrics platform
- Drive solutions to reduce Mean Time To Recovery (MTTR) and enable high availability.
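The MTTR item above reduces to a simple average over incident durations. A minimal sketch, assuming incidents are recorded as (start, end) epoch-second pairs (the recording format is an assumption for illustration):

```python
def mttr_minutes(incidents):
    """Mean time to recovery (minutes) over (start, end) epoch-second pairs."""
    durations = [(end - start) / 60 for start, end in incidents]
    return sum(durations) / len(durations) if durations else 0.0
```

Tracking this number per quarter is a common way to show whether runbooks, alerting, and automation are actually shortening recoveries.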
What to Bring
- Good to have experience in managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Java, Python, and Go
- Experience in handling logs and metrics at a high scale.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
Experience: 5+ years
Skills Required:
• Azure administration, configuration, and deployment of Windows/Linux VM- and container-based infrastructure
• Scripting/programming in Python, JavaScript/TypeScript, C; scripting with PowerShell, Azure CLI, and shell scripts
• Identity and access management and the RBAC model
• Virtual networking, storage, and compute resources
• Azure database technologies; monitoring and analytics tools in Azure
• Azure DevOps-based CI/CD build pipelines integrated with GitHub (Java and Node.js)
• Test automation and other CI/CD tools
• Azure infrastructure using ARM templates and Terraform









