
The brand is associated with major icons across categories and has tie-ups spanning fashion, sports, and music. The founders are marketing graduates with extensive experience in consumer lifestyle products and other major brands. Through their rigorous focus on quality and marketing, they have struck a chord with major e-commerce platforms as well as consumers.
What you will do:
- Defining and documenting best practices and strategies regarding application deployment and infrastructure maintenance
- Providing guidance, thought leadership and mentorship to development teams to build cloud competencies
- Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design
- Managing cloud environments in accordance with company security guidelines
- Developing and implementing technical efforts to design, build and deploy AWS applications at the direction of lead architects, including large-scale data processing, computationally intensive statistical modeling and advanced analytics
- Participating in all aspects of the software development life cycle for AWS solutions, including planning, requirements, development, testing, and quality assurance
- Troubleshooting incidents, identifying root causes, fixing and documenting problems, and implementing preventive measures (a minimal triage sketch follows this list)
- Educating teams on the implementation of new cloud-based initiatives, providing associated training as required
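For illustration only (not part of the role description): a minimal Python/boto3 sketch of the kind of incident-triage automation these responsibilities describe. It lists CloudWatch alarms currently in the ALARM state; the region default and the assumption that AWS credentials are configured in the environment are mine, not the posting's.

```python
# Hypothetical triage helper: list CloudWatch alarms that are currently firing.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

def alarms_currently_firing(region: str = "ap-south-1"):
    """Return (name, reason) pairs for every CloudWatch alarm in ALARM state."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    firing = []
    for page in cloudwatch.get_paginator("describe_alarms").paginate(StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            firing.append((alarm["AlarmName"], alarm["StateReason"]))
    return firing

if __name__ == "__main__":
    for name, reason in alarms_currently_firing():
        print(f"{name}: {reason}")
```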
Desired Candidate Profile
What you need to have:
- Bachelor’s degree in Computer Science or Information Technology
- 2+ years of experience as an architect designing, developing, and implementing cloud solutions on AWS platforms
- Experience in several of the following areas: database architecture, ETL, business intelligence, big data, machine learning, and advanced analytics
- Proven ability to collaborate with multi-disciplinary teams of business analysts, developers, data scientists and subject matter experts
- Self-motivation with the ability to drive features to delivery
- Strong analytical and problem solving skills
- Excellent oral and written communication skills
- Good logical sense, strong technical skills and the ability to learn new technologies quickly
- AWS certifications are a plus
- Knowledge of web services, APIs, REST, and RPC

Similar jobs
Senior DevOps Engineer (8–10 years)
Location: Mumbai
Role Summary
As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.
Key Responsibilities
Platform & Cloud Infrastructure
- Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN).
- Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
- Lead capacity planning, cost optimization, and performance tuning across environments.
CI/CD & Release Engineering
- Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
- Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
Containers, Kubernetes & Runtime
- Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries.
Reliability, Observability & Incident Management
- Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets (a worked error-budget example follows this subsection).
- Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
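As a worked illustration of the SLO/SLI error-budget idea above (the 99.9% target and the 30-day window are hypothetical values, not taken from this posting):

```python
# Hypothetical error-budget math for a 99.9% availability SLO over a 30-day window.
SLO = 0.999                      # availability target (illustrative)
WINDOW_MINUTES = 30 * 24 * 60    # 30-day rolling window

# ~43.2 minutes of allowed "bad" time per window
error_budget_minutes = (1 - SLO) * WINDOW_MINUTES

def burn_rate(bad_minutes: float, elapsed_minutes: float) -> float:
    """How fast the budget is being consumed relative to the ideal steady rate.
    A burn rate above 1 means the budget will be exhausted before the window ends."""
    allowed_so_far = (1 - SLO) * elapsed_minutes
    return bad_minutes / allowed_so_far if allowed_so_far else float("inf")

if __name__ == "__main__":
    print(f"Error budget: {error_budget_minutes:.1f} minutes per 30 days")
    print(f"Burn rate after 10 bad minutes in 2 days: {burn_rate(10, 2 * 24 * 60):.2f}")
```

Alerting on burn rate rather than raw downtime is one common way teams tie SLOs to on-call response.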
Security & Compliance (DevSecOps)
- Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
- Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
Data, Networking & Edge
- Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies.
- Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.
Ways of Working & Leadership
- Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
- Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.
Must‑Have Qualifications
- 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
- Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
- Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD.
- Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
- Experience implementing observability stacks and responding to production incidents.
- Scripting in Bash/Python; ability to automate ops workflows and platform tasks (see the sketch after this list).
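A minimal sketch of the kind of Python ops automation this bullet refers to, assuming an AWS environment as described in the role. The 30-day retention window and region are placeholders, and the script only reports old EBS snapshots rather than deleting them.

```python
# Illustrative only: report EBS snapshots older than a retention window so they
# can be reviewed for cleanup/cost optimization. Retention and region are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

RETENTION_DAYS = 30

def stale_snapshots(region: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    stale = []
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                stale.append((snap["SnapshotId"], snap["StartTime"].date().isoformat()))
    return stale

if __name__ == "__main__":
    for snapshot_id, started in stale_snapshots():
        print(f"{snapshot_id} created {started} -- candidate for cleanup")
```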
Good-to-Have / Preferred
- Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
- Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
- Experience with GitOps patterns, service tiering, and SLO/SLI design.
- Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.
Education
- Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
Key Deliverables (Essential functions & Responsibilities of the Job):
· Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS (a minimal SQS example follows this list).
· Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
· Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
· Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
· Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
· Enforce cloud security best practices — IAM, VPC setups, encryption, certificate management, and compliance controls.
· Work closely with development teams to improve application reliability, scalability, and performance.
· Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
· Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
· Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
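To make the middleware point concrete, here is a hedged Python/boto3 sketch that enqueues a message on SQS and reads it back. The queue name, region, and payload are placeholders, not details from the posting.

```python
# Hypothetical example of working with AWS middleware (SQS) from Python.
import json
import boto3

QUEUE_NAME = "orders-events"                      # placeholder queue name
sqs = boto3.resource("sqs", region_name="ap-south-1")

def publish(payload: dict) -> None:
    """Enqueue a JSON payload."""
    queue = sqs.get_queue_by_name(QueueName=QUEUE_NAME)
    queue.send_message(MessageBody=json.dumps(payload))

def drain(max_messages: int = 10) -> None:
    """Read and acknowledge up to max_messages messages."""
    queue = sqs.get_queue_by_name(QueueName=QUEUE_NAME)
    for message in queue.receive_messages(MaxNumberOfMessages=max_messages, WaitTimeSeconds=5):
        print("received:", message.body)
        message.delete()   # acknowledge so the message is not redelivered

if __name__ == "__main__":
    publish({"event": "application_submitted", "id": 42})
    drain()
```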
Knowledge Skills and Abilities:
· 7+ years of hands-on AWS DevOps experience, especially with middleware services.
· Strong expertise in MongoDB Atlas or other cloud MongoDB services.
· Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
· Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
· Excellent scripting skills in Python, Bash, or PowerShell.
· Experience in containerization and orchestration: Docker, EKS, ECS.
· Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
· Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
· Ability to solve complex problems and thrive in a fast-paced environment.
Preferred Qualifications
· AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
· MongoDB Certified DBA or Developer.
· Experience with serverless services like AWS Lambda, Step Functions.
· Exposure to multi-cloud or hybrid cloud environments.
Mail your updated resume with current salary to:
Email: jobs[at]glansolutions[dot]com
Satish; 88O 27 49 743
Google search: Glan management consultancy
Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Founded in 2018, we have grown more than 10x in the last year to become India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured our Series D funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
• Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
• Create standardized tooling and templates for development teams to create CI/CD pipelines
• Ensure infrastructure is created and maintained using Terraform
• Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.
• Maintain transparency and clear visibility of costs across product verticals and environments, and work with stakeholders to plan and implement optimizations (a small cost-reporting sketch follows this list)
• Spearhead continuous experimentation and innovation initiatives to optimize the infrastructure for uptime, availability, latency, and cost
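A hedged sketch of the cost-visibility point above, assuming AWS Cost Explorer (the role also spans GCP/Azure, where the equivalent would differ). The "product" cost-allocation tag and the date range are hypothetical.

```python
# Illustrative only: monthly unblended cost grouped by a hypothetical "product"
# cost-allocation tag, via the AWS Cost Explorer API.
import boto3

def monthly_cost_by_product(start: str, end: str):
    # Cost Explorer is a global API served out of us-east-1.
    ce = boto3.client("ce", region_name="us-east-1")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},       # ISO dates, e.g. "2024-01-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "product"}],   # hypothetical tag key
    )
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(period["TimePeriod"]["Start"], group["Keys"][0], f"${amount:.2f}")

if __name__ == "__main__":
    monthly_cost_by_product("2024-01-01", "2024-04-01")
```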
You should apply, if you
1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (Node.js), Go, Python, Java, Erlang, Elixir, C++, or Ruby (experience in any one of them is enough)
2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.
3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience creating and managing infrastructure entirely through a tool like Terraform
5. Have it all at your fingertips: Have experience building CI/CD pipelines using Jenkins and Docker for applications running primarily on Kubernetes, plus hands-on experience managing and troubleshooting applications on K8s (a small troubleshooting sketch follows this list)
6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script, and strong fundamentals in Linux and networking
8. Know your toys: Have a good understanding of Microservices architecture and Big Data technologies; experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self-hosted environments is a plus
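A small troubleshooting sketch for point 5, using the official Kubernetes Python client to list pods that are not in the Running or Succeeded phase. The kubeconfig location and cluster are assumptions, not details from the posting.

```python
# Hypothetical K8s troubleshooting helper.
# Assumes `pip install kubernetes` and a working kubeconfig (e.g. ~/.kube/config).
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()      # use config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.namespace, pod.metadata.name, pod.status.phase))
    return problems

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```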
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
We're Hiring: DevOps Tech Lead with 7-9 Years of Experience! 🚀
Are you a seasoned DevOps professional with a passion for cloud technologies and automation? We have an exciting opportunity for a DevOps Tech Lead to join our dynamic team at our Gurgaon office.
🏢 ZoomOps Technology Solutions Private Limited
📍 Location: Gurgaon
💼 Full-time position
🔧 Key Skills & Requirements:
✔ 7-9 years of hands-on experience in DevOps roles
✔ Proficiency in Cloud Platforms like AWS, GCP, and Azure
✔ Strong background in Solution Architecture
✔ Expertise in writing Automation Scripts using Python and Bash
✔ Ability to manage IaC and CM tools like Terraform, Ansible, Pulumi, etc.
Responsibilities:
🔹 Lead and mentor the DevOps team, driving innovation and best practices
🔹 Design and implement robust CI/CD pipelines for seamless software delivery
🔹 Architect and optimize cloud infrastructure for scalability and efficiency
🔹 Automate manual processes to enhance system reliability and performance
🔹 Collaborate with cross-functional teams to drive continuous improvement
Join us to work on exciting projects and make a significant impact in the tech space!
Apply now and take the next step in your DevOps career!
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads (a minimal IAM sketch follows this list)
- Will be the client's point of contact for high-priority technical issues and new requirements
- Should act as Tech Lead, guiding and mentoring the junior members of the team
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, Deploy and Manage production workloads including applications on EC2 instance, APIs on Lambda Functions and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
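For the IAM/VPC bullet above, a minimal boto3 sketch that creates a least-privilege IAM policy scoped to a single S3 bucket. The policy name and bucket ARN are placeholders, not from this job description.

```python
# Hypothetical least-privilege IAM policy: read-only access to one S3 bucket.
import json
import boto3

POLICY_NAME = "app-readonly-s3"                       # placeholder name
BUCKET_ARN = "arn:aws:s3:::example-app-artifacts"     # placeholder bucket

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        }
    ],
}

if __name__ == "__main__":
    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName=POLICY_NAME,
        PolicyDocument=json.dumps(policy_document),
        Description="Read-only access to the application artifact bucket",
    )
    print("Created:", response["Policy"]["Arn"])
```

In practice such a policy would be attached to a role assumed by the workload rather than to individual users.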
Qualifications
- Prefer at least 5 years of IT experience implementing enterprise applications
- Should be AWS Solution Architect Associate Certified
- Must have at least 3 years of experience working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience migrating on-premises workloads to AWS
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimisation
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and guide the team wherever needed
- Creating Backups and Managing Disaster Recovery
- Experience with Infrastructure as Code automation using scripts and tools like CloudFormation and Terraform
- Any exposure towards creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of Containerisation technologies like Docker, Kubernetes etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
Senior DevOps Engineer (8-12 yrs Exp)
Job Description:
We are looking for an experienced and enthusiastic DevOps Engineer. As our new DevOps Engineer, you will be in charge of the specification and documentation of the new project features. In addition, you will be developing new features and writing scripts for automation using Java/BitBucket/Python/Bash.
Roles and Responsibilities:
• Deploy updates and fixes
• Utilize various open source technologies
• Need to have hands-on experience with automation tools like Docker, Jenkins, Puppet, etc.
• Build independent web based tools, micro-services and solutions
• Write scripts and automation using Java/BitBucket/Python/Bash.
• Configure and manage data sources like MySQL, Mongo, Elasticsearch, Redis, etc. (a small health-check sketch follows this list)
• Understand how various systems work
• Manage code deployments, fixes, updates and related processes.
• Understand how IT operations are managed
• Work with CI and CD tools, and source control such as Git and SVN.
• Experience with project management and workflow tools such as Agile, Redmine, WorkFront, Scrum/Kanban/SAFe, etc.
• Build tools to reduce occurrences of errors and improve customer experience
• Develop software to integrate with internal back-end systems
• Perform root cause analysis for production errors
• Design procedures for system troubleshooting and maintenance
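A tiny illustration of the data-source management bullet: a plain-Python TCP reachability probe for MySQL, MongoDB, Elasticsearch, and Redis. Hostnames and ports are placeholders, and a real check would use each store's client library rather than a raw socket.

```python
# Illustrative health probe for common data stores; checks TCP reachability only.
import socket

SERVICES = {
    "mysql":         ("db.internal.example", 3306),
    "mongodb":       ("mongo.internal.example", 27017),
    "elasticsearch": ("es.internal.example", 9200),
    "redis":         ("cache.internal.example", 6379),
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "up" if is_reachable(host, port) else "DOWN"
        print(f"{name:<14} {host}:{port} -> {status}")
```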
Requirements:
• More than six years of experience in a DevOps Engineer role (or similar role); experience in software development and infrastructure development is mandatory.
• Bachelor’s degree or higher in engineering or related field
• Proficiency in deploying and maintaining web applications
• Ability to construct and execute network, server, and application status monitoring
• Knowledge of software automation production systems, including code deployment
• Working knowledge of software development methodologies
• Previous experience with high-performance and high-availability open-source web technologies
• Strong experience with Linux-based infrastructures, Linux/Unix administration, and AWS.
• Strong communication skills and the ability to explain protocols and processes to the team and management.
• Solid team player.
Experience and Education
• Bachelor’s degree in engineering or equivalent.
Work experience
• 4+ years of infrastructure and operations management experience at a global scale.
• 4+ years of experience in operations management, including monitoring, configuration management, automation, backup, and recovery.
• Broad experience in the data center, networking, storage, server, Linux, and cloud technologies.
• Broad knowledge of release engineering: build, integration, deployment, and provisioning, including familiarity with different upgrade models.
• Demonstrable experience executing, or being involved in, a complete end-to-end project lifecycle.
Skills
• Excellent communication and teamwork skills – both oral and written.
• Skilled at collaborating effectively with both Operations and Engineering teams.
• Process and documentation oriented.
• Attention to detail. Excellent problem-solving skills.
• Ability to simplify complex situations and lead calmly through periods of crisis.
• Experience implementing and optimizing operational processes.
• Ability to lead small teams: provide technical direction, prioritize tasks to achieve goals, identify dependencies, report on progress.
Technical Skills
• Strong fluency in Linux environments is a must.
• Good SQL skills.
• Demonstrable scripting/programming skills (Bash, Python, Ruby, or Go) and the ability to develop custom tool integrations between multiple systems using their published APIs/CLIs (a small integration example follows this list).
• L3, load balancer, routing, and VPN configuration.
• Kubernetes configuration and management.
• Expertise using version control systems such as Git.
• Configuration and maintenance of database technologies such as Cassandra, MariaDB, Elastic.
• Designing and configuration of open-source monitoring systems such as Nagios, Grafana, or Prometheus.
• Designing and configuration of log pipeline technologies such as ELK (Elasticsearch, Logstash, Kibana), FluentD, Grok, rsyslog, Google Stackdriver.
• Using and writing modules for Infrastructure as Code tools such as Ansible, Terraform, Helm, Kustomize.
• Strong understanding of virtualization and containerization technologies such as VMware, Docker, and Kubernetes.
• Specific experience with Google Cloud Platform or Amazon EC2 deployments and virtual machines.
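An example of the custom tool-integration skill mentioned above: a short Python script that forwards a monitoring alert to a chat or incident webhook over its published HTTP API. The webhook URL and payload shape are assumptions for illustration only.

```python
# Hypothetical integration glue: push an alert to a webhook endpoint.
# Requires `pip install requests`; URL and payload format are placeholders.
import requests

WEBHOOK_URL = "https://chat.example.com/hooks/ops-alerts"   # placeholder

def forward_alert(service: str, severity: str, summary: str) -> None:
    payload = {"text": f"[{severity.upper()}] {service}: {summary}"}
    response = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    response.raise_for_status()    # surface 4xx/5xx instead of failing silently

if __name__ == "__main__":
    forward_alert("checkout-api", "critical", "p99 latency above 2s for 10 minutes")
```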
Role and responsibilities
- Expertise in AWS (most commonly used services), Docker, and Kubernetes.
- Strong scripting knowledge, strong DevOps automation skills, and good Linux proficiency
- Hands-on with CI/CD (CircleCI preferred, but any CI/CD tool will do). Strong understanding of GitHub
- Strong understanding of AWS networking. Strong with security & certificates.
Nice-to-have skills
- Involved in Product Engineering
Technical Experience/Knowledge Needed :
- Cloud-hosted services environment.
- Proven ability to work in a Cloud-based environment.
- Ability to manage and maintain Cloud Infrastructure on AWS
- Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc.
- Knowledge of orchestration tools such as Ansible
- Experience with ELK Stack
- Strong knowledge in Micro Services, Container-based architecture and the corresponding deployment tools and techniques.
- Hands-on knowledge of implementing multi-staged CI / CD with tools like Jenkins and Git.
- Sound knowledge on tools like Kibana, Kafka, Grafana, Instana and so on.
- Proficient in Bash scripting.
- Must have in-depth knowledge of Clustering, Load Balancing, High Availability and Disaster Recovery, Auto Scaling, etc.
- AWS Certified Solutions Architect and/or Linux System Administrator
- Strong ability to work independently on complex issues
- Collaborate efficiently with internal experts to resolve customer issues quickly
- No objection to working night shifts, as the production support team operates on a 24x7 basis. Rotational shifts will be assigned weekly so that candidates get equal opportunity to work day and night shifts. If a candidate is only willing to work night shifts on a need basis, please discuss with us.
- Early Joining
- Willingness to work in Delhi NCR
What we are looking for
Work closely with product & engineering groups to identify and document infrastructure requirements.
Design infrastructure solutions balancing requirements, operational constraints and architecture guidelines.
Implement infrastructure including network connectivity, virtual machines and monitoring.
Implement and follow security guidelines, both policy and technical, to protect our customers.
Resolve incidents as escalated from monitoring solutions and lower tiers.
Identify root cause for issues and develop long-term solutions to fix recurring issues.
Ability to automate recurring tasks to increase velocity and quality.
Partner with the engineering team to build software tolerance for infrastructure failure or issues.
Research emerging technologies, trends and methodologies and enhance existing systems and processes.
Qualifications
Master’s/Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related technical field, and two years of experience in software/systems or related.
5+ years overall experience.
Work experience must have included:
Proven track record in deploying, configuring and maintaining Ubuntu server systems on premise and in the cloud.
Minimum of 4 years’ experience designing, implementing and troubleshooting TCP/IP networks, VPN, Load Balancers & Firewalls.
Minimum 3 years of experience working in public clouds like AWS & Azure.
Hands-on experience in any of the configuration management tools like Ansible, Chef & Puppet.
Strong in performing production operation activities.
Experience with Container & Container Orchestrator tools like Kubernetes, Docker Swarm is a plus.
Good at source code management tools like Bitbucket, Git.
Configuring and utilizing monitoring and alerting tools.
Scripting to automate infrastructure and operational processes.
Hands-on work to secure networks and systems.
Sound problem resolution, judgment, negotiating and decision making skills
Ability to manage and deliver multiple project phases at the same time
Strong analytical and organizational skills
Excellent written and verbal communication skills
Interview focus areas
Networks, systems, monitoring
AWS (EC2, S3, VPC)
Problem solving, scripting, network design, systems administration and troubleshooting scenarios
Culture fit, agility, bias for action, ownership, communication








