
Cloud DevOps Architect
· Practices self-leadership and promotes learning in others by building relationships with cross-functional stakeholders; communicating information and providing advice to drive projects forward; adapting to competing demands and new responsibilities; providing feedback to others; mentoring junior team members; creating and executing plans to capitalize on strengths and improve opportunity areas; and adapting to and learning from change, difficulties, and feedback.
· Ensure appropriate translation of business requirements and functional specifications into physical program designs, code modules, stable application systems, and software solutions by partnering with Business Analysts and other team members to understand business needs.
· Build use cases/scenarios and reference architectures to enable rapid adoption of cloud services in the product’s cloud journey.
· Provide insight and recommendations for technical solutions that meet design and functional needs.
· Experience or familiarity with firewalls/NGFWs deployed in a variety of form factors (Check Point, Imperva, Palo Alto, Azure Firewall).
· Establish credibility & build deep relationships with senior technical individuals to enable them to be cloud advocates.
· Participate in deep architectural discussions to build confidence and ensure engineering success when building new and migrating existing applications, software, and services to AWS and GCP.
· Conduct deep-dive hands-on education/training sessions to transfer knowledge to DevOps and engineering teams considering or already using public cloud services.
· Be a cloud (Amazon Web Services, Google Cloud Platform) and DevOps evangelist and advise stakeholders on cloud readiness, workload identification, migration, and identifying the right multi-cloud mix to effectively accomplish business objectives.
· Understand engineering requirements and architect scalable solutions adopting DevOps and leveraging advanced technologies such as AWS CodePipeline, AWS CodeCommit, ECS containers, API Gateway, CloudFormation templates, AWS Kinesis, Splunk, Dome9, AWS SQS, AWS SNS, SonarQube, microservices, and Kubernetes to realize stronger benefits and future-proof outcomes for customer-facing applications.
· Be an integral part of the technology and architecture community of the public cloud partners (AWS, GCP, Azure) and bring new services launched by cloud providers into the scope of the 8K Miles Product Platform.
· Capture and share best-practice knowledge amongst the DevOps and Cloud community.
· Act as a technical liaison between product management, service engineering, and support teams.
· Qualification:
o Master’s Degree in Computer Science/Engineering with 12+ years’ experience in information technology (networking, infrastructure, database).
o Strong and recent exposure to AWS/GCP/Azure cloud platforms and designing hybrid multi-cloud solutions. AWS Certified Solutions Architect Professional certification (or similar) preferred.
· Working knowledge of UNIX shell scripting.
· Strong hands-on programming experience in Python
· Working knowledge of data visualization tools – Tableau.
· Experience working in cloud environment — AWS.
· Experience working with modern tools in the Agile Software Development Life Cycle.
· Version Control Systems (Ex. Git, GitHub, Stash/Bitbucket), Knowledge Management (Ex. Confluence, Google Docs), Development Workflow (Ex. Jira), Continuous Integration (Ex. Bamboo), Real-Time Collaboration (Ex. HipChat, Slack).

JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation (see the Python sketch after this list).
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
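To make the scripting expectation above concrete, here is a minimal, illustrative Python sketch of the kind of automation a pipeline step might run after a deployment. It is not part of the role description; the endpoint URLs, retry counts, and delays are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Post-deployment smoke check intended to run as a CI/CD pipeline step."""
import sys
import time
import urllib.error
import urllib.request

# Hypothetical endpoints; replace with the services the pipeline actually deploys.
ENDPOINTS = [
    "https://app.example.com/healthz",
    "https://api.example.com/healthz",
]

def is_healthy(url: str, retries: int = 3, delay: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the retry budget."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # treat connection errors and non-2xx responses as "not yet healthy"
        if attempt < retries:
            time.sleep(delay * attempt)  # simple linear backoff before the next attempt
    return False

def main() -> int:
    failed = [url for url in ENDPOINTS if not is_healthy(url)]
    if failed:
        print(f"Smoke check failed for: {', '.join(failed)}")
        return 1  # non-zero exit fails the pipeline stage
    print("All endpoints healthy.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A non-zero exit code from a script like this is what lets a YAML pipeline stage fail fast after a bad deployment.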
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks (a boto3 sketch follows this list)
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
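As an illustration of the Python automation and cost-optimization responsibilities above, the following is a minimal sketch, not actual company tooling, that reports EBS snapshots older than an assumed 30-day window. The region and retention period are assumptions; the script only reports and deletes nothing.

```python
"""Report self-owned EBS snapshots older than an assumed retention window."""
from datetime import datetime, timedelta, timezone

import boto3  # assumes AWS credentials are available in the environment

RETENTION_DAYS = 30  # assumed retention window for this example

def old_snapshots(region: str = "ap-south-1"):
    """Yield (snapshot_id, start_time, size_gb) for snapshots older than the cutoff."""
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                yield snap["SnapshotId"], snap["StartTime"], snap["VolumeSize"]

if __name__ == "__main__":
    for snap_id, started, size in old_snapshots():
        print(f"{snap_id}\t{started:%Y-%m-%d}\t{size} GiB")
```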
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting (see the metrics sketch after this list)
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
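For the Prometheus/Grafana requirement above, here is a minimal sketch of exposing a custom application metric with the prometheus_client Python library; the metric name, port, and queue-depth stand-in are illustrative assumptions rather than a prescribed setup.

```python
"""Expose a custom gauge metric for Prometheus to scrape."""
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical application metric that Grafana dashboards or alert rules could use.
QUEUE_DEPTH = Gauge("app_job_queue_depth", "Number of jobs waiting in the queue")

def read_queue_depth() -> int:
    """Stand-in for a real measurement (for example, an SQS queue-length lookup)."""
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(8000)  # serves a /metrics endpoint for Prometheus to scrape
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)
```

Prometheus scrapes the /metrics endpoint this exposes, and Grafana dashboards or alert rules can then be built on the resulting series.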
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
About the Role
We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.
Responsibilities
1. Infrastructure & Cloud Management
• Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)
• Implement containerization (Docker, Kubernetes) and microservices orchestration
• Optimize infrastructure cost, scalability, and performance
2. CI/CD & Automation
• Build and maintain CI/CD pipelines for automated deployments
• Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation
• Implement GitOps practices for streamlined deployments
3. Security & Compliance
• Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards
• Implement role-based access controls, encryption, and network security best practices
• Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits
4. Monitoring & Incident Management
• Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)
• Optimize system reliability and automate incident response mechanisms
• Improve MTTR (Mean Time to Recovery) and system uptime KPIs
5. Collaboration & Process Improvement
• Work closely with development and QA teams to streamline deployments
• Improve DevSecOps practices and cloud security policies
• Participate in architecture discussions and performance tuning
Required Skills & Qualifications
• 2+ years of experience in DevOps, cloud infrastructure, and automation
• Hands-on experience with AWS and Kubernetes
• Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)
• Experience with Terraform, Ansible, or CloudFormation
• Strong knowledge of Linux, shell scripting, and networking
• Experience with cloud security, monitoring, and logging solutions
Nice to Have
• Experience in healthcare or other regulated industries
• Familiarity with serverless architectures and AI-driven infrastructure automation
• Knowledge of big data pipelines and analytics workflows
What You'll Gain
• Opportunity to build and scale a mission-critical healthcare infrastructure
• Work in a fast-paced startup environment with cutting-edge technologies
• Growth potential into Lead DevOps Engineer or Cloud Architect roles
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
- Role Overview
- Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team, and can actively configure and build cloud architectures hands-on while guiding others.
- Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Is likely certified on one or more of the major cloud platforms
- Strong experience from hands-on work with technologies such as Terraform, Kubernetes (K8s), Docker, and container orchestration.
- Ability to guide and lead internal agile teams on cloud technology
- Background from the financial services industry or similar critical operational experience
Responsibilities:
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues (a troubleshooting sketch follows this list).
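As a concrete example of the troubleshooting work described above, here is a minimal sketch using the official kubernetes Python client to list pods that are not in a healthy phase. It assumes a local kubeconfig is available and is illustrative rather than a prescribed tool.

```python
"""List pods that are not Running or Succeeded, as a quick cluster health check."""
from kubernetes import client, config

def pods_not_ready():
    """Yield (namespace, pod_name, phase) for pods outside the Running/Succeeded phases."""
    config.load_kube_config()  # use config.load_incluster_config() when run inside the cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            yield pod.metadata.namespace, pod.metadata.name, phase

if __name__ == "__main__":
    for ns, name, phase in pods_not_ready():
        print(f"{ns}/{name}: {phase}")
```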
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
- Essential Skills:
- Docker
- Jenkins
- Python dependency management using conda and pip
- Base Linux System Commands, Scripting
- Docker Container Build & Testing
- Common knowledge of minimizing container size and layers
- Inspecting containers for unused/underutilized systems
- Multiple Linux OS support for virtual system
- Has experience as a user of Jupyter/JupyterLab to test and fix usability issues in workbenches
- Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages/libraries); see the sketch after this list
- Jenkins Pipeline
- Understanding of the GitHub API to trigger builds, tags, and releases
- Artifactory experience
- Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
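To illustrate the templating item above, here is a minimal Jinja2 sketch that renders a per-use-case Jupyter Server configuration. The template contents, option values, and use-case names are assumptions for illustration; a real setup would load templates from files.

```python
"""Render environment-specific Jupyter Server config snippets from one Jinja2 template."""
from jinja2 import Template

NOTEBOOK_CONFIG = Template(
    "c.ServerApp.root_dir = '{{ workdir }}'\n"
    "c.ServerApp.port = {{ port }}\n"
)

# Hypothetical use cases rendered from the same template.
USE_CASES = {
    "research": {"workdir": "/home/jovyan/research", "port": 8888},
    "training": {"workdir": "/home/jovyan/courses", "port": 8889},
}

if __name__ == "__main__":
    for name, values in USE_CASES.items():
        print(f"# --- {name} ---")
        print(NOTEBOOK_CONFIG.render(**values))
```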
Role: DevOps Engineer
Experience: 1 to 2 years and 2 to 5 years as DevOps Engineer (2 positions)
Location: Bangalore. 5 days working.
Education Qualification: B.Tech/B.E. from Tier-1/Tier-2/Tier-3 colleges or equivalent institutes
Skills: DevOps engineering; Ruby on Rails or Python, plus Bash/Shell skills; Docker, rkt, or a similar container engine; Kubernetes or similar clustering solutions
As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build & operate infrastructure to support the website, backend cluster, and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operation team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks and participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborate with other developers to understand & set up tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ years of industry experience.
- Scale existing back-end systems to handle ever-increasing amounts of traffic and new product requirements.
- Ruby On Rails or Python and Bash/Shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef.
- Understanding of the importance of smart metrics and alerting.
- Hands on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in working on Linux-based servers.
- Managing large scale production grade infrastructure on AWS Cloud.
- Good knowledge of scripting languages like Ruby, Python, or Bash.
- Experience in creating deployment pipelines from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of Docker containers and their usage.
- Using infra/app monitoring tools like CloudWatch, New Relic, or Sensu.
Good to have:
- Knowledge of Ruby on Rails-based applications and their deployment methodologies.
- Experience working on Container Orchestration tools like Kubernetes/ECS/Mesos.
- Extra points for experience with front-end development, New Relic, GCP, Kafka, and Elasticsearch.

2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly
3. Deep understanding of microservice design and awareness of centralized caching (Redis) and centralized configuration (Consul/Zookeeper); see the caching sketch after this list
4. Hands-on experience working with containers and their orchestration using Kubernetes
5. Hands-on experience with Linux and Windows operating systems
6. Worked on NoSQL databases like Cassandra, Aerospike, Mongo, or Couchbase; central logging, monitoring, and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Has good knowledge of network security, security architecture, and secure SDLC practices
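As a small illustration of the centralized caching item above, here is a minimal read-through cache sketch using the redis-py client. The host name, key layout, TTL, and the load_profile_from_db helper are assumptions for illustration, not part of the listed requirements.

```python
"""Read-through cache for a microservice, backed by a central Redis instance."""
import json

import redis

# Hypothetical central cache endpoint shared by the services.
cache = redis.Redis(host="cache.internal", port=6379, decode_responses=True)

def load_profile_from_db(user_id: str) -> dict:
    """Stand-in for the real data store lookup."""
    return {"id": user_id, "plan": "standard"}

def get_profile(user_id: str, ttl_seconds: int = 300) -> dict:
    """Return the cached profile if present, otherwise load it and cache it with a TTL."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_profile_from_db(user_id)
    cache.set(key, json.dumps(profile), ex=ttl_seconds)
    return profile
```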







