DevOps Engineer Position - 3+ years
Kubernetes, Helm - 3+ years (dev & administration)
Monitoring platform setup experience - Prometheus, Grafana
Azure/AWS/GCP Cloud experience - 1+ years
Ansible/Terraform/Puppet - 1+ years
CI/CD - 3+ years

About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks (an illustrative sketch follows this list)
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
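The following is a minimal, illustrative sketch of the kind of Python automation and monitoring script referenced in the list above: it uses boto3 to flag running EC2 instances missing a required tag and notifies an SNS topic. The region, tag key, and topic ARN are placeholder assumptions, not values from this posting.

```python
"""Minimal sketch: flag EC2 instances missing a required tag and notify SNS.
Region, tag key, and topic ARN are placeholders."""

import boto3

REGION = "ap-south-1"                      # assumed region
REQUIRED_TAG = "owner"                     # assumed tagging convention
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:ops-alerts"  # placeholder ARN


def find_untagged_instances(ec2) -> list[str]:
    """Return IDs of running instances that lack the required tag."""
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tag_keys:
                    untagged.append(instance["InstanceId"])
    return untagged


def main() -> None:
    ec2 = boto3.client("ec2", region_name=REGION)
    sns = boto3.client("sns", region_name=REGION)
    untagged = find_untagged_instances(ec2)
    if untagged:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Untagged EC2 instances",
            Message="Missing '%s' tag: %s" % (REQUIRED_TAG, ", ".join(untagged)),
        )


if __name__ == "__main__":
    main()
```

In practice a check like this would typically run on a schedule (for example as a cron job or Lambda), but the structure is representative of the day-to-day scripting described above.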
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
  - EC2, RDS, OpenSearch, VPC, S3
  - Application Load Balancer (ALB), API Gateway, Lambda
  - SNS and SQS
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Senior DevOps Engineer (8–10 years)
Location: Mumbai
Role Summary
As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.
Key Responsibilities
Platform & Cloud Infrastructure
- Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN).
- Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
- Lead capacity planning, cost optimization, and performance tuning across environments.
CI/CD & Release Engineering
- Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
- Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
Containers, Kubernetes & Runtime
- Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries.
Reliability, Observability & Incident Management
- Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets (a worked sketch of the error-budget arithmetic follows this section).
- Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
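As a worked illustration of the SLO/SLI error-budget idea mentioned above, here is a small, self-contained sketch; the SLO target and request counts are made-up example numbers, not figures from this posting.

```python
"""Illustrative SLO/SLI error-budget arithmetic with example numbers."""

from dataclasses import dataclass


@dataclass
class ErrorBudget:
    slo_target: float        # e.g. 0.999 for "99.9% of requests succeed"
    total_requests: int      # requests served in the SLO window
    failed_requests: int     # requests that violated the SLI

    @property
    def allowed_failures(self) -> float:
        """Failures the SLO permits over the window."""
        return (1.0 - self.slo_target) * self.total_requests

    @property
    def budget_remaining(self) -> float:
        """Fraction of the error budget still unspent (can go negative)."""
        if self.allowed_failures == 0:
            return 0.0
        return 1.0 - (self.failed_requests / self.allowed_failures)


if __name__ == "__main__":
    # Example: 99.9% availability SLO over 10M requests with 4,200 failures.
    budget = ErrorBudget(slo_target=0.999, total_requests=10_000_000, failed_requests=4_200)
    print(f"Allowed failures: {budget.allowed_failures:,.0f}")
    print(f"Error budget remaining: {budget.budget_remaining:.1%}")
```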
Security & Compliance (DevSecOps)
- Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
- Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
Data, Networking & Edge
- Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies.
- Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.
Ways of Working & Leadership
- Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
- Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.
Must‑Have Qualifications
- 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
- Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
- Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD.
- Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
- Experience implementing observability stacks and responding to production incidents.
- Scripting in Bash/Python; ability to automate ops workflows and platform tasks.
Good‑to‑Have / Preferred
- Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
- Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
- Experience with GitOps patterns, service tiering, and SLO/SLI design.
- Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.
Education
- Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Job Description
Role: Sr. DevOps – Architect
Location: Bangalore
Who are we looking for?
A senior-level DevOps consultant with deep DevOps expertise. The individual should be passionate about technology and demonstrate depth and breadth of expertise from similar roles, as well as familiarity with Enterprise Systems and Enterprise Architecture Frameworks.
Technical Skills:
• 8+ years of relevant DevOps/Operations/Development experience working in an Agile DevOps culture on large-scale distributed systems.
• Experience building a DevOps platform by integrating the DevOps tool chain using REST/SOAP/ESB technologies (see the sketch after this list).
• Hands-on programming skills in developing automation modules using one of these scripting languages: Python, Perl, Ruby, or Bash.
• Hands-on experience with a public cloud such as AWS, Azure, OpenStack, or Pivotal Cloud Foundry; Azure experience is a must.
• Experience working with more than one configuration management tool (Chef/Puppet/Ansible), including building your own cookbooks/manifests, is required.
• Experience with Docker and Kubernetes.
• Experience building CI/CD pipelines using continuous integration tools such as Jenkins, Bamboo, etc.
• Experience with planning tools like Jira, Rally, etc.
• Hands-on experience with continuous integration and build tools (Jenkins, Bamboo, CruiseControl, etc.), version control systems (Git, SVN, GitHub, TFS, etc.), build automation tools (Maven/Gradle/Ant), and dependency management tools (Artifactory/Nexus).
• Experience with more than one deployment automation tool, such as IBM UrbanCode, CA Automic, XL Deploy, etc.
• Experience setting up and managing DevOps tools for repositories, monitoring, log analysis, etc., using tools such as New Relic, Splunk, AppDynamics, etc.
• Understanding of applications, networking, and open-source tools.
• Experience on the security side of DevOps, i.e., DevSecOps.
• Good to have: understanding of microservices architecture.
• Experience working with remote/offshore teams is a huge plus
• Experience in building dashboards based on the latest JS technologies like Node.js
• Experience with NoSQL database like MongoDB
• Experience in working with REST APIs
• Experience with tools like NPM, Gulp
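As an illustration of the REST-based tool-chain integration called out in this list, below is a minimal sketch that triggers a parameterized Jenkins build over its HTTP API from Python. The Jenkins URL, job name, credentials, and parameters are placeholders, not details from this posting.

```python
"""Minimal sketch: trigger a parameterized Jenkins build via its REST API.
URL, job name, credentials, and parameters are placeholders."""

import requests

JENKINS_URL = "https://jenkins.example.com"      # placeholder Jenkins instance
JOB_NAME = "deploy-service"                      # placeholder job name
AUTH = ("automation-user", "api-token")          # username + Jenkins API token


def trigger_build(parameters: dict) -> str:
    """Trigger a parameterized build and return the queue item URL."""
    session = requests.Session()
    session.auth = AUTH

    # Fetch a CSRF crumb so Jenkins accepts the POST (required on most setups).
    crumb = session.get(f"{JENKINS_URL}/crumbIssuer/api/json", timeout=10).json()
    headers = {crumb["crumbRequestField"]: crumb["crumb"]}

    response = session.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        headers=headers,
        params=parameters,
        timeout=10,
    )
    response.raise_for_status()
    # Jenkins returns the queue item location for the newly queued build.
    return response.headers.get("Location", "")


if __name__ == "__main__":
    print(trigger_build({"ENVIRONMENT": "staging", "VERSION": "1.4.2"}))
```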
Process Skills:
• Ability to perform rapid assessments of clients’ internal technology landscapes and to identify use cases and deployment targets
• Develop program blueprints, case studies, and supporting technical documentation so that DevOps work can be commercialized and replicated across different business customers
• Compile, deliver, and evangelize roadmaps that guide the evolution of services
• Grasp and communicate big-picture, enterprise-wide issues to the team
• Experience working in an Agile / Scrum / SAFe environment preferred
Behavioral Skills :
• Should have directly worked on creating enterprise-level operating models and architecture options
• Model as-is and to-be architectures based on business requirements
• Good communication & presentation skills
• Self-driven + Disciplined + Organized + Result Oriented + Focused & Passionate about work
• Flexible for short term travel
Primary Duties / Responsibilities:
• Build Automations and modules for DevOps platform
• Build integrations between various DevOps tools
• Interface with other teams to provide support and understand the overall vision of the transformation platform.
• Understand customer deployment scenarios, and continuously improve and update the platform based on agile requirements.
• Prepare HLDs and LLDs.
• Present status to leadership and key stakeholders at regular intervals.
Qualification:
• At least 12+ years of work experience in software development.
• 5+ years industry experience in DevOps architecture related to Continuous Integration/Delivery solutions, Platform Automation including technology consulting experience
• Education qualification: B.Tech, BE, BCA, MCA, M. Tech or equivalent technical degree from a reputed college
We are seeking a skilled DevOps Engineer with 3+ years of experience to join our team on a permanent work-from-home basis.
Responsibilities:
- Develop and maintain infrastructure using Ansible.
- Write Ansible playbooks.
- Implement CI/CD pipelines.
- Manage GitLab repositories (an illustrative sketch follows this list).
- Monitor and troubleshoot infrastructure issues.
- Ensure security and compliance.
- Document best practices.
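As a small illustration of the GitLab repository management responsibility above, here is a hedged sketch using the python-gitlab client (an assumed library choice; the posting does not name one) to report owned projects whose default branch lacks a .gitlab-ci.yml. The instance URL and token environment variable are placeholders.

```python
"""Hedged sketch: report owned GitLab projects whose default branch has no
.gitlab-ci.yml, using the python-gitlab client (assumed library choice)."""

import os

import gitlab
from gitlab.exceptions import GitlabGetError

GITLAB_URL = "https://gitlab.com"            # placeholder instance URL
TOKEN = os.environ.get("GITLAB_TOKEN", "")   # personal access token (assumed env var)


def projects_without_ci(gl: gitlab.Gitlab) -> list[str]:
    """Return paths of owned projects missing a CI definition on their default branch."""
    missing = []
    # get_all=True pages through every owned project (recent python-gitlab versions).
    for project in gl.projects.list(owned=True, get_all=True):
        branch = project.default_branch or "main"
        try:
            project.files.get(file_path=".gitlab-ci.yml", ref=branch)
        except GitlabGetError:
            missing.append(project.path_with_namespace)
    return missing


if __name__ == "__main__":
    client = gitlab.Gitlab(GITLAB_URL, private_token=TOKEN)
    for path in projects_without_ci(client):
        print(f"No .gitlab-ci.yml found in {path}")
```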
Qualifications:
- Proven DevOps experience.
- Expertise with Ansible and CI/CD pipelines.
- Proficient with GitLab.
- Strong scripting skills.
- Excellent problem-solving and communication skills.
Regards,
Aishwarya M
Associate HR
Docker, Kubernetes Engineer – Remote (Pan India)
ABOUT US
Established in 2009, Ashnik is a leading open-source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, Big Data platforms, Cloud and Web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open-source software and are known for their thought leadership.
THE POSITION
Ashnik is looking for a talented and passionate Technical Consultant to be part of the training team and work with customers on DevOps solutions. You will be responsible for implementation and consultation-related work for customers across SEA and India. We are looking for personnel with the following qualities:
- Passion for working with different customers and in different environments.
- Excellent communication and articulation skills
- Aptitude for learning new technology and willingness to understand technologies which he/she is not directly working on.
- Willingness to travel within and outside the country.
- Ability to independently work at the customer site and navigate through different teams.
SKILLS AND EXPERIENCE
ESSENTIAL SKILLS
- 3+ years of experience, with a B.E./B.Tech, MCA, or other graduate degree with higher education in a technical field.
- Must have prior experience with Docker containers, Swarm, and/or Kubernetes.
- Understanding of Operating System, processes, networking, and containers.
- Experience/exposure in writing and managing Dockerfile or creating container images.
- Experience of working on Linux platform.
- Ability to perform installations and system/software configuration on Linux.
- Should be aware of networking and SSL/TLS basics.
- Should be aware of tools used in complex enterprise IT infra e.g. LDAP, AD, Centralized Logging solution etc.
A preferred candidate will have
- Prior experience with open source Kubernetes or another container management platform like OpenShift, EKS, ECS etc.
- CNCF Kubernetes Certified Administrator/Developer
- Experience in container monitoring using Prometheus, Datadog etc.
- Experience with CI/CD tools and their usage
- Knowledge of scripting language e.g., shell scripting
RESPONSIBILITIES
First 2 months:
- Get an in-depth understanding of containers, Docker Engine and runtime, Swarm and Kubernetes.
- Get hands-on experience with various features of Mirantis Kubernetes Engine, and Mirantis Secure Registry
After 2 months:
- Work with Ashnik’s pre-sales and solution architects to deploy Mirantis Kubernetes Engine for customers
- Work with customer teams and provide them guidance on how Kubernetes Platform can be integrated with other tools in their environment (CI/CD, identity management, storage etc)
- Work with customers to containerize their applications.
- Write Dockerfile for customers during the implementation phase.
- Help customers design their network and security policy for effective management of applications deployed using Swarm and/or Kubernetes.
- Help customers design their deployments with either Swarm services or Kubernetes Deployments (see the sketch after this list).
- Work with pre-sales and sales team to help customers during their evaluation of Mirantis Kubernetes platform
- Conduct workshops for customers as needed for technical hand-holding and technical handover.
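For illustration of the Kubernetes Deployment work described above, here is a minimal sketch using the official Kubernetes Python client. The application name, image, and namespace are placeholders; in practice such deployments are more often expressed as YAML manifests or Helm charts.

```python
"""Minimal sketch: create a Kubernetes Deployment with the official Python
client. Application name, image, and namespace are placeholders."""

from kubernetes import client, config

APP_NAME = "demo-web"          # placeholder application name
IMAGE = "nginx:1.25"           # placeholder container image
NAMESPACE = "default"


def build_deployment() -> client.V1Deployment:
    """Assemble a two-replica Deployment object for the placeholder app."""
    labels = {"app": APP_NAME}
    container = client.V1Container(
        name=APP_NAME,
        image=IMAGE,
        ports=[client.V1ContainerPort(container_port=80)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels=labels),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=pod_template,
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=APP_NAME, namespace=NAMESPACE),
        spec=spec,
    )


if __name__ == "__main__":
    config.load_kube_config()                      # uses the local kubeconfig
    apps_v1 = client.AppsV1Api()
    apps_v1.create_namespaced_deployment(namespace=NAMESPACE, body=build_deployment())
    print(f"Deployment {APP_NAME} created in namespace {NAMESPACE}")
```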
Package: up to 30 L
Office: Mumbai
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization.
● Hiring, developing, and cultivating a high-performing and reliable cloud support team
● Building and operating complex CI/CD pipelines at scale
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, Networking (an illustrative sketch follows this list)
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensure scaled database setup/monitoring with near-zero downtime.
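As one illustrative example of the GCP services listed above, the sketch below publishes a message to a Pub/Sub topic from Python. The project ID, topic name, and message attributes are placeholder assumptions; credentials come from the standard application-default mechanism.

```python
"""Illustrative sketch: publish a message to GCP Pub/Sub.
Project ID, topic name, and attributes are placeholders."""

from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"        # placeholder project
TOPIC_ID = "deploy-events"           # placeholder topic


def publish_event(message: str, **attributes: str) -> str:
    """Publish a message and return the server-assigned message ID."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    future = publisher.publish(topic_path, data=message.encode("utf-8"), **attributes)
    return future.result()           # blocks until the publish is acknowledged


if __name__ == "__main__":
    message_id = publish_event("service deployed", environment="staging", version="1.4.2")
    print(f"Published message {message_id}")
```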
Position: DevOps Lead
Job Description
● Research, evangelize and implement best practices and tools for GitOps, DevOps, continuous integration, build automation, deployment automation, configuration management, infrastructure as code.
● Develop software solutions to support DevOps tooling; including investigation of bug fixes, feature enhancements, and software/tools updates
● Participate in the full systems life cycle with solution design, development, implementation, and product support using Scrum and/or other Agile practices
● Evaluating, implementing, and streamlining DevOps practices.
● Design and drive the implementation of fully automated CI/CD pipelines.
● Design and create Cloud services and architecture for highly available and scalable environments.
● Lead the monitoring, debugging, and enhancement of pipelines for optimal operation and performance.
● Supervise, examine, and handle technical operations.
Qualifications
● 5 years of experience in managing application development, software delivery lifecycle, and/or infrastructure development and/or administration
● Experience with source code repository management tools, code merge and quality checks, continuous integration, and automated deployment & management using tools like Bitbucket, Git, Ansible, Terraform, Artifactory, ServiceNow, SonarQube, Selenium.
● Minimum of 4 years of experience with approaches and tooling for automated build, delivery, and release of the software
● Experience and/or knowledge of CI/CD tools: Jenkins, Bitbucket Pipelines, Gitlab CI, GoCD.
● Experience with Linux systems: CentOS, RHEL, Ubuntu, Secure Linux... and Linux Administration.
● Minimum of 4 years experience with managing medium/large teams including progress monitoring and reporting
● Experience and/or knowledge of Docker, Cloud, and Orchestration: GCP, AWS, Kubernetes.
● Experience and/or knowledge of system monitoring, logging, high availability, redundancy, autoscaling, and failover.
● Experience automating manual and/or repetitive processes.
● Experience and/or knowledge of networking and load balancing: Nginx, firewalls, IP networking
- Hands-on experience building database-backed web applications using Python based frameworks
- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments
- Experience building client-side and server-side API-level integrations in Python
- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.
- Experience with NoSQL document stores like the Elastic Stack (Elasticsearch, Logstash, Kibana)
- Experience in using and managing Git based version control systems - Azure DevOps, GitHub, Bitbucket etc.
- Experience in using project management tools like Jira, Azure DevOps etc.
- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
- 3+ years experience leading a team of DevOps engineers
- 8+ years experience managing DevOps for large engineering teams developing cloud-native software
- Strong in networking concepts
- In-depth knowledge of AWS and cloud architectures/services.
- Experience within the container and container orchestration space (Docker, Kubernetes)
- Passion for CI/CD pipeline using tools such as Jenkins etc.
- Familiarity with config management tools like Ansible, Terraform, etc.
- Proven record of measuring and improving DevOps metrics
- Familiarity with observability tools and experience setting them up (see the sketch after this list)
- Passion for building tools and productizing services that empower development teams.
- Excellent knowledge of Linux command-line tools and ability to write bash scripts.
- Strong in Unix/Linux administration and management
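As a brief illustration of working with an observability stack (per the familiarity requirement above), the sketch below runs an instant query against the Prometheus HTTP API to list scrape targets that are currently down. The Prometheus URL is a placeholder.

```python
"""Minimal sketch: query the Prometheus HTTP API for targets reporting up == 0.
The Prometheus URL is a placeholder."""

import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"   # placeholder
QUERY = "up == 0"    # standard PromQL: targets currently failing their scrape


def down_targets() -> list[dict]:
    """Return the label sets of all targets reporting up == 0."""
    response = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": QUERY},
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {payload}")
    return [sample["metric"] for sample in payload["data"]["result"]]


if __name__ == "__main__":
    for target in down_targets():
        print(f"DOWN: job={target.get('job')} instance={target.get('instance')}")
```

The same API also supports range queries (/api/v1/query_range), which is the usual basis for Grafana-style dashboarding.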
KEY ROLES/RESPONSIBILITIES:
- Own and manage the entire cloud infrastructure
- Create the entire CI/CD pipeline to build and release
- Explore new technologies and tools and recommend those that best fit the team and organization
- Own and manage the site reliability
- Strong decision-making skills and metric-driven approach
- Mentor and coach other team members