Position Level: Senior Engineer
Company Overview:
AskSid.ai is a four-year-old, fast-growing start-up based in Bangalore, co-founded by
two ex-Mindtree employees, each with 20+ years of experience. We were rated the No. 1
emerging SaaS company in India and won the NASSCOM EMERGE 50 League of 10
award in 2019. We were also rated the most innovative AI company in India for 2020 by
CII and Accenture Ventures. As a growing company, we are looking for passionate
engineers who aspire to build world-class technology products at internet scale.
Job purpose:
Set up, optimize, and maintain Kubernetes clusters on Microsoft Azure Cloud.
Responsibilities
● Set up, maintain, optimize, and secure various Kubernetes clusters on MS Azure Cloud
● Set up and maintain containers, container availability, auto-scaling, storage management, DNS, and proxies; set up and maintain firewalls, app gateways, and load balancers on MS Azure Cloud
● Build and manage backup, restore, and DR activities
Knowledge and skills
Education and Experience
- Engineering degree in computer science
- 3-5 years of experience in the setup and management of Kubernetes infrastructure
- Expert-level analytical and problem-solving skills
- Ability to communicate clearly in English
Technical skills
- Microsoft Azure
- Kubernetes: AKS as well as custom clusters on bare-metal infrastructure
- Linux internals & services
- Docker, Docker Registry
- NGINX, Load Balancing, Firewall, Security, PKI
- Shell and Awk scripting, Azure templates and scripting, Python scripting (see the sketch after this list)
- Knowledge of NoSQL Databases
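As a purely illustrative example of the Python scripting this role calls for, here is a minimal, hedged sketch that uses the official kubernetes Python client to report pods that are not in the Running phase; it assumes a kubeconfig already points at the target AKS (or other) cluster, e.g. one created with az aks get-credentials.

```python
# Minimal sketch: list non-running pods across all namespaces.
# Assumes `pip install kubernetes` and an existing kubeconfig
# (for AKS, typically created via `az aks get-credentials`).
from kubernetes import client, config

def report_non_running_pods() -> None:
    config.load_kube_config()  # uses the current kubeconfig context
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    report_non_running_pods()
```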

Hiring for SRE Lead
Exp: 7-12 yrs
Work Location: Mumbai (Kurla West)
WFO
Skills:
Proficient in cloud platforms (AWS, Azure, or GCP), containerization (Kubernetes/Docker), and Infrastructure as Code (Terraform, Ansible, or Puppet).
Coding/Scripting: Strong programming or scripting skills in at least one language (e.g., Python, Go, Java) for automation and tooling development.
System Knowledge: Deep understanding of Linux/Unix fundamentals, networking concepts, and distributed systems.
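As a hedged illustration of the scripting-for-automation-and-tooling skill listed above (Python is assumed here), the sketch below uses boto3 to flag EC2 instances that are failing status checks; the region is an assumption, and a real tool would feed this into alerting or remediation.

```python
# Hypothetical automation sketch: report EC2 instances failing status checks.
# Assumes AWS credentials are already configured (env vars or ~/.aws/credentials);
# the region is a placeholder.
import boto3

def failing_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    failing = []
    paginator = ec2.get_paginator("describe_instance_status")
    for page in paginator.paginate(IncludeAllInstances=True):
        for status in page["InstanceStatuses"]:
            if status["InstanceStatus"]["Status"] not in ("ok", "initializing"):
                failing.append(status["InstanceId"])
    return failing

if __name__ == "__main__":
    print(failing_instances())
```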
Job Title: Senior DevOps Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Required Qualifications
● Experience:
○ 5+ years of hands-on experience as a DevOps Engineer or similar role, with
proven expertise in building and customizing Helm charts from scratch (not just
using pre-existing ones).
○ Demonstrated ability to design and whiteboard DevOps pipelines, including
CI/CD workflows for microservices applications.
○ Experience packaging and deploying applications with stateful dependencies
(e.g., databases, persistent storage) in varied environments: on-prem (air-gapped
and non-air-gapped), single-tenant cloud, multi-tenant cloud, and developer trials.
○ Proficiency in managing deployments in Kubernetes clusters, including offline
installations, upgrades via Helm, and adaptations for client restrictions (e.g., no
additional tools or VMs).
○ Track record of handling client interactions, such as asking probing questions
about infrastructure (e.g., OS versions, storage solutions, network restrictions)
and explaining technical concepts clearly.
● Technical Skills:
○ Strong knowledge of Helm syntax and functionalities (e.g., Go templating, hooks,
subcharts, dependency management).
○ Expertise in containerization with Docker, including image management
(save/load, registries like Harbor or ECR).
○ Familiarity with CI/CD tools such as Jenkins, ArgoCD, GitHub Actions, and
GitOps for automated and manual deployments.
○ Understanding of storage solutions for on-prem and cloud, including object/file
storage (e.g., MinIO, Ceph, NFS, cloud-native like S3/EBS).
○ In-depth knowledge of Kubernetes concepts: StatefulSets, PersistentVolumes,
namespaces, HPA, liveness/readiness probes, network policies, and RBAC.
○ Solid grasp of cloud networking: VPCs (definition, boundaries, virtualization via
SDN, differences from private clouds), bare metal vs. virtual machines
(advantages like resource efficiency, flexibility, and scalability).
○ Ability to work in air-gapped environments, preparing offline artifacts and ensuring self-contained deployments.
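To illustrate the air-gapped packaging workflow these qualifications describe, here is a minimal, hypothetical Python sketch that bundles a Helm chart and its container images into offline artifacts using helm package and docker save; the chart path and image list are assumptions, not part of the role description.

```python
# Hypothetical sketch: prepare offline artifacts for an air-gapped install.
# Assumes the helm and docker CLIs are installed and authenticated where needed;
# the chart path and image names below are placeholders.
import subprocess
from pathlib import Path

def package_offline(chart_dir: str, images: list[str], out_dir: str = "offline-bundle") -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Package the chart into a versioned .tgz next to the image tarballs.
    subprocess.run(["helm", "package", chart_dir, "--destination", str(out)], check=True)
    for image in images:
        tarball = out / (image.replace("/", "_").replace(":", "_") + ".tar")
        subprocess.run(["docker", "pull", image], check=True)
        subprocess.run(["docker", "save", "-o", str(tarball), image], check=True)

if __name__ == "__main__":
    package_offline("./charts/myapp", ["myregistry/myapp:1.2.3", "postgres:16"])
```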
About the Role
We are seeking an accomplished DevOps Lead with 12+ years of experience in cloud infrastructure, automation, Blockchain, and CI/CD processes. The DevOps Lead will play a pivotal role in architecting scalable cloud environments, driving automation, ensuring secure deployments, and enabling efficient software delivery pipelines. The role involves working with AWS, Huawei Cloud, Kubernetes, Terraform, blockchain-based infrastructure, and modern DevOps toolchains while providing leadership, technical guidance, and client-facing communication.
Key Responsibilities
Leadership & Team Management
● Lead, mentor, and grow a team of DevOps engineers, setting technical direction and ensuring adherence to best practices.
● Facilitate collaboration across engineering, QA, security, and blockchain development teams.
● Act as the primary technical liaison with clients, managing expectations, requirements, and solution delivery.
Infrastructure Automation & Management
● Architect, implement, and manage infrastructure as code (IaC) using Terraform across multi-cloud environments.
● Standardize environments across AWS, DigitalOcean, and Huawei Cloud with a focus on scalability, reliability, and security.
● Manage provisioning, scaling, monitoring, and cost optimization of infrastructure resources.
CI/CD & Automation
● Build, maintain, and optimize CI/CD pipelines supporting multiple applications and microservices.
● Integrate automated testing, static code analysis, and security scans into the pipelines.
● Implement blue-green / canary deployments and ensure zero-downtime release strategies.
● Promote DevSecOps by embedding security policies into every phase of the delivery pipeline.
Containerization & Orchestration
● Deploy, manage, and monitor applications on Kubernetes clusters (EKS, CCE, or equivalent).
● Utilize Helm charts, Kustomize, and operators for environment consistency.
● Optimize container performance and manage networking, storage, and secrets.
Monitoring, Logging & Incident Response
● Implement and manage monitoring and alerting solutions (Prometheus, Grafana, ELK, CloudWatch, Loki).
● Define SLOs, SLIs, and SLAs for production systems.
● Lead incident response, root cause analysis, and implement preventative measures.
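As a hedged illustration of the SLO/SLI work described in this section, the sketch below queries a Prometheus HTTP API for an availability SLI and compares it against a target; the Prometheus URL, the metric and label names, and the 99.9% objective are all assumptions.

```python
# Hypothetical sketch: evaluate an availability SLI against an SLO target
# via the Prometheus HTTP API (GET /api/v1/query). Endpoint, metric names,
# and the SLO value are placeholders.
import requests

PROM_URL = "http://prometheus.example.internal:9090"
SLO_TARGET = 0.999  # assumed 99.9% availability objective

QUERY = (
    "sum(rate(http_requests_total{code!~'5..'}[30d]))"
    " / sum(rate(http_requests_total[30d]))"
)

def availability_sli() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    sli = availability_sli()
    print(f"SLI={sli:.5f}  SLO met: {sli >= SLO_TARGET}")
```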
Governance, Security & Compliance
● Implement best practices for secrets management, key rotation, and role-based access control.
● Integrate vulnerability scanning and security audits into pipelines.
Required Skills & Qualifications
● 12+ years of experience in DevOps, with 5+ years in a lead capacity.
● Proven expertise with Terraform and IaC across multiple environments.
● Strong hands-on experience with AWS and Huawei Cloud infrastructure services.
● Deep expertise in Kubernetes cluster administration, scaling, monitoring, and networking.
● Advanced experience designing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or similar.
● Solid background in automated deployments, configuration management, and version control (Git, Ansible, Puppet, or Chef).
● Strong scripting and automation skills (Python, Bash, Go, or similar).
● Proficiency with monitoring/observability tools (Prometheus, Grafana, ELK, CloudWatch, Datadog).
● Strong understanding of blockchain infrastructure, node operations, staking setups, and deployment automation.
● Knowledge of container security, network policies, and zero-trust principles.
● Excellent communication, client handling, and stakeholder management skills with proven ability to present complex DevOps concepts to non-technical audiences.
● Ability to design and maintain highly available, scalable, and fault-tolerant systems in production environments.
Job Description
What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implement various development, testing, and automation tools, and IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps, with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker containers
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• Good understanding of infrastructure as code
• Ability to write and update documentation
• Demonstrates a logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
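Relating to the Docker image work listed above, here is a minimal, hedged sketch that uses the Docker SDK for Python to build an image and smoke-test it in a disposable container; the build path, image tag, and test command are hypothetical.

```python
# Hypothetical sketch: build a Docker image and smoke-test it in a container.
# Assumes `pip install docker` and a running Docker daemon; the tag, path,
# and command are placeholders.
import docker

def build_and_smoke_test(path: str = ".", tag: str = "myapp:dev") -> None:
    client = docker.from_env()
    client.images.build(path=path, tag=tag)
    # Run a throwaway container from the new image and capture its output.
    output = client.containers.run(tag, command="echo smoke-test-ok", remove=True)
    print(output.decode().strip())

if __name__ == "__main__":
    build_and_smoke_test()
```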
What we would prefer you to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked on Agile projects
Behavioral Skills:
• Good communication skills
Skills
PRIMARY COMPETENCY: Cloud Infra
PRIMARY SKILL: DevOps
PRIMARY SKILL PERCENTAGE: 100
What you will do:
- Handling configuration management, web services architectures, DevOps implementation, build & release management, database management, backups, and monitoring
- Logging, metrics and alerting management
- Creating Dockerfiles
- Performing root cause analysis for production errors
What you need to have:
- 12+ years of experience in Software Development/QA/Software Deployment, with 5+ years of experience managing high-performing teams
- Proficiency in VMware, AWS, and cloud application development and deployment
- Good knowledge of Java and Node.js
- Experience working with RESTful APIs, JSON, etc.
- Experience with unit/functional test automation is a plus
- Experience with MySQL, MongoDB, Redis, RabbitMQ
- Proficiency in Jenkins, Ansible, Terraform/Chef/Ant
- Proficiency in Linux-based operating systems
- Proficiency in cloud infrastructure tools such as Docker and Kubernetes
- Strong problem solving and analytical skills
- Good written and oral communication skills
- Sound understanding of computer science fundamentals such as algorithms, data structures, object-oriented design, and databases
- Proficiency in monitoring and observability
This company is a network of the world's best developers, offering full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offerings and capitalize on the cloud. We have our own IoT/AI platform, and we provide professional services on that platform to build custom clouds for our clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
We are looking for very hands-on SRE (Site Reliability Engineering) engineers with 3 to 6 years of experience. The person will be part of a team that is responsible for designing and implementing automation from scratch for medium to large-scale cloud infrastructure and providing 24x7 services to our North American and European customers. This also includes ensuring ~100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality (working hours are described under Timings below).
This person MUST have:
- B.E. in Computer Science or equivalent
- 2+ Years of hands-on experience troubleshooting/setting up of the Linux environment, who can write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ Years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
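Since the role stresses ~100% uptime across 50+ internal sites along with hands-on monitoring, here is a minimal, hypothetical uptime-probe sketch in Python; the site URLs are placeholders, and a real setup would feed results into a dedicated monitoring/alerting tool rather than print them.

```python
# Hypothetical sketch: probe a list of internal sites and report which are down.
# The URLs are placeholders; real checks would feed a monitoring/alerting stack.
import requests

SITES = [
    "https://intranet.example.internal/healthz",
    "https://ci.example.internal/healthz",
]

def check_sites(timeout: float = 5.0) -> dict[str, bool]:
    status = {}
    for url in SITES:
        try:
            resp = requests.get(url, timeout=timeout)
            status[url] = resp.ok  # True for any non-error (< 400) response
        except requests.RequestException:
            status[url] = False
    return status

if __name__ == "__main__":
    for url, ok in check_sites().items():
        print(f"{'UP  ' if ok else 'DOWN'} {url}")
```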
Experience:
- Min 3 years of experience as an SRE automation engineer building, running, and maintaining production sites. Not looking for candidates who have experience only in L1/L2 support or build & deploy.
Location:
- Remotely, anywhere in India
Timings:
- The person is expected to deliver with both high speed and high quality as well as work for 40 Hours per week (~6.5 hours per day, 6 days per week) in shifts which will rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX's very own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical or functional issues across various global applications and platforms in a complex environment to provide timely resolutions.
- Bring experience with Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
- Configure and manage data sources like PostgreSQL, MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc.
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our catalogue, then participate in the required development.
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
- Experience with the software development process and with tools and languages like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git.
- Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.
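As a hedged illustration of the monitoring automation and distributed data systems points above, the sketch below uses the official Elasticsearch Python client to check cluster health; the cluster URL is an assumption.

```python
# Hypothetical sketch: check Elasticsearch cluster health as part of monitoring
# automation. Assumes `pip install elasticsearch` and network access to the
# cluster; the URL is a placeholder.
from elasticsearch import Elasticsearch

def cluster_is_healthy(url: str = "http://elasticsearch.example.internal:9200") -> bool:
    es = Elasticsearch(url)
    health = es.cluster.health()  # reports status: green / yellow / red
    print(f"cluster={health['cluster_name']} status={health['status']}")
    return health["status"] in ("green", "yellow")

if __name__ == "__main__":
    print("healthy:", cluster_is_healthy())
```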
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance)
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
*******************
Experience: 5+ yrs
Skills Required:
- Experience in Azure administration, configuration, and deployment of Windows/Linux VM/container-based infrastructure
- Scripting/programming in Python, JavaScript/TypeScript, C
- Scripting with PowerShell, Azure CLI, and shell scripts
- Identity, Access Management, and the RBAC model
- Virtual networking, storage, and compute resources
- Azure database technologies; monitoring and analytics tools in Azure
- Azure DevOps-based CI/CD build pipelines integrated with GitHub (Java and Node.js)
- Test automation and other CI/CD tools
- Azure infrastructure using ARM templates / Terraform (see the sketch below)
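To ground the Azure scripting items above, here is a minimal, hypothetical Python sketch that uses the Azure SDK to list the virtual machines in a subscription; the subscription ID is read from an environment variable, and both it and the credential setup are assumptions.

```python
# Hypothetical sketch: list virtual machines in an Azure subscription.
# Assumes `pip install azure-identity azure-mgmt-compute`, a logged-in
# credential (e.g. via `az login`), and AZURE_SUBSCRIPTION_ID set.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

def list_vms() -> None:
    subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
    client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
    for vm in client.virtual_machines.list_all():
        print(vm.name, vm.location)

if __name__ == "__main__":
    list_vms()
```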

Total Experience: 6 – 12 Years
Required Skills and Experience
- 3+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 3+ years of experience in continuous integration/deployment and software tools development with Python, shell scripts, etc.
- Building and running Docker images and deploying them on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge of and experience with cloud orchestration tools such as AWS CloudFormation, Terraform, etc.
- Experience with implementing "infrastructure as code", "pipeline as code", and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
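Relating to the Docker-on-Amazon-ECS and AWS items above, here is a minimal, hedged boto3 sketch that lists the ECS services in a cluster and reports running versus desired task counts; the cluster name is an assumption.

```python
# Hypothetical sketch: report running vs. desired task counts for ECS services.
# Assumes AWS credentials are configured; the cluster name is a placeholder.
import boto3

def report_ecs_services(cluster: str = "my-app-cluster") -> None:
    ecs = boto3.client("ecs")
    service_arns = ecs.list_services(cluster=cluster)["serviceArns"]
    if not service_arns:
        print("no services found")
        return
    described = ecs.describe_services(cluster=cluster, services=service_arns)
    for svc in described["services"]:
        print(f"{svc['serviceName']}: {svc['runningCount']}/{svc['desiredCount']} tasks running")

if __name__ == "__main__":
    report_ecs_services()
```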
Good to have:
- Strong understanding of security concepts and methodologies, and the ability to apply them: SSH, public-key encryption, access credentials, certificates, etc.
- Knowledge of database administration (e.g., MongoDB).
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
Responsibilities
- Work with Leads and Architects on the design and implementation of technical infrastructure, platforms, and tools that support modern best practices and improve the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
We are looking for an experienced software engineer with a strong background in DevOps and handling traffic & infrastructure at scale.
Responsibilities:
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment.
Build and operate infrastructure to support the website, backend clusters, and ML projects in the organization.
Monitor and track the performance and reliability of our services and software to meet promised SLAs.
Requirements:
2+ years of experience working on distributed systems and shipping high-quality product features on schedule
Intimate knowledge of the whole web stack (front end, APIs, databases, networks, etc.)
Ability to build highly scalable, robust, and fault-tolerant services and stay up-to-date with the latest architectural trends
Experience with container based deployment, microservices, in-memory caches, relational databases, key-value stores
Hands-on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, RDS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch)
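As a hedged example of the monitoring side of this stack (CloudWatch is mentioned above), the sketch below pulls the average CPU utilization of an EC2 instance from CloudWatch over the last hour; the instance ID is a placeholder.

```python
# Hypothetical sketch: fetch average CPU utilization for one EC2 instance
# from CloudWatch over the last hour. The instance ID is a placeholder.
from datetime import datetime, timedelta, timezone

import boto3

def avg_cpu(instance_id: str = "i-0123456789abcdef0"):
    cw = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else None

if __name__ == "__main__":
    print(avg_cpu())
```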







