

Responsibilities:
- Design, implement, and maintain cloud infrastructure solutions on Microsoft Azure, with a focus on scalability, security, and cost optimization.
- Collaborate with development teams to streamline the deployment process, ensuring smooth and efficient delivery of software applications.
- Develop and maintain CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI to automate build, test, and deployment processes.
- Utilize infrastructure-as-code (IaC) principles to create and manage infrastructure deployments using Terraform, ARM templates, or similar tools.
- Manage and monitor containerized applications using Azure Kubernetes Service (AKS) or other container orchestration platforms.
- Implement and maintain monitoring, logging, and alerting solutions for cloud-based infrastructure and applications.
- Troubleshoot and resolve infrastructure and deployment issues, working closely with development and operations teams.
- Ensure high availability, performance, and security of cloud infrastructure and applications.
- Stay up-to-date with the latest industry trends and best practices in cloud infrastructure, DevOps, and automation.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- Minimum of four years of proven experience working as a DevOps Engineer or similar role, with a focus on cloud infrastructure and deployment automation.
- Strong expertise in Microsoft Azure services, including but not limited to Azure Virtual Machines, Azure App Service, Azure Storage, Azure Networking, Azure Security, and Azure Monitor.
- Proficiency in infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
- Hands-on experience with containerization and orchestration platforms, preferably Azure Kubernetes Service (AKS) or Docker Swarm.
- Solid understanding of CI/CD principles and experience with relevant tools such as Azure DevOps, Jenkins, or GitLab CI.
- Experience with scripting languages like PowerShell, Bash, or Python for automation tasks.
- Strong problem-solving and troubleshooting skills with a proactive and analytical mindset.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer, Azure Solutions Architect) are a plus.
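For illustration of the infrastructure-as-code requirement above, here is a minimal, hedged sketch of programmatic Azure provisioning using the azure-identity and azure-mgmt-resource Python SDKs. The subscription ID, resource group name, and location are placeholders; a real pipeline would more likely use Terraform or ARM templates, as the posting suggests.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Authenticate with whatever credential is available (CLI login, managed
# identity, environment variables); DefaultAzureCredential tries each in turn.
credential = DefaultAzureCredential()

# Placeholder subscription ID -- supply your own.
subscription_id = "00000000-0000-0000-0000-000000000000"
client = ResourceManagementClient(credential, subscription_id)

# Idempotently create (or update) a resource group, the usual container
# for a deployment's resources.
rg = client.resource_groups.create_or_update(
    "demo-devops-rg",              # hypothetical resource group name
    {"location": "eastus"},
)
print(f"Resource group {rg.name} provisioned in {rg.location}")
```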

Job Title : DevOps Engineer
Experience : 3+ Years
Location : Mumbai
Employment Type : Full-time
Job Overview :
We’re looking for an experienced DevOps Engineer to design, build, and manage Kubernetes-based deployments for a microservices data discovery platform.
The ideal candidate has strong hands-on expertise with Helm, Docker, CI/CD pipelines, and cloud networking — and can handle complex deployments across on-prem, cloud, and air-gapped environments.
Mandatory Skills :
✅ Helm, Kubernetes, Docker
✅ Jenkins, ArgoCD, GitOps
✅ Cloud Networking (VPCs, bare metal vs. VMs)
✅ Storage (MinIO, Ceph, NFS, S3/EBS)
✅ Air-gapped & multi-tenant deployments
Key Responsibilities :
- Build and customize Helm charts from scratch.
- Implement CI/CD pipelines using Jenkins, ArgoCD, GitOps.
- Manage containerized deployments across on-prem/cloud setups.
- Work on air-gapped and restricted environments.
- Optimize for scalability, monitoring, and security (Prometheus, Grafana, RBAC, HPA).
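As a rough illustration of the Helm and pipeline responsibilities above, a small Python wrapper that drives `helm upgrade --install` for a chart, the kind of step a Jenkins or ArgoCD pipeline would otherwise run natively. The release name, chart path, namespace, and internal registry are hypothetical, and `image.registry` is assumed to be a value the chart exposes.

```python
import subprocess

def deploy_chart(release: str, chart_path: str, namespace: str, image_registry: str) -> None:
    """Install or upgrade a Helm release, pointing image pulls at a local
    registry -- useful in air-gapped clusters with no public pull access."""
    cmd = [
        "helm", "upgrade", "--install", release, chart_path,
        "--namespace", namespace, "--create-namespace",
        "--set", f"image.registry={image_registry}",  # hypothetical chart value
        "--wait",                                     # block until resources are ready
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Placeholder values for illustration only.
    deploy_chart(
        release="data-discovery",
        chart_path="./charts/data-discovery",
        namespace="platform",
        image_registry="registry.internal:5000",
    )
```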
Job Title: AWS DevOps Engineer
Experience Level: 5+ Years
Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon
Summary:
We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.
Key Responsibilities:
- Provision AWS infrastructure using Terraform (IaC).
- Manage and troubleshoot Kubernetes clusters (EKS/ECS).
- Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
- Support CI/CD pipelines using Jenkins and GitHub.
- Collaborate with teams to resolve infrastructure and deployment issues.
- Maintain documentation of infrastructure and operational procedures.
Required Skills:
- 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
- Strong Linux administration and troubleshooting skills.
- Experience managing Kubernetes clusters.
- Basic experience with CI/CD tools like Jenkins and GitHub.
- Good communication skills and a positive, team-oriented attitude.
Preferred:
- AWS Certification (e.g., Solutions Architect, DevOps Engineer).
- Exposure to Agile and DevOps practices.
- Experience with monitoring and logging tools.
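Purely as a sketch of the EKS management and troubleshooting work described above, a short boto3 snippet that reports a cluster's status, version, and node group sizing; the cluster name and region are placeholders.

```python
import boto3

# Placeholder region and cluster name.
eks = boto3.client("eks", region_name="ap-south-1")

cluster = eks.describe_cluster(name="demo-cluster")["cluster"]
print(f"{cluster['name']}: status={cluster['status']}, version={cluster['version']}")

# List the managed node groups attached to the cluster and show their state.
nodegroups = eks.list_nodegroups(clusterName="demo-cluster")["nodegroups"]
for ng in nodegroups:
    detail = eks.describe_nodegroup(clusterName="demo-cluster", nodegroupName=ng)["nodegroup"]
    desired = detail["scalingConfig"]["desiredSize"]
    print(f"  nodegroup {ng}: status={detail['status']}, desired={desired}")
```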
Responsibilities
Provisioning and de-provisioning AWS accounts for internal customers
Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS.
Deploying, managing, and operating AWS environments
Identifying appropriate use of AWS operational best practices
Estimating AWS costs and identifying operational cost control mechanisms
Keep technical documentation up to date
Proactively keep up to date on AWS services and developments
Create (where appropriate) automation, in order to streamline provisioning and de-provisioning processes
Lead certain data/service migration projects
Job Requirements
Experience provisioning, operating, and maintaining systems running on AWS
Experience with Azure/AWS.
Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project
Experience with application/data migration to/from AWS
Experience with NGINX and the HTTP protocol.
Experience with configuration and management software such as Git
Strong analytical and problem-solving skills
Deployment experience using common AWS technologies like VPC, regionally distributed EC2 instances, Docker, and more.
Ability to work in a collaborative environment
Detail-oriented, strong work ethic and high standard of excellence
A fast learner and achiever who sets high personal goals
Must be able to work on multiple projects and consistently meet project deadlines
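To illustrate the cost-estimation responsibility above, a minimal sketch that pulls month-to-date unblended cost per service from the AWS Cost Explorer API via boto3; the date range is an example, and Cost Explorer calls are typically made against us-east-1.

```python
import boto3

# Cost Explorer is served from us-east-1 regardless of where workloads run.
ce = boto3.client("ce", region_name="us-east-1")

# Example period: adjust Start/End to the month you want to inspect.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print every service that incurred a non-zero cost in the period.
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")
```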
At Egnyte we build and maintain our flagship software: a secure content platform used by companies like Red Bull and Yamaha.
We store, analyze, organize, and secure billions of files and petabytes of data with millions of users. We observe more than 1M API requests per minute on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our Engineers are part of the process from design to code, to test, to deployment, and back again for further iterations.
We have 300+ engineers spread across the US, Poland, and India.
You will be part of our DevOps Team working closely with our DBA team in automating, monitoring, and scaling our massive MySQL cluster. Previous MySQL experience is a plus.
Your day-to-day at Egnyte
- Designing, building, and maintaining cloud environments (using Terraform, Puppet or Kubernetes)
- Migrating services to cloud-based environments
- Collaborating with software developers and DBAs to create a reliable and scalable infrastructure for our product.
About you
- 2+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes
- Programming prowess (Python, Java, Ruby, Golang, or JavaScript)
- Experience with databases (MySQL, PostgreSQL, RDS/Aurora, or others)
- Experience with public cloud services (GCP/AWS/Azure)
- Good understanding of the Linux Operating System on the administration level
- Preferably you have experience with HA solutions: our tools of choice include Orchestrator, Proxysql, HAProxy, Corosync & Pacemaker, etc.
- Experience with metric-based monitoring solutions (Cloud: CloudWatch/Stackdriver, On-prem: InfluxDB/OpenTSDB/Prometheus)
- Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude)
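A small sketch of the metric-based monitoring mentioned in the list above: querying a Prometheus server's HTTP API for down scrape targets with the requests library. The server address is a placeholder, and the same idea applies to CloudWatch or Stackdriver through their own SDKs.

```python
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # placeholder address

def query(promql: str) -> list[dict]:
    """Run an instant PromQL query against the Prometheus HTTP API."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body["status"] != "success":
        raise RuntimeError(f"query failed: {body}")
    return body["data"]["result"]

# Report any scrape targets that are currently down (up == 0).
for sample in query("up == 0"):
    labels = sample["metric"]
    print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
```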
LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to support the development and operations efforts on our product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on the cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO (a brief backup sketch follows the requirements below)
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations and automate daily tasks
- Ensure high availability and auto-failover with minimal or no manual intervention
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 8 to 10 years of experience in designing and maintaining high-volume, scalable microservices architecture on cloud infrastructure
- Strong background in Linux/Unix administration and Python/shell scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, continuous integration and continuous deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in query analysis, performance tuning and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
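As a minimal sketch of the backup-and-restore responsibility listed above, a Python script that takes a MongoDB dump with the mongodump CLI and ships the archive to S3. The connection URI, bucket, and paths are placeholders, and a production policy would add retention, encryption, and regular restore drills to actually meet RTO/RPO targets.

```python
import subprocess
from datetime import datetime, timezone

import boto3

MONGO_URI = "mongodb://backup-user:***@db.internal:27017"  # placeholder connection string
BUCKET = "example-db-backups"                              # placeholder bucket name

def backup_to_s3() -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = f"/tmp/mongo-{stamp}.gz"

    # Dump the whole deployment to a single gzipped archive file.
    subprocess.run(
        ["mongodump", f"--uri={MONGO_URI}", f"--archive={archive}", "--gzip"],
        check=True,
    )

    # Upload the archive; lifecycle rules on the bucket can handle retention.
    key = f"mongodb/{stamp}.gz"
    boto3.client("s3").upload_file(archive, BUCKET, key)
    return key

if __name__ == "__main__":
    print("uploaded", backup_to_s3())
```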
About the role:
We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have a minimum of 4 years of experience, with extensive experience in building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Additionally, expertise in backend technologies and development frameworks is required to collaborate effectively with the development team in delivering scalable and efficient solutions.
Responsibilities
- Collaborate with development and operations teams to implement continuous integration and deployment processes.
- Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
- Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD)
- Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
- Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
- Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
- Familiarity with the Azure CLI, Azure REST APIs, Azure Resource Manager templates, Azure billing/cost management, and the Azure Management Console.
- Must have experience in at least one programming language (Java, .NET, Python).
- Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
- Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
- Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
- Contribute to backend development projects, ensuring robust and scalable solutions.
- Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
- Design and implement database schemas.
- Identify and implement opportunities for performance optimization and scalability of backend systems.
- Participate in code reviews, architectural discussions, and sprint planning sessions.
- Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
- Mentor junior team members and provide guidance and training on best practices in DevOps.
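For illustration of the production health-monitoring responsibilities above, a tiny sketch that probes a few HTTP endpoints and flags failures. The endpoint URLs are hypothetical, and in practice this signal would come from the tools the posting names (CloudWatch, New Relic, Grafana, etc.) rather than an ad-hoc script.

```python
import requests

# Hypothetical health endpoints behind the load balancer.
ENDPOINTS = {
    "api":     "https://api.example.com/healthz",
    "web":     "https://www.example.com/healthz",
    "reports": "https://reports.example.com/healthz",
}

def check_all() -> dict[str, bool]:
    """Return a map of endpoint name to 'is it answering 200 within 5 seconds'."""
    results = {}
    for name, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=5)
            results[name] = resp.status_code == 200
        except requests.RequestException:
            results[name] = False
    return results

if __name__ == "__main__":
    for name, healthy in check_all().items():
        print(f"{name}: {'OK' if healthy else 'FAILING'}")
```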
Required Qualifications
- BS/MS in Computer Science, Engineering, or a related field
- 4+ years of experience as an Azure DevOps Engineer (or similar role), with experience in backend development.
- Strong understanding of CI/CD principles and practices.
- Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
- Experience with infrastructure automation tools like Terraform or Ansible.
- Proficient in scripting languages like PowerShell or Python.
- Experience with Linux and Windows server administration.
- Strong understanding of backend development principles and technologies.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Problem-solving and analytical skills.
- Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
- Excellent problem-solving, critical thinking, and communication skills.
- Experience working in a product-based company.
What we offer:
- Competitive salary and benefits package
- Opportunity for growth and advancement within the company
- Collaborative, dynamic, and fun work environment
- Possibility to work with cutting-edge technologies and innovative projects
About BootLabs
https://www.bootlabs.in/
- We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions.
- We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation and managed cloud services.
- We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.
Technical Skills:
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.
- AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.
- Azure
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions
- GCP
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
• Kubernetes or Ansible experience (EKS/AKS/GKE): basics like pods, deployments, networking, service mesh; has used a package manager like Helm.
• Scripting experience (Bash/Python), automation in pipelines when required, system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines and versioning the code.
Optional:
• Experience in any programming language is not required but is appreciated.
• Good experience in Git, SVN or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
- Preferred experience in development associated with Kafka or big data technologies: understanding of essential Kafka components like ZooKeeper and brokers, and optimization of Kafka client applications (producers and consumers); a short producer/consumer sketch follows the soft-skills list below.
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting.
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, pipelines, CloudWatch, Glue, and other related services.
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers.
- Good knowledge of and hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform Root Cause Analysis (RCA) on production deployments and container errors in CloudWatch.
- Working on ways to automate and improve deployment and release processes.
- High understanding of the serverless architecture concept.
- Good with deployment automation tools and investigating to resolve technical issues.
- Sound knowledge of APIs, databases, and container-based ETL jobs.
- Planning out projects and being involved in project management decisions.
Soft Skills
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
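The Kafka familiarity asked for above can be pictured with a minimal kafka-python sketch: one producer sending a message and one consumer reading it back. The broker address and topic name are placeholders, and tuning (batching, acks, consumer groups) is where the real optimization work lives.

```python
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "events.demo"       # placeholder topic name

# Producer: acks="all" waits for the full in-sync replica set,
# trading a little latency for durability.
producer = KafkaProducer(bootstrap_servers=BROKER, acks="all")
producer.send(TOPIC, b"hello from the pipeline")
producer.flush()

# Consumer: read from the beginning of the topic and stop when idle.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # exit the loop after 5s without messages
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)
```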

Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 4+ years of experience in continuous integration/deployment and in software tools development with Python, shell scripts, etc.
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience in cloud orchestration tools such as AWS CloudFormation/Terraform, etc. (a brief sketch follows this list)
- Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
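As a sketch of the "infrastructure as code" expectation above, a boto3 snippet that creates a CloudFormation stack from a template file and waits for it to finish. The stack name, template path, and region are placeholders; Terraform would be the equivalent declarative route.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # placeholder region

# Hypothetical template describing, for example, a VPC and an ECS service.
with open("infra/app-stack.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="demo-app-stack",             # placeholder stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM resources
)

# Block until the stack reaches CREATE_COMPLETE (the waiter raises on failure).
cfn.get_waiter("stack_create_complete").wait(StackName="demo-app-stack")
print("stack created")
```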
Good to have:
- Strong understanding of security concepts and methodologies, and the ability to apply them: SSH, public-key encryption, access credentials, certificates, etc.
- Knowledge of database administration for databases such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.


