
We are looking for a skilled DevOps Engineer with strong expertise in Microsoft Azure and Pulumi to join our growing engineering team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable cloud infrastructure and CI/CD pipelines. This role requires hands-on experience with Infrastructure as Code (IaC), automation, and cloud-native DevOps practices, along with close collaboration with development and platform teams.
The candidate must be comfortable working in a U.S. time zone overlap, ensuring smooth collaboration with onshore teams.
Key Responsibilities
- Design, implement, and maintain cloud infrastructure on Microsoft Azure using modern DevOps practices.
- Develop and manage Infrastructure as Code (IaC) using Pulumi to automate infrastructure provisioning and management.
- Build, maintain, and optimize CI/CD pipelines to support reliable and frequent deployments.
- Monitor, troubleshoot, and improve cloud infrastructure performance, availability, and security.
- Collaborate with development teams to improve deployment processes and application reliability.
- Implement automation and configuration management to streamline operations.
- Manage containerized applications and orchestration platforms when applicable.
- Ensure adherence to cloud security best practices, governance, and compliance standards.
- Support production environments and resolve infrastructure-related issues in a timely manner.
- Maintain documentation for infrastructure architecture, deployment pipelines, and operational procedures.
Required Skills & Experience
- 4–7 years of experience in DevOps, Cloud Engineering, or related roles.
- Strong hands-on experience with Microsoft Azure cloud services.
- Proven experience using Pulumi for Infrastructure as Code (must-have).
- Experience building and managing CI/CD pipelines using tools like Azure DevOps, GitHub Actions, Jenkins, or similar.
- Strong experience with containerization technologies (Docker, Kubernetes).
- Proficiency with scripting languages such as Python, Bash, or PowerShell.
- Experience with monitoring and logging tools.
- Strong understanding of cloud networking, security, and identity management.
- Experience with Git-based version control systems.
Preferred Qualifications
- Experience with Kubernetes in Azure (AKS).
- Familiarity with observability tools such as Prometheus, Grafana, or similar.
- Experience with cloud cost optimization and performance tuning.
- Knowledge of DevSecOps practices and security automation.
- Azure certifications such as Azure DevOps Engineer Expert or Azure Administrator.
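The Pulumi requirement above boils down to desired-state infrastructure management: you declare resources, and the engine diffs the declaration against what is actually deployed. The sketch below is not Pulumi's actual API; it is a minimal, self-contained Python illustration of the create/update/delete planning step, with made-up resource names:

```python
from typing import Any, Dict, List

def plan(desired: Dict[str, Dict[str, Any]],
         current: Dict[str, Dict[str, Any]]) -> Dict[str, List[str]]:
    """Diff desired vs. deployed state into a create/update/delete plan,
    the core operation an IaC engine performs before applying changes."""
    return {
        "create": [n for n in desired if n not in current],
        "update": [n for n in desired
                   if n in current and desired[n] != current[n]],
        "delete": [n for n in current if n not in desired],
    }

# Hypothetical resources: a resource group whose location drifted,
# a new storage account, and a resource removed from the declaration.
print(plan({"rg": {"location": "eastus"}, "sa": {"sku": "LRS"}},
           {"rg": {"location": "westus"}, "legacy": {"sku": "GRS"}}))
```

Real engines such as Pulumi persist this state in a backend and execute the plan against the cloud provider's API, but the diffing idea is the same.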

Similar jobs
We are looking for a passionate DevOps Engineer who can support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and be able to clearly articulate how they work. Knowledge of shell scripting and security is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications such as Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.
Responsibilities and Accountabilities:
- As part of the DevOps team, you will be responsible for the configuration, optimization, documentation, and support of the infra components of VakilSearch's product, which are hosted in cloud services and an on-prem facility
- Design and build tools and frameworks that support deploying and managing our platform, and explore new tools, technologies, and processes to improve speed, efficiency, and scalability
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
- Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
- Resolve incidents as escalated from Monitoring tools and Business Development Team
- Implement and follow security guidelines, both policy and technology to protect our data
- Identify root causes of issues, develop long-term solutions to fix recurring problems, and document them
- Comfortable performing production operations activities, including at night when required
- Ability to automate recurring tasks with scripts to increase velocity and quality
- Ability to manage and deliver multiple project phases at the same time
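As a concrete (and purely illustrative) example of the "automate recurring tasks" responsibility above, the sketch below computes which files have outlived a retention window, a typical cron-driven cleanup job. It takes timestamps as input rather than touching the filesystem, and all file names are made up:

```python
import datetime as dt
from typing import Dict, List

def files_to_purge(file_mtimes: Dict[str, dt.datetime],
                   now: dt.datetime,
                   max_age_days: int = 30) -> List[str]:
    """Return files whose last-modified time is older than the retention
    window, sorted for stable output in cron logs."""
    cutoff = now - dt.timedelta(days=max_age_days)
    return sorted(name for name, mtime in file_mtimes.items()
                  if mtime < cutoff)

print(files_to_purge(
    {"a.log": dt.datetime(2024, 1, 5),
     "b.log": dt.datetime(2023, 12, 1),
     "c.log": dt.datetime(2024, 1, 30)},
    now=dt.datetime(2024, 1, 31)))  # only b.log is past 30 days
```

In a real job the timestamps would come from `os.stat` and the deletion step would be separate from the decision step, which keeps the logic easy to dry-run and test.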
I Qualification(s):
- Experience in working with Linux Server, DevOps tools, and Orchestration tools
- Linux, AWS, GCP, Azure, CompTIA+, and any other certification are a value-add
II Experience Required in DevOps Aspects:
- Length of Experience: Minimum 1-4 years of experience
- Nature of Experience:
- Experience in cloud deployments, Linux administration [ kernel tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP are a value add ]
- Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
- Hands-on experience with configuration management / IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet are a value add ]
- Administering, configuring, and utilizing monitoring and alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
- Experience with containerization and orchestration tools like Docker and Kubernetes [ Docker Swarm is a value add ]
- Good scripting skills in at least one interpreted language - Shell/Bash scripting or Ruby/Python/Perl
- Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
- Good at version control and source code management systems like Git and GitHub
- Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
- Experience with web servers: Nginx and Apache
- Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
- Knowledge in Puma, Unicorn, Gunicorn & Yarn
- Hands-on VMWare ESXi/Xencenter deployments is a value add
- Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
- Deploying, configuring, and maintaining Linux server systems on-premises and off-premises
- Code Quality like SonarQube is a value-add
- Test Automation like Selenium, JMeter, and JUnit is a value-add
- Experience in Heroku and OpenStack is a value-add
- Experience in identifying inbound and outbound threats and resolving them
- Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages
- Documenting the Security fix for future use
- Establish cross-team collaboration with security built into the software development lifecycle
- Forensics and Root Cause Analysis skills are mandatory
- Weekly Sanity Checks of the on-prem and off-prem environment
III Skill Set & Personality Traits required:
- An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
- Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers
IV Age Group: 21 – 36 Years
V Cost to the Company: As per industry standards
Job role: Systems Engineer (L2)
Location: Remote/Bengaluru
Experience: 3-6 years
About the Role:
We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.
Key Responsibilities:
— Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.
— Manage and optimize networking components — routers, switches, firewalls, load balancers.
— Handle incident response — monitor systems, identify issues, resolve production problems.
— Implement DevOps best practices — CI/CD pipelines, automation, containerization.
— Collaborate with backend and product teams on system architecture.
— Performance tuning — ensure high availability and reliability of platform.
— Security management — implement security protocols and compliance standards.
Required Skills:
Technical:
- Linux/Unix administration — strong fundamentals
- Networking — TCP/IP, DNS, BGP, VoIP protocols
- Cloud platforms — AWS/GCP/Azure — minimum 2 years
- DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
- Monitoring tools — Grafana, Prometheus, Kibana, Datadog
- Scripting — Python, Bash, Shell
- Databases — MySQL, PostgreSQL, Redis
Soft skills:
- Strong problem-solving under pressure
- Good communication — English written and verbal
- Team player — collaborative mindset
Good to Have:
- Experience in telecom/CPaaS/cloud communications industry
- Knowledge of VoIP, SIP, RTP protocols
- AI/ML operations experience
- CCNA/AWS certifications
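Incident-response work like that described above leans heavily on retrying flaky probes with backoff before escalating. A minimal, illustrative Python sketch of the pattern (attempt counts and delays are arbitrary):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry(op: Callable[[], T], attempts: int = 4,
          base_delay: float = 0.05) -> T:
    """Call op until it succeeds, sleeping base_delay * 2**i between
    failures; the final failure is re-raised to the caller."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
    raise AssertionError("unreachable")
```

In production you would additionally cap the delay, add jitter, and retry only errors known to be transient, but the loop above is the core of the idea.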
Sr. DevOps Engineer (5 to 8 yrs. exp.)
Location: Ahmedabad
- Strong experience in infrastructure provisioning in the cloud using Terraform and AWS CloudFormation templates.
- Strong experience in serverless and containerization technologies such as Kubernetes, Docker, etc.
- Strong experience in Jenkins and AWS-native CI/CD implementation using code.
- Strong experience in cloud operational automation using Python, shell scripts, AWS CLI, AWS Systems Manager, AWS Lambda, etc.
- Day-to-day AWS cloud administration tasks.
- Strong experience in configuration management using Ansible and PowerShell.
- Strong experience in Linux and at least one scripting language is required.
- Knowledge of monitoring tools is an added advantage.
- Understanding of DevOps practices, including Continuous Integration, Delivery, and Deployment.
- Hands-on experience with the application deployment process.
Key Skills: AWS, Terraform, Serverless, Jenkins, DevOps, CI/CD, Python, CLI, Linux, Git, Kubernetes
Role: Software Developer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Education: Any computer graduate.
Salary: Best in Industry.
Role: Principal DevOps Engineer
About the Client
It is a product-based company that has built a platform using AI and ML technology for transportation and logistics. It also has a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and is incredibly difficult!
We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
As a SaaS DevOps Engineer, you will be responsible for providing automated tooling and process enhancements for SaaS deployment, application and infrastructure upgrades and production monitoring.
- Development of automation scripts and pipelines for deployment and monitoring of new production environments.
- Development of automation scripts for upgrades, hotfix deployments, and maintenance.
- Work closely with Scrum teams and product groups to support the quality and growth of the SaaS services.
- Collaborate closely with the SaaS Operations team to handle day-to-day production activities - handling alerts and incidents.
- Assist the SaaS Operations team with customer-focused projects: migrations, feature enablement.
- Write knowledge articles to document known issues and best practices.
- Conduct regression tests to validate solutions or workarounds.
- Work in a globally distributed team.
What achievements should you have so far?
- Bachelor's or master's degree in Computer Science, Information Systems, or equivalent.
- Experience with containerization, deployment, and operations.
- Strong knowledge of CI/CD processes (Git, Jenkins, pipelines).
- Good experience with Linux systems and shell scripting.
- Basic cloud experience, preferably oriented toward MS Azure.
- Basic knowledge of containerized solutions (Helm, Kubernetes, Docker).
- Good networking skills and experience.
- Terraform or CloudFormation knowledge will be considered a plus.
- Ability to analyze a task from a system perspective.
- Excellent problem-solving and troubleshooting skills.
- Excellent written and verbal communication skills; mastery of English and the local language.
- Must be organized, thorough, autonomous, committed, flexible, customer-focused, and productive.
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
Requirement
- 2 years of experience as a Team Lead.
- Good command of Kubernetes.
- Proficient with the Linux command line and troubleshooting.
- Proficient in AWS services: deployment, monitoring, and troubleshooting of applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of Infrastructure as Code tools such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
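Tools like ELK and Grafana listed above sit on top of exactly this kind of log triage. Below is a tiny, self-contained Python sketch of severity counting, the first step of root cause analysis for production errors; the line shape (date, time, LEVEL, message) is an assumption for illustration only:

```python
import re
from collections import Counter
from typing import Iterable

# Assumed line shape: "<date> <time> <LEVEL> <message>"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<msg>.*)$")

def severity_counts(lines: Iterable[str]) -> Counter:
    """Count log lines per severity level; malformed lines are skipped
    rather than raising, since real logs are rarely clean."""
    counts: Counter = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts
```

An alert rule built on top of this would fire when, say, the ERROR share of recent lines crosses a threshold, which is conceptually what the listed monitoring stacks do at scale.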
Must Have:
Linux, CI/CD (Jenkins), AWS, Scripting (Bash/Shell, Python, Go), Nginx, Docker.
Good to have
Configuration management (Ansible or similar), a logging tool (ELK or similar), a monitoring tool (Nagios or similar), IaC (Terraform, CloudFormation).
This person MUST have:
- B.E Computer Science or equivalent
- 2+ Years of hands-on experience troubleshooting/setting up of the Linux environment, who can write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ Years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & Gitlab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications - AWS, GCP, CKA, etc will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Min 3 years of experience as SRE automation engineer building, running, and maintaining production sites. Not looking for candidates who have experience only as L1/L2.
Location:
- Remotely, anywhere in India
Timings:
- The person is expected to deliver with both high speed and high quality, working 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses and other incentives etc.
- We don't believe in locking people in with large notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Should be open to embracing new technologies, keeping up with emerging tech.
Strong troubleshooting and problem-solving skills.
Willing to be part of a high-performance team, build mature products.
Should be able to take ownership and work under minimal supervision.
Strong Linux system administration background (minimum 2 years of experience), responsible for handling/defining the organization's infrastructure (hybrid).
Working knowledge of MySQL databases, Nginx, and HAProxy load balancer.
Experience with CI/CD pipelines, configuration management (Ansible/SaltStack), and cloud technologies (AWS/Azure/GCP).
Hands-on experience with GitHub, Jenkins, Prometheus, Grafana, Nagios, and other open-source tools.
Strong shell and Python scripting would be a plus.
We are a growth-oriented, dynamic, multi-national startup, so if you are looking for startup excitement, dynamics, and buzz, you are in the right place. Read on -
FrontM (www.frontm.com) is an edge AI company with a platform that is redefining how businesses and people in remote and isolated environments (maritime, aviation, mining...) collaborate and drive smart decisions.
The successful candidate will lead the back-end architecture, working alongside the VP of Delivery, CTO, and CEO.
The problem you will be working on:
- Take ownership of AWS cloud infrastructure
- Oversee tech ops with hands-on CI/CD and administration
- Develop Node.js, Java, and backend system procedures for stability, scale, and performance
- Understand FrontM platform roadmap and contribute to planning strategic and tactical capabilities
- Integrate APIs and abstractions for complex requirements
Who you are:
- You are an experienced Cloud Architect and back end developer
- You have experience creating AWS serverless (Lambda), EC2, and MongoDB backends
- You have extensive CI/CD and DevOps experience
- You can take ownership of continuous server uptime, maintenance, stability and performance
- You can lead a team of backend developers and architects
- You are a die-hard problem solver and never-say-no person
- You have 10+ years experience
- You are very sound in the English language
- You have the ability to initiate and lead teams working with senior management
Additional benefits
- Generous pay package, flexible for the right candidate
- Career development and growth planning
- Entrepreneurial environment that nurtures and promotes innovation
- Multi-national team with an enjoyable culture
We'd love to talk to you if you find this interesting and would like to join us on our exciting journey.
