DevOps Engineer
Company Introduction
CometChat harnesses the power of chat by helping thousands of businesses around the world create customized in-app messaging experiences. Our products allow developers to seamlessly add voice, video and text chat to their websites and mobile apps so that their users can communicate with each other, resulting in a unified customer experience, increased engagement and retention, and revenue growth.
In 2019, CometChat was selected into the exclusive Techstars Boulder Accelerator. CometChat (industry: CPaaS, communications-platform-as-a-service) has also been listed among the top 10 best SaaS companies by G2 Crowd. With solid financials, strong organic growth and increasing interest in developer-tool-focused companies (from the market and from top technical talent), we’re heading into an exciting period of growth and acceleration. CometChat is backed by seasoned investors such as iSeed Ventures, Range Ventures, Silicon Badia, eonCapital and Matchstick Ventures.
A global business from the start, we have 60+ team members across our Denver and Mumbai offices serving over 50,000 customers around the world. We’ve had an exciting journey so far, and we know this is just the beginning!
CometChat’s Mission
Enable meaningful connections between real people in an increasingly digital world.
CometChat’s Products
CometChat offers a robust suite of cloud-hosted text, voice and video options that meet businesses where they are, whether they need drag-and-drop plugins that can be ready within 30 minutes or want more advanced features and can invest development resources to launch the experience that will best serve their users.
● Quickly build a reliable & full featured chat experience into any mobile or web app
● Fully customizable SDKs and APIs designed to help companies ship faster
At every step, CometChat helps customers solve complex infrastructure, performance and security challenges, regardless of the platform. But there is so much more! With over 20 ready-to-use extensions, customers can build an experience and get the data, analysis and insights they need to drive their business forward.
CometChat’s solutions are perfect for every kind of chat including:
● Social community – Allowing people in online communities to interact without moving the conversation to another platform
● Marketplace – Enabling communications between buyers and sellers
● Events – Bringing thousands of users together to interact without diminishing the quality of the experience
● Telemedicine – Making connections between patients and providers more accessible
● Dating – Keeping people engaged while they connect with one another
● And more!
CometChat is committed to fostering a culture of innovation & collaboration. Our people are our strength, so we respect and nurture their individual talent and potential. Join us if you are looking to be part of a high-growth team!
Position Overview & Priorities:
The DevOps Engineer will be responsible for effective provisioning, installation/configuration, operation, and maintenance of systems and software using Infrastructure as Code. This can include provisioning cloud instances, streamlining deployments, configuring virtual instances, and scaling out database servers.
Primary responsibilities:
- Oversee all server environments, from development through production.
- Work on an infrastructure that is 100% on AWS.
- Work on CI/CD tooling which is used to build and deploy code to our cloud.
- Assist with day-to-day issue management.
- Work on internal tooling which simplifies workflows.
- Research, design and implement solutions for fault tolerance, monitoring, performance enhancement, capacity optimization, and configuration management of systems and applications.
Work Location:
We operate on a hybrid model – you choose where you work from, remotely or from our offices! Currently, our talent is spread across 14 different cities globally.
Prioritized Experiences and Capabilities:
- 2-4 years of experience working as a DevOps Engineer, or currently practicing DevOps methodology
- Experience in AWS Infrastructure
- Hands-on experience with Infrastructure as Code (CloudFormation/Terraform, Puppet/Chef/Ansible)
- Strong background in Linux/Unix Administration
- DevOps automation with CI/CD: pipelines that enforce proper versioning and branching practices
- Experience with Docker and Kubernetes

About CometChat
CometChat is a full-stack conversational platform built to unify every layer of interaction — bringing together real-time conversations (chat, messaging, voice, and video), AI Agents, moderation, notifications, and analytics in one modular, developer-first solution.
We believe the interface of the future is conversation — not clicks. Every app will soon have an AI layer that’s as native as text messaging today. That’s why we’re building the infrastructure for the world’s AI-powered conversations — from human-to-human, to human-to-agent, to multi-party collaboration with AI in the mix.
From AI onboarding assistants that get users productive in minutes, to copilots that perform complex workflows in-app, to intelligent moderators that protect and guide communities in real time — our AI Agent platform makes it all possible.
With CometChat’s ready-to-use UI kits, powerful SDKs, and our Full Stack AI Agent Platform, product teams across startups and enterprises can launch safe, scalable, and smart in-app interactions faster than ever.
Similar jobs
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
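As a rough illustration of the Linux expectations above, a small helper for triaging errors in a log file might look like the sketch below (the log format and paths are assumptions for illustration, not part of our stack):

```shell
#!/bin/sh
# Illustrative log-triage helper: summarise the most frequent ERROR
# messages in a log file so the dominant failure stands out.
# Assumes simple "... ERROR: message" lines; adjust the pattern to taste.

top_errors() {
    # $1 = log file; prints "count message" pairs, most frequent first
    grep 'ERROR' "$1" | sed 's/^.*ERROR[: ]*//' | sort | uniq -c | sort -rn
}

# Demo against a throwaway log file
tmplog=$(mktemp)
cat > "$tmplog" <<'EOF'
2024-01-01T00:00:01 ERROR: connection refused
2024-01-01T00:00:02 INFO: retrying
2024-01-01T00:00:03 ERROR: connection refused
2024-01-01T00:00:04 ERROR: disk full
EOF
top_errors "$tmplog"
rm -f "$tmplog"
```

The same grep/sort/uniq pattern generalises to whatever log source is at hand, whether a file on a VM or output piped from a service's log command.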
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
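For the environment-management piece, one common pattern is fail-fast configuration loading: the process refuses to start if required settings are missing. A minimal sketch (the variable names are hypothetical, not our real config):

```shell
#!/bin/sh
# Minimal fail-fast environment configuration sketch.
# APP_ENV and APP_DB_URL are illustrative names only.

require_env() {
    # Refuse to start if any required variable is unset or empty.
    for name in "$@"; do
        eval "val=\${$name}"
        if [ -z "$val" ]; then
            echo "missing required env var: $name" >&2
            return 1
        fi
    done
}

# Optional settings fall back to safe defaults.
: "${APP_ENV:=development}"

# Demo: succeeds because APP_DB_URL is provided before the check.
APP_DB_URL="postgres://localhost/dev"
require_env APP_DB_URL && echo "config ok"
```

Secret values themselves would come from a secret manager rather than the repository; the check just guarantees the process never starts half-configured.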
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
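To make the shell-automation responsibility concrete, here is a sketch of the kind of retry wrapper that keeps a flaky batch step from failing a whole ETL run (the wrapped command is a placeholder; a real job launcher would sit in its place):

```shell
#!/bin/sh
# Illustrative retry wrapper for flaky batch/ETL steps.
# Usage: retry <max_attempts> <delay_seconds> <command ...>

retry() {
    max=$1; delay=$2; shift 2
    attempt=1
    while ! "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "giving up after $attempt attempts: $*" >&2
            return 1
        fi
        echo "attempt $attempt failed; retrying in ${delay}s" >&2
        attempt=$((attempt + 1))
        sleep "$delay"
    done
}

# Demo: 'false' always fails, so the wrapper gives up after 2 attempts.
retry 2 0 false || echo "job failed after retries"
```

Exponential backoff and alerting on final failure are easy extensions of the same shape.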
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Please Apply - https://zrec.in/sEzbp?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap. We identify the technical and cultural issues in the journey of implementing DevOps practices, and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them appreciate the importance of DevOps.
We provide these services: DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Review, MLOps, and Governance, Risk & Compliance.
We assess a technology company's architecture, security, governance, compliance, and DevOps maturity model, and help it optimize cloud costs, streamline its technology architecture, and set up processes that improve the availability and reliability of its website and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer (Azure)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals in deploying, automating, and managing the software infrastructure. As a DevOps Engineer you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack:
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning of different environments' infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping the cost of the infrastructure to the minimum
- Setting up the right set of security measures
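Since the responsibilities above include backups and DR, here is a minimal sketch of one common pattern: rotating dump files so only the newest N are kept. The dump step itself (pg_dump, mysqldump, etc.) is deliberately left out as a placeholder.

```shell
#!/bin/sh
# Sketch of backup rotation: keep only the newest N files in a directory.
# Assumes simple filenames without spaces or newlines.

rotate_backups() {
    # $1 = backup directory, $2 = number of newest files to keep
    dir=$1; keep=$2
    ls -1t "$dir" | tail -n +"$((keep + 1))" | while read -r old; do
        rm -f "$dir/$old"
    done
}

# Demo with throwaway files stamped with distinct mtimes
bdir=$(mktemp -d)
for i in 1 2 3 4 5; do
    touch -t "20240101000$i" "$bdir/backup-$i.sql"
done
rotate_backups "$bdir" 3
ls -1 "$bdir"
rm -rf "$bdir"
```

In production the same idea usually extends with off-host copies (e.g. to object storage) and, crucially, a periodic restore test.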
Ideal Candidate Profile:
- A graduate/postgraduate degree in Computer Science or related fields
- 2-4 years of strong DevOps experience in a Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Able to work with minimal supervision; loves to work as a self-starter
- Hands-on experience with at least one scripting language: Bash, Python, Go, etc.
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding issues in different layers of the architecture in a production environment and fixing them
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of Networking, Firewalls, load balancers, Nginx, Apache etc.
- Experience in automation tools like Ansible/SaltStack and Jenkins
- Experience in Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, i.e. Vault, Vagrant, Terraform, Consul, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM
- Experience with logging tools like ELK/Lo
(Candidates from service-based companies may apply. Looking for automation experience (shell or Python scripting).)
Shift: US East Coast or West Coast hours (2:30 PM to 10:30 PM India time, or 5:00 PM to 2:00 AM India time)
Experience: 5 to 8 years
Salary: Up to 25 LPA
Hyderabad-based candidates preferred!
Immediate joiners preferred!
Role Objective:
- Ability to identify processes where efficiency could be improved via automation
- Ability to research, prototype, iterate and test automation solutions
- Good technical understanding of cloud service offerings, with a sound appreciation of the associated business processes.
- Ability to build & maintain a strong working relationship with other Technical teams using the agile methodology (internal and external), Infrastructure Partners and Service Engagement Managers.
- Ability to shape and coordinate delivery of key initiatives that improve stability
- Good understanding of the cost of end-to-end service provision, and delivery of associated savings
- Knowledge of web security principles
- Strong Linux experience – comfortable working from command line
- Some networking knowledge (routing, DNS)
- Knowledge of HA and DR concepts and experience implementing them
- Working with the team to analyse and design infrastructure with 99.99% uptime.
Qualifications:
- Infrastructure automation through DevOps scripting (e.g. Python, Ruby, PowerShell, Java, shell) or previous software development experience
- Experience in building and managing production cloud environments from the ground up.
- Hands-on, working experience with primary AWS services (EC2, VPC, RDS, Route53, S3)
- Knowledge of repository management (GitHub, SVN)
- Solid understanding of web application architecture and RDBMS (SQL Server preferred).
- Experience with IT compliance and risk management requirements is a bonus (e.g. security, privacy, HIPAA, SOX, etc.)
- Strong logical, analytical and problem-solving skills with excellent communication skills.
- Should have a degree in computer science, MIS, engineering or equivalent, with 5+ years of experience.
- Should be willing to work in rotational shifts (including nights)
Perks and benefits:
- Health & Wellness
- Paid time off
- Learning at work
- Fun at work
- Night shift allowance
- Comp off
- Pick-up and drop facility available up to a certain distance
We are an on-demand e-commerce technology and services company and a tech-enabled 3PL (third-party logistics) provider. We unlock ecommerce for companies by managing the entire operations lifecycle:
Sell, Fulfil & Reconcile.
Using us, companies can:
• Store their inventory in our fulfilment centers (FCs)
• Sell their products on multiple sales channels (online marketplaces like Amazon, Flipkart, and their own website)
• Get their orders processed within a defined SLA
• Reconcile payments against their sales
The company combines infrastructure and dedicated experts to give brands accountability, peace of mind, and control over the ecommerce journey.
The company is working on a remarkable concept for running an e-commerce business, starting with establishing an online presence for many enterprises. It offers a combination of products and services to create a comprehensive platform and manage all aspects of running a brand online, including the development of an exclusive web store, handling logistics, integrating all marketplaces, and so on.
Who are we looking for?
We are looking for a skilled and passionate DevOps Engineer to join our Centre of Excellence to build and scale effective software solutions for our Ecommerce Domain.
Wondering what your Responsibilities would be?
• Building and setting up new development tools and infrastructure
• Provide full support to the software development teams to deploy, run and roll out new services and new capabilities in Cloud infrastructure
• Implement CI/CD and DevOps best practices for software application teams and assist in executing the integration and operation processes
• Build proactive monitoring and alerting infrastructure services to support operations and system health
• Be hands-on in developing prototypes and conducting Proof of Concepts
• Work in an agile, collaborative environment, partnering with other engineers to bring new solutions to the table
• Join the DevOps Chapter where you’ll have the opportunity to investigate and share information about technologies within the DevOps Engineering Community
What Makes you Eligible?
• Bachelor’s Degree or higher in Computer Science or Software Engineering with appropriate experience
• Minimum of 1 year of proven experience as a DevOps Engineer
• Experience working in a DevOps culture, following Agile software development methodologies such as Scrum
• Proven experience in source code management tools like Bitbucket and Git
• Solid experience in CI/CD pipelines like Jenkins
• Demonstrated ability with configuration management tools (e.g., Terraform, Ansible, Docker and Kubernetes) and repository tools like Artifactory
• Experience in Cloud architecture & provisioning
• Knowledge of programming and of querying NoSQL databases
• Teamwork skills with a problem-solving attitude
3-5 years of experience in DevOps, systems administration, or software engineering roles.
B.Tech. in computer science or a related field from a top-tier engineering college.
Strong technical skills in software development, systems administration, and cloud infrastructure management.
Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Experience with containerization technologies such as Docker and Kubernetes.
Experience with cloud providers such as AWS or Azure.
Experience with scripting languages such as Bash or Python.
Strong problem-solving and analytical skills.
Strong communication and collaboration skills
As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable.
Specific responsibilities will include:
- Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible.
- Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications.
- Ensuring that the organization's systems are secure and compliant with industry standards.
- Collaborating with software developers to design and implement infrastructure as code.
- Providing mentorship and technical guidance to team members.
- Troubleshooting and resolving technical issues in collaboration with other IT professionals.
- Participating in the development and maintenance of the organization's disaster recovery and incident response plans.
To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment.
Role:
- Developing a good understanding of the solutions which Company delivers, and how these link to Company’s overall strategy.
- Making suggestions towards shaping the strategy for a feature and engineering design.
- Managing own workload and usually delivering unsupervised. Accountable for their own workstream or the work of a small team.
- Understanding Engineering priorities and being able to focus on these, helping others to remain focused too.
- Acting as the Lead Engineer on a project. Helps ensure others follow Company processes, such as release and version control.
- An active member of the team, through useful contributions to projects and in team meetings.
- Supervising others. Deputising for a Lead and/or supporting them with tasks. Mentoring new joiners/interns and Masters students. Sharing knowledge and learnings with the team.
Requirements:
- Strong, proven professional programming experience.
- Strong command of Algorithms, Data structures, Design patterns, and Product Architectural Design.
- Good understanding of DevOps, cloud technologies, CI/CD, serverless and Docker, preferably on AWS
- Proven track record and expertise in one of the fields: DevOps/Frontend/Backend
- Excellent coding and debugging skills in any language, with command of at least one programming paradigm; JavaScript/Python/Go preferred
- Experience with at least one type of database system: RDBMS or NoSQL
- Ability to document requirements and specifications.
- A naturally inquisitive and problem-solving mindset.
- Strong experience in using AGILE or SCRUM techniques to build quality software.
- Advantage: experience in React.js, AWS, Node.js, Golang, Apache Spark, ETL tools, data integration systems; certification in AWS; having worked in a product company and been involved in building it from scratch; good communication skills; open-source contributions; proven competitive coding pro
- Experience working on Linux based infrastructure
- Strong hands-on knowledge of setting up production, staging, and dev environments on AWS/GCP/Azure
- Strong hands-on knowledge of technologies like Terraform, Docker, Kubernetes
- Strong understanding of continuous testing environments such as Travis-CI, CircleCI, Jenkins, etc.
- Configuring and managing databases such as MySQL and MongoDB
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
Work with the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with Virtualization, Containers (Kubernetes), Core Networking, Cloud Native Development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, Distributed Systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, scalability, and ensuring maximum availability of server infrastructure
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, Cassandra, etc.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes.