
About Cipher Research Group
Job Title: Senior DevOps Engineer
Location: Gurgaon – Sector 39
Work Mode: 5 Days Onsite
Experience: 5+ Years
About the Role
We are looking for an experienced Senior DevOps Engineer to build, manage, and maintain highly reliable, scalable, and secure infrastructure. The role involves deploying product updates, handling production issues, implementing customer integrations, and leading DevOps best practices across teams.
Key Responsibilities
- Manage and maintain production-grade infrastructure ensuring high availability and performance.
- Deploy application updates, patches, and bug fixes across environments.
- Handle Level-2 support and resolve escalated production issues.
- Perform root cause analysis and implement preventive solutions.
- Build automation tools and scripts to improve system reliability and efficiency.
- Develop monitoring, logging, alerting, and reporting systems.
- Ensure secure deployments following data encryption and cybersecurity best practices.
- Collaborate with development, product, and QA teams for smooth releases.
- Lead and mentor a small DevOps team (3–4 engineers).
Core Focus Areas
Server Setup & Management (60%)
- Hands-on management of bare-metal servers.
- Server provisioning, configuration, and lifecycle management.
- Network configuration including redundancy, bonding, and performance tuning.
Queue Systems – Kafka / RabbitMQ (15%)
- Implementation and management of message queues for distributed systems.
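As an illustration of this kind of queue work, below is a minimal producer/consumer sketch using the kafka-python client; the broker address, topic name, and payload are assumptions for illustration only, not part of this role's actual stack.

```python
# Minimal Kafka producer/consumer sketch using the kafka-python package.
# Broker address, topic name, and message contents are hypothetical.
import json

from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # hypothetical broker address
TOPIC = "events"            # hypothetical topic name


def publish_event(payload: dict) -> None:
    """Serialize a dict to JSON and publish it to the topic."""
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, value=payload)
    producer.flush()


def consume_events() -> None:
    """Read messages from the topic and print them."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for message in consumer:
        print(message.value)


if __name__ == "__main__":
    publish_event({"event": "deploy", "status": "ok"})
```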
Storage Systems – SAN / NAS (15%)
- Setup and management of enterprise storage systems.
- Ensure backup, recovery, and data availability.
Database Knowledge (5%)
- Working experience with Redis, MySQL/PostgreSQL, MongoDB, Elasticsearch.
- Basic database administration and performance tuning.
Telecom Exposure (Good to Have – 5%)
- Experience with SMS, voice systems, or real-time data processing environments.
Technical Skills Required
- Linux administration & Shell scripting
- CI/CD tools – Jenkins
- Git (GitHub / SVN) and branching strategies
- Docker & Kubernetes
- AWS cloud services
- Ansible for configuration management
- Databases: MySQL, MariaDB, MongoDB
- Web servers: Apache, Tomcat
- Load balancing & HA: HAProxy, Keepalived
- Monitoring tools: Nagios and related observability stacks
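To illustrate the load-balancing and monitoring items above, here is a minimal sketch of a backend health-check probe of the sort that pairs with HAProxy/Keepalived and Nagios-style alerting; the backend URLs and the exit-code convention shown are assumptions for illustration.

```python
# Minimal backend health-check probe. Endpoints and thresholds are
# hypothetical; exit codes follow the common Nagios convention.
import sys

import requests

BACKENDS = [
    "http://10.0.0.11:8080/health",  # hypothetical app server 1
    "http://10.0.0.12:8080/health",  # hypothetical app server 2
]


def check(url: str) -> bool:
    """Return True if the backend answers HTTP 200 within the timeout."""
    try:
        return requests.get(url, timeout=3).status_code == 200
    except requests.RequestException:
        return False


if __name__ == "__main__":
    failures = [url for url in BACKENDS if not check(url)]
    for url in failures:
        print(f"CRITICAL: {url} is unhealthy")
    # Nagios-style exit codes: 0 = OK, 2 = CRITICAL
    sys.exit(2 if failures else 0)
```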
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
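For the SQS nice-to-have above, a minimal send/receive sketch using boto3; the queue URL and region are placeholders, and credentials are assumed to come from the environment or an instance role.

```python
# Minimal Amazon SQS send/receive sketch using boto3.
# Queue URL and region are hypothetical placeholders.
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical

sqs = boto3.client("sqs", region_name="us-east-1")


def send(body: str) -> None:
    """Publish a single message to the queue."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)


def drain(max_messages: int = 10) -> None:
    """Receive up to max_messages, print each body, and delete it."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=5,
    )
    for message in response.get("Messages", []):
        print(message["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])


if __name__ == "__main__":
    send("deployment finished")
    drain()
```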
Job Responsibilities:
- Managing and maintaining the efficient functioning of containerized applications and systems within an organization
- Design, implement, and manage scalable Kubernetes clusters in cloud or on-premise environments
- Develop and maintain CI/CD pipelines to automate infrastructure and application deployments, and track all automation processes
- Implement workload automation using configuration management tools, as well as infrastructure as code (IaC) approaches for resource provisioning
- Monitor, troubleshoot, and optimize the performance of Kubernetes clusters and underlying cloud infrastructure
- Ensure high availability, security, and scalability of infrastructure through automation and best practices
- Establish and enforce cloud security standards, policies, and procedures
- Work with agile methodologies and technologies
Primary Requirements:
- Kubernetes: Proven experience in managing Kubernetes clusters (min. 2-3 years)
- Linux/Unix: Proficiency in administering complex Linux infrastructures and services
- Infrastructure as Code: Hands-on experience with CM tools like Ansible, as well as knowledge of resource provisioning with Terraform or other cloud-based utilities
- CI/CD Pipelines: Expertise in building and monitoring complex CI/CD pipelines to manage the build, test, packaging, containerization, and release processes of software
- Scripting & Automation: Strong scripting and process automation skills in Bash, Python
- Monitoring Tools: Experience with monitoring and logging tools (Prometheus, Grafana); a minimal exporter sketch follows this list
- Version Control: Proficient with Git and familiar with GitOps workflows.
- Security: Strong understanding of security best practices in cloud and containerized environments.
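The exporter sketch referenced in the Monitoring Tools item above, assuming the prometheus_client and psutil packages; the scrape port and metric names are hypothetical.

```python
# Minimal host-metrics exporter for Prometheus (and Grafana dashboards)
# using prometheus_client and psutil. Port and metric names are assumptions.
import time

import psutil
from prometheus_client import Gauge, start_http_server

CPU_USAGE = Gauge("node_cpu_usage_percent", "CPU utilisation percent")
DISK_USAGE = Gauge("node_root_disk_usage_percent", "Root filesystem usage percent")

if __name__ == "__main__":
    start_http_server(9100)  # hypothetical scrape port
    while True:
        CPU_USAGE.set(psutil.cpu_percent(interval=1))
        DISK_USAGE.set(psutil.disk_usage("/").percent)
        time.sleep(15)  # roughly matches a typical Prometheus scrape interval
```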
Skills/Traits that would be an advantage:
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
- Working with Ruby, Python, Perl, and Java
- Troubleshooting and working knowledge of various tools, open-source technologies, and cloud services
- Configuring and managing databases and cache layers such as MySQL, Mongo, Elasticsearch, and Redis (see the cache-aside sketch after this list)
- Setting up databases and handling optimisations (sharding, replication, shell scripting, etc.)
- User creation, domain handling, service handling, backup management, port management, and SSL services
- Planning, testing, and development of IT infrastructure (server configuration and databases), and handling technical issues related to servers, Docker, and VM optimisation
- Awareness of DB management, server-related work, and Elasticsearch
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines
- Experience working on Linux-based infrastructure
- Awareness of critical concepts in DevOps and Agile principles
- 6-8 years of experience
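The cache-aside sketch referenced in the list above, assuming Redis in front of MySQL via the redis and PyMySQL packages; hosts, credentials, table, and TTL are hypothetical.

```python
# Minimal cache-aside pattern: Redis in front of MySQL, using the redis and
# PyMySQL packages. Connection details, table, and the 5-minute TTL are
# assumptions for illustration only.
import json

import pymysql
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)
db = pymysql.connect(
    host="localhost", user="app", password="secret",  # hypothetical credentials
    database="appdb", cursorclass=pymysql.cursors.DictCursor,
)


def get_user(user_id):
    """Return a user row, serving from Redis when a fresh copy is cached."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    with db.cursor() as cursor:
        cursor.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cursor.fetchone()
    if row is not None:
        cache.setex(key, 300, json.dumps(row))  # expire after 5 minutes
    return row
```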
About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding,
trusted by the world’s largest, fastest growing, and most innovative organizations. The
company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive’s solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.
Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!
About Role
Our unique machine learning needs led us to open our own data centers, with an
emphasis on distributed high performance computing integrating GPUs. Even with these data centers, we maintain a hybrid infrastructure, using public clouds when they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is
able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can’t be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually performing the same task twice.
Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and
procedures
● Participate in on-call rotation and root cause analysis
Requirements
● Minimum 5 - 10 years of previous experience working directly with Software
Engineering teams as a developer, DevOps Engineer, or Site Reliability
Engineer.
● Experience with infrastructure as a service, distributed systems, and software design at a high-level.
● Comfortable working on Linux infrastructures (Debian) via the CLI
● Able to learn quickly in a fast-paced environment.
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ (see the sketch after this list)
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS, storage systems, RAID, distributed file systems, NFS/iSCSI/CIFS
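The RabbitMQ sketch referenced in the stack above, using the pika client; the broker address, queue name, and message body are assumptions for illustration.

```python
# Minimal sketch of publishing to and consuming from a durable RabbitMQ
# queue with the pika client. Broker, queue name, and payload are hypothetical.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)

# Publish a persistent message to the default exchange.
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=b"rebuild search index",
    properties=pika.BasicProperties(delivery_mode=2),
)


def handle(ch, method, properties, body):
    """Process one message and acknowledge it."""
    print(f"received: {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)


# Consume messages until interrupted.
channel.basic_consume(queue="tasks", on_message_callback=handle)
try:
    channel.start_consuming()
except KeyboardInterrupt:
    connection.close()
```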
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the
development of the company.
Thank you for your interest in Hive, and we hope to meet you soon!
- Recommend a migration and consolidation strategy for DevOps tools
- Design and implement an Agile work management approach
- Define a quality strategy
- Design a secure development process
- Create a tool integration strategy
Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days
Job Description:
Technical Experience (Must Have):
Cloud: Azure
DevOps Tool: Terraform, Ansible, Github, CI-CD pipeline, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL etc)
Database: MongoDB
Professional Attributes: Excellent communication, writing, presentation, and problem-solving skills.
Experience: Minimum of 5-8 years of experience in Cloud Automation and Application
Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator
Role:
- Building and maintaining tools to automate application and infrastructure deployment, and to monitor operations.
- Designing and implementing cloud solutions that are secure, scalable, resilient, monitored, auditable, and cost-optimized.
- Implementing the transformation from the as-is state to the future state.
- Coordinating with other members of the DevOps team, Development, Test, and other teams to enhance and optimize existing processes.
- Providing systems support and implementing monitoring, logging, and alerting solutions that enable production systems to be monitored.
- Writing Infrastructure as Code (IaC) using industry-standard tools and services (see the sketch after this list).
- Writing application deployment automation using industry-standard deployment and configuration tools.
- Designing and implementing continuous delivery pipelines that provision and operate client test and production environments.
- Implementing and staying abreast of Cloud and DevOps industry best practices and tooling.
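The IaC sketch referenced in the Role list above: a minimal Python wrapper that drives Terraform from a pipeline step via subprocess. The working directory is hypothetical, and a real pipeline would add remote state, locking, and approval gates.

```python
# Minimal sketch of driving Terraform from a pipeline step with Python's
# subprocess module. The working directory is a hypothetical example.
import subprocess

TF_DIR = "infra/"  # hypothetical directory holding *.tf files


def terraform(*args: str) -> None:
    """Run a terraform subcommand in TF_DIR and fail loudly on error."""
    subprocess.run(["terraform", *args], cwd=TF_DIR, check=True)


if __name__ == "__main__":
    terraform("init", "-input=false")
    terraform("plan", "-input=false", "-out=tfplan")
    terraform("apply", "-input=false", "tfplan")
```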
REVOS is a smart micro-mobility platform that works with enterprises across the automotive shared mobility value chain to enable and accelerate their smart vehicle journeys. Founded in 2017, it aims to empower all 2- and 3-wheeler vehicles through AI-integrated IoT solutions that make them smart, safe, and connected. We are backed by investors like USV and Prime Venture.
Duties and Responsibilities :
- Automating various tasks in cloud operations, deployment, monitoring, and performance optimization for big data stack.
- Build, release, and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Suggesting architecture improvements, recommending process improvements.
- Evaluate new technology options and vendor products.
- Function well in a fast-paced, rapidly-changing environment
- Communicate effectively with people at all levels of the organization
Qualifications and Required Skills:
- Overall 3+ years of experience in various software engineering roles.
- 3+ years of experience in building applications and tools in any tech stack, preferably deployed on cloud
- The most recent 3 years of experience must be in serverless/cloud-native development on AWS (preferred) or Azure
- Expertise in any of the programming languages – (NodeJS or Python preferable)
- Must have hands-on experience in using AWS/Azure - SDK/APIs.
- Must have experience in deploying, releasing, and managing production systems
- MCA or a degree in engineering in Computer Science, IT, or Electronics stream
Position Summary:
The Technology Lead provides technical leadership with in-depth DevOps experience and is responsible for enabling the delivery of high-quality projects to Saviant clients through a highly effective DevOps process. This is a highly technical role, with a focus on analysing, designing, documenting, and implementing a complete DevOps process for enterprise applications using the most advanced technology stacks, methodologies, and best practices within the agreed timelines.
Individuals in this role will need to have good technical and communication skills and strive to be on the cutting edge, innovate, and explore to deliver quality solutions to Saviant Clients.
Your Role & Responsibilities at Saviant:
• Design, analyze, document, and develop the technical architecture for on-premise as well as cloud-based DevOps solutions around customers’ business problems.
• Lead the end-to-end process and the setup and implementation of configuration management, CI/CD, and monitoring platforms.
• Conduct reviews of the design and implementation of DevOps processes while establishing and maintaining best practices.
• Set up new processes to improve the quality of development, delivery, and deployment.
• Provide technical support and guidance to project team members.
• Upskill by learning technologies beyond your traditional area of expertise.
• Contribute to pre-sales, proposal creation, POCs, technology incubation from technical and architecture perspective
• Participate in recruitment and people development initiatives.
Job Requirements/Qualifications:
• Educational Qualification: BE, BTech, MTech, MCA from a reputed institute
• 6 to 8 years of hands-on experience with the DevOps process using technologies like .NET Core, Python, C#, MVC, ReactJS, Android, iOS, Linux, Windows
• Strong hands-on experience across the full life cycle of DevOps: orchestration, configuration, security, CI/CD, release management, and environment management
• Solid hands-on knowledge of DevOps technologies and tools such as Jenkins, Spinnaker, Azure DevOps, Chef, Puppet, JIRA, TFS, Git, SVN, and various scripting tools
• Solid hands-on knowledge of containerization technologies and tools such as Docker, Kubernetes, Cloud Foundry
• In-depth understanding of various development and deployment architectures from a DevOps perspective
• Expertise in ground-up DevOps projects involving multiple agile teams spread across geographies.
• Experience with various Agile project management software, techniques, and tools
• Strong analytical and problem solving skills
• Excellent written and oral communication skills
• Enjoys working as part of agile software teams in a startup environment.
Who Should Apply?
• You have independently managed end-to-end DevOps projects over the last 2 years, including understanding requirements, designing and implementing solutions, and setting up best practices across different business domains.
• You are well versed with Agile development methodologies and have successfully implemented them across at least 2-3 projects
• You have led a development team of 5 to 8 developers with technology responsibility
• You have served as “Single Point of Contact” for managing technical escalations and decisions
Job Description :
- The engineer should be highly motivated, able to work independently, and able to guide other engineers within and outside the team.
- The engineer should possess varied software skills in shell scripting, Linux, Oracle Database, WebLogic, Git, Ant, Hudson, Jenkins, Docker, and Maven
- Work is super fun, non-routine, and challenging, involving the application of advanced skills in the area of specialization.
Key responsibilities :
- Design, develop, troubleshoot, and debug software programs for Hudson monitoring, cloud software installation, and infrastructure and cloud application usage monitoring.
Required Knowledge :
- Source configuration management, Docker, Puppet, Ansible, application and server monitoring, AWS, database administration, Kubernetes, log monitoring, CI/CD; designing and implementing build, deployment, and configuration management; improving infrastructure development; scripting languages.
- Good written and verbal communication skills
Qualification :
Education and Experience: Bachelor's/Master's in Computer Science
Open to working 24x7 shifts