About Hive
Hive is the leading provider of cloud-based AI solutions for content understanding,
trusted by the world’s largest, fastest growing, and most innovative organizations. The
company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive’s solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more.
Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!
About Role
Our unique machine learning needs led us to open our own data centers, with an
emphasis on distributed, GPU-accelerated high-performance computing. Even with these data centers, we maintain a hybrid infrastructure, using public clouds where they are the right fit. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Site Reliability team to maintain the reliability of our enterprise SaaS offering for our customers. Our ideal candidate is someone who is
able to thrive in an unstructured environment and takes automation seriously. You believe there is no task that can’t be automated and no server scale too large. You take pride in optimizing performance at scale in every part of the stack and never manually performing the same task twice.
Responsibilities
● Create tools and processes for deploying and managing hardware for Private Cloud Infrastructure.
● Improve workflows of developer, data, and machine learning teams
● Manage integration and deployment tooling
● Create and maintain monitoring and alerting tools and dashboards for various services, and audit infrastructure
● Manage a diverse array of technology platforms, following best practices and procedures
● Participate in on-call rotation and root cause analysis
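The monitoring-and-alerting responsibility above often starts with simple threshold rules. A minimal sketch of one such rule, assuming a sustained-breach policy (fire only after several consecutive bad samples); the metric name and thresholds are illustrative, not Hive's actual setup:

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """Fire when a metric stays above `threshold` for `sustain` consecutive samples."""
    metric: str
    threshold: float
    sustain: int

def evaluate(rule: AlertRule, samples: list[float]) -> bool:
    """Return True if the rule should fire on this sample window."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > rule.threshold else 0
        if streak >= rule.sustain:
            return True
    return False

# Hypothetical GPU-temperature rule: page only after 3 consecutive breaches.
rule = AlertRule(metric="gpu_temp_c", threshold=85.0, sustain=3)
print(evaluate(rule, [80, 86, 87, 88, 84]))  # True: three breaches in a row
print(evaluate(rule, [86, 84, 86, 84, 86]))  # False: breaches never sustained
```

Requiring a sustained breach rather than a single bad sample is a common way to cut alert noise from transient spikes.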
Requirements
● 5-10 years of experience working directly with Software Engineering teams as a developer, DevOps Engineer, or Site Reliability Engineer
● Experience with infrastructure as a service, distributed systems, and high-level software design
● Comfortable working on Linux infrastructure (Debian) via the CLI
● Able to learn quickly in a fast-paced environment
● Able to debug, optimize, and automate routine tasks
● Able to multitask, prioritize, and manage time efficiently while working independently
● Can communicate effectively across teams and management levels
● Degree in computer science, or similar, is an added plus!
Technology Stack
● Operating Systems - Linux/Debian Family/Ubuntu
● Configuration Management - Chef
● Containerization - Docker
● Container Orchestrators - Mesosphere/Kubernetes
● Scripting Languages - Python/Ruby/Node/Bash
● CI/CD Tools - Jenkins
● Network hardware - Arista/Cisco/Fortinet
● Hardware - HP/SuperMicro
● Storage - Ceph, S3
● Database - Scylla, Postgres, Pivotal GreenPlum
● Message Brokers: RabbitMQ
● Logging/Search - ELK Stack
● AWS: VPC/EC2/IAM/S3
● Networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS; storage systems: RAID, distributed file systems, NFS/iSCSI/CIFS
Who we are
We are a group of ambitious individuals who are passionate about creating a revolutionary AI company. At Hive, you will have a steep learning curve and an opportunity to contribute to one of the fastest growing AI start-ups in San Francisco. The work you do here will have a noticeable and direct impact on the
development of the company.
Thank you for your interest in Hive, and we hope to meet you soon!

Similar jobs
🚀 Hiring: Azure DevOps Engineer – Immediate Joiners Only! 🚀
📍 Location: Pune (Hybrid)
💼 Experience: 5+ Years
🕒 Mode of Work: Hybrid
Are you a proactive and skilled Azure DevOps Engineer looking for your next challenge? We are hiring immediate joiners to join our dynamic team! If you are passionate about CI/CD, cloud automation, and SRE best practices, we want to hear from you.
🔹 Key Skills Required:
✅ Cloud Expertise: Proficiency in any cloud (Azure preferred)
✅ CI/CD Pipelines: Hands-on experience in designing and managing pipelines
✅ Containers & IaC: Strong knowledge of Docker, Terraform, Kubernetes
✅ Incident Management: Quick issue resolution and RCA
✅ SRE & Observability: Experience with SLI/SLO/SLA, monitoring, tracing, logging
✅ Programming: Proficiency in Python, Golang
✅ Performance Optimization: Identifying and resolving system bottlenecks
Challenging opportunity to improve and manage a complex AWS infrastructure built for an ECG wearable, supporting live ECG streaming and patient monitoring.
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
- Ensure high availability, scalability, and security of cloud resources.
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
- Deploy, scale, and manage Kubernetes clusters.
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
- Implement monitoring and alerting to ensure pipeline efficiency.
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
- Collaborate with development teams to optimize branching strategies and code reviews.
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
- Write scripts to optimize and maintain workflows.
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance.
- Analyze logs and metrics to troubleshoot and resolve issues.
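The log-analysis duty above is the kind of task that often begins as a small script. A hedged sketch, assuming an access-log-style line format (the pattern, paths, and sample lines are illustrative, not this team's actual logs):

```python
import re
from collections import Counter

# Illustrative pattern for lines like: "10.0.0.1 GET /api/users 503 120ms"
LINE = re.compile(r"^(?P<ip>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d{3})")

def server_errors_by_path(lines):
    """Count HTTP 5xx responses per request path; skip unparseable lines."""
    errors = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group("status").startswith("5"):
            errors[m.group("path")] += 1
    return errors

logs = [
    "10.0.0.1 GET /api/users 503 120ms",
    "10.0.0.2 GET /api/users 200 15ms",
    "10.0.0.3 POST /api/orders 500 88ms",
    "not a log line",
]
print(server_errors_by_path(logs))  # Counter({'/api/users': 1, '/api/orders': 1})
```

In practice a dedicated stack (e.g., the ELK tooling mentioned in the preferred qualifications) replaces scripts like this, but the same grouping logic underlies the dashboards.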
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
- Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
- Hands-on experience building and managing CI/CD pipelines.
- Proficient in using Git for version control.
- Experience with scripting languages such as Bash, Python, or PowerShell.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Solid understanding of networking, security, and system administration.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with serverless architectures and microservices.
Position: Senior DevOps Engineer (Azure Cloud Infra & Application deployments)
Location:
Hyderabad
Hiring a Senior DevOps Engineer with 2 to 5 years of experience.
Primary Responsibilities
Strong programming experience in PowerShell and batch scripts.
Strong expertise in Azure DevOps, GitLab, CI/CD, Jenkins, GitHub Actions, and Azure infrastructure.
Strong experience configuring infrastructure and deploying applications with Kubernetes, Docker & Helm charts, App Services, serverless, SQL databases, cloud services, and container deployments.
Continuous integration, deployment, and version control (Git/ADO).
Strong experience in managing and configuring RBAC, managed identity and
security best practices for cloud environments.
Strong verbal and written communication skills.
Experience with agile development process.
Good analytical skills.
Additional Responsibilities
Familiar with various design and architecture patterns.
Work with modern frameworks and design patterns.
Experience with cloud applications on Azure/AWS. Should have experience developing solutions and plugins, and should have used XRM Toolbox/ToolKit.
Experience with Customer Portal, FetchXML, Power Apps, and Power Automate is good to have.
Role – Devops
Experience 3 – 6 Years
Roles & Responsibilities –
- 3-6 years of experience in deploying and managing highly scalable fault resilient systems
- Strong experience in container orchestration and server automation tools such as Kubernetes, Google Container Engine, Docker Swarm, Ansible, Terraform
- Strong experience with Linux-based infrastructures, Linux/Unix administration, AWS, Google Cloud, Azure
- Strong experience with databases such as MySQL, Hadoop, Elasticsearch, Redis, Cassandra, and MongoDB.
- Knowledge of scripting languages such as Java, JavaScript, Python, PHP, Groovy, Bash.
- Experience in configuring CI/CD pipelines using Jenkins, GitLab CI, Travis.
- Proficient in technologies such as Docker, Kafka, Raft and Vagrant
- Experience in implementing queueing services such as RabbitMQ, Beanstalkd, Amazon SQS and knowledge in ElasticStack is a plus.
- Installation, configuration management, performance tuning, and monitoring of web, app, and database servers.
- Install, setup and management of Java, PHP and NodeJS stack with software load balancers.
- Install, setup and administer MySQL, Mongo, Elasticsearch & PostgreSQL DBs.
- Install, set up, and maintain monitoring solutions such as Nagios and Zabbix.
- Design and implement DevOps processes for new projects following the department's objectives of automation.
- Collaborate on projects with development teams to provide recommendations, support and guidance.
- Work towards full automation, monitoring, virtualization and containerization.
- Create and maintain tools for deployment, monitoring and operations.
- Automate processes in a scalable, easy-to-understand way, documented so others can follow.
- Develop and deploy software that will help drive improvements towards the availability, performance, efficiency, and security of services.
- Maintain 24/7 availability for responsible systems and be open to on-call rotation.
Senior Devops Engineer
Who are we?
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What do we believe?
- Best practices are overrated
- Implementing best practices can only make one ‘average’.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How do we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
- Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion for sales: When was the last time you stopped at a remote gas station while on vacation and ended up helping the owner SaaSify his 7 gas stations across other geographies?
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’
Introduction
When was the last time you thought about rebuilding your smart phone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.
We are quite keen to meet you if:
- You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
- You like experimenting, taking risks and thinking big.
3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build and recruit & energize.
- This is NOT a position for someone who likes to be told what needs to be done.
3 things this position IS about:
- Attention to detail matters.
- Roles, titles, ego does not matter; getting things done matters; getting things done quicker & better matters the most.
- Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?
Roles and Responsibilities
This is an entrepreneurial Cloud/DevOps Lead position that evolves into a Director of Cloud Engineering role. This position requires fanatic iterative-improvement ability: architect a solution, code, research, understand customer needs, research more, rebuild and re-architect; you get the drift. We are seeking hard-core-geeks-turned-successful-techies who are interested in seeing their work used by millions of users the world over.
Responsibilities:
- Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML technologies
- Design, deploy and maintain Cloud infrastructure for Clients – Domestic & International
- Develop tools and automation to make platform operations more efficient, reliable and reproducible
- Create Container Orchestration (Kubernetes, Docker), strive for full automated solutions, ensure the up-time and security of all cloud platform systems and infrastructure
- Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
- Provide business, application, and technology consulting in feasibility discussions with technology team members, customers, and business partners
- Take initiatives to lead, drive and solve during challenging scenarios
Requirements:
- 3+ years of experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems, RHEL/CentOS preferred
- Specialize in one or two cloud deployment platforms: AWS, GCP, Azure
- Hands on experience with AWS services (EC2, VPC, RDS, DynamoDB, Lambda)
- Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Knowledge on Configuration Management tools such as Ansible, Terraform, Puppet, Chef
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
- Deep experience in customer facing roles with a proven track record of effective verbal and written communications
- Dependable and good team player
- Desire to learn and work with new technologies
Key Success Factors
- Are you
- Likely to forget to eat, drink or pee when you are coding?
- Willing to learn, re-learn, research, break, fix, build, re-build and deliver awesome code to solve real business/consumer needs?
- An open source enthusiast?
- Absolutely technology agnostic and believe that business processes define and dictate which technology to use?
- Ability to think on your feet, and follow-up with multiple stakeholders to get things done
- Excellent interpersonal communication skills
- Superior project management and organizational skills
- Logical thought process; ability to grasp customer requirements rapidly and translate the same into technical as well as layperson terms
- Ability to anticipate potential problems, determine and implement solutions
- Energetic, disciplined, with a results-oriented approach
- Strong ethics and transparency in dealings with clients, vendors, colleagues and partners
- Attitude of ‘give me 5 sharp freshers and 6 months and I will rebuild the way people communicate over the internet.’
- You are customer-centric, and feel strongly about building scalable, secure, quality software. You thrive and succeed in delivering high quality technology products in a growth environment where priorities shift fast.
Job Summary
Creates, modifies, and maintains software applications individually or as part of a team. Provides technical leadership on a team, including training and mentoring of other team members. Provides technology and architecture direction for the team, department, and organization.
Essential Duties & Responsibilities
- Develops software applications and supporting infrastructure using established coding standards and methodologies
- Sets an example for software quality through multiple levels of automated tests, including but not limited to unit, API, end-to-end, and load.
- Self-starter and self-organized - able to work without supervision
- Develops tooling, test harnesses and innovative solutions to understand and monitor the quality of the product
- Develops infrastructure as code to reliably deploy applications on demand or through automation
- Understands cloud managed services and builds scalable and secure applications using them
- Creates proof of concepts for new ideas that answer key questions of feasibility, desirability, and viability
- Work with other technical leaders to establish coding standards, development best practices and technology direction
- Performs thorough code reviews that promote better understanding throughout the team
- Work with architects, designers, business analysts and others to design and implement high quality software solutions
- Builds intuitive user interfaces with the end user persona in mind using front end frameworks and styling
- Assist product owners in backlog grooming, story breakdown and story estimation
- Collaborate and communicate effectively with team members and other stakeholders throughout the organization
- Document software changes for use by other engineers, quality assurance and documentation specialists
- Master the technologies, languages, and practices used by the team and project assigned
- Train others in the technologies, languages, and practices used by the team
- Troubleshoot, instrument, and debug existing software, resolving root causes of defective behavior
- Guide the team in setting up the infrastructure in the cloud.
- Setup the security protocols for the cloud infrastructure
- Works with the team in setting up the data hub in the cloud
- Create dashboards for the visibility of the various interactions between the cloud services
- Other duties as assigned
Experience
Education
- BA/BS in Computer Science, a related field or equivalent work experience
Minimum Qualifications
- Mastered advanced programming concepts, including object oriented programming
- Mastered technologies and tools utilized by team and project assigned
- Able to train others on general programming concepts and specific technologies
- Minimum 8 years’ experience developing software applications
Skills/Knowledge
- Must be expert in advanced programming skills and database technology
- Must be expert in at least one technology and/or language and proficient in multiple technologies and languages:
- (Specific languages needed will vary based on development department or project)
- .Net Core, C#, Java, SQL, JavaScript, Typescript, Python
- Additional desired skills:
- Single-Page Applications, Angular (v9), Ivy, RXJS, NGRX, HTML5, CSS/SASS, Web Components, Atomic Design
- Test First approach, Test Driven Development (TDD), Automated testing (Protractor, Jasmine), Newman Postman, artillery.io
- Microservices, Terraform, Jenkins, Jupyter Notebook, Docker, NPM, Yarn, Nuget, NodeJS, Git/Gerrit, LaunchDarkly
- Amazon Web Services (AWS), Lambda, S3, Cognito, Step Functions, SQS, IAM, Cloudwatch, Elasticache
- Database Design, Optimization, Replication, Partitioning/Sharding, NoSQL, PostgreSQL, MongoDB, DynamoDB, Elastic Search, PySpark, Kafka
- Agile, Scrum, Kanban, DevSecOps
- Strong problem-solving skills
- Outstanding communications and interpersonal skills
- Strong organizational skills and ability to multi-task
- Ability to track software issues to successful resolution
- Ability to work in a collaborative fast paced environment
- Setting up complex AWS data storage hub
- Well versed in setting up infrastructure security in the interactions between the planned components
- Experienced in setting up dashboards for analyzing the various operations in the AWS infra setup.
- Ability to learn new development language quickly and apply that knowledge effectively
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment
for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA applications, High Performance Compute, and storage environments, consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the company’s innovation. The following figures give an idea of the scale: the landscape has 75,000+ cores and 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to provide 24/7 support to the chip hardware design and software design projects.
Since the landscape is really too complex to manage the traditional way, it is our strategy to transform our R&D IT design infrastructure into “software-defined datacenters”. This transformation entails a different way of work and a different mind-set (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That’s why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability which is required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.
Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team to design, develop and maintain automation
products with playbooks (YAML) and modules (Python/PowerShell) in tools like
Ansible Tower and Service-Now
• Test and verify the code produced by the team (including your own) to continuously
improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with ServiceNow developers in your team to provide the best intuitive
end-user experience possible for the virtual machine OS deployments
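The "keep the base OS image up to date" responsibility above ultimately comes down to comparing what an image ships against a required baseline. A minimal sketch of that check, assuming a simple manifest of dotted-integer versions (real RPM version strings are more complex; the package names and versions here are illustrative):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn '8.4.2' into (8, 4, 2) for comparison; assumes dotted-integer versions."""
    return tuple(int(part) for part in v.split("."))

def outdated_packages(installed: dict[str, str], required: dict[str, str]) -> list[str]:
    """Return packages that are missing or older than the required baseline."""
    stale = []
    for name, minimum in required.items():
        have = installed.get(name)
        if have is None or parse_version(have) < parse_version(minimum):
            stale.append(name)
    return sorted(stale)

# Hypothetical manifests: what the image has vs. what the baseline demands.
installed = {"openssl": "1.1.1", "kernel": "4.18.0"}
required = {"openssl": "1.1.1", "kernel": "4.18.2", "sudo": "1.9.5"}
print(outdated_packages(installed, required))  # ['kernel', 'sudo']
```

A check like this would typically run inside the image-build pipeline (e.g., as a test step in an Ansible-driven build) so that stale images fail fast rather than reaching virtual machine owners.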
We are looking for a DevOps engineer/consultant with the following characteristics:
• Master or Bachelor degree
• You are a technical, creative, analytical and open-minded engineer that is eager to
learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: You have great knowledge of Linux servers (RedHat), RedHat Satellite 6, and other RedHat products
• Experience in Infrastructure services, e.g., Networking, DNS, LDAP, SMTP
• DevOps mindset: You are a team-player that is eager to develop and maintain cool
products to automate/optimize processes in a complex IT infrastructure and are able
to build and maintain productive working relationships
• You have great English communication skills, both verbal and written.
• Willing to work outside business hours to support the platform for critical R&D applications
Other competences we value, but are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your colleagues to design and develop new solutions that are as secure as possible.
• You are a master of automation and orchestration with tools like Ansible Tower (or comparable) and feel comfortable developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version
control and branching strategies with Git, and preferably have worked with GitLab
before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python and tools
like Selenium.
• An enthusiast of cloud platforms like Azure & AWS.
• Background in and affinity with R&D


