

Are you the one? Quick self-discovery test:
- Love for the cloud: When was the last time dinner turned into you acting out "How would 'Jerry Seinfeld' pitch cloud platforms & products to this prospect?" while your friend did the 'Sheldon' version of the same thing?
- Passion: When was the last time you stopped at a remote gas station on vacation and ended up helping the owner SaaS-ify his 7 gas stations across other geographies?
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, 'If Elon Musk can attempt to take humanity to Mars, why can't we take your business to run on the cloud?'
Your bucket of undertakings:
This position is responsible for consulting with clients and proposing architectural solutions to move and improve infrastructure from on-premises to the cloud, or to optimize cloud spend, including moves from one public cloud to another (a minimal sketch of the kind of spend analysis involved appears after this list).
- Be the first to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
- Continually augment skills and learn new tech as the technology and client needs evolve
- Use your experience in Google Cloud Platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
- Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud-based technology and methods.
- Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
- Participate in technical reviews of requirements, designs, code, and other artifacts
- Identify and keep abreast of new technical concepts in Google Cloud Platform
- Security, Risk, and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance, and related areas.
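As a hedged illustration of the spend-analysis side of this work (referenced in the intro above), here is a minimal Python sketch that pulls month-to-date AWS cost grouped by service through the Cost Explorer API via boto3. The choice of AWS, the credentials setup, and the reporting granularity are assumptions; an equivalent exercise on GCP or Azure would use their billing exports or APIs.

```python
# Minimal sketch: month-to-date AWS spend by service via Cost Explorer.
# Assumes boto3 is installed and credentials with ce:GetCostAndUsage are configured.
# Note: run mid-month; Cost Explorer requires Start to be strictly before End.
import datetime

import boto3


def spend_by_service() -> dict:
    today = datetime.date.today()
    start = today.replace(day=1)
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[service] = costs.get(service, 0.0) + amount
    return costs


if __name__ == "__main__":
    for service, amount in sorted(spend_by_service().items(), key=lambda kv: -kv[1]):
        print(f"{service}: ${amount:,.2f}")
```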
Accomplishment Set
- Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
- Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
- Strong service attitude and a commitment to quality.
- Highly organised and efficient.
- Confident working with others to inspire a high-quality standard.
Experience:
- 4-8 years of experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems and/or Windows servers
- Specialize in one or two cloud deployment platforms: AWS, GCP
- Hands-on experience with AWS and GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, API Gateway, AppSync, service mesh; GKE, Compute Engine)
- Experience in one or more scripting languages (Python, Bash)
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
- DevOps Technologies (AWS DevOps, Jenkins, Git, Maven)
- Knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef, Packer
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
Education:
- Is education overrated? Yes, we believe so. However, there is no way to locate you otherwise, so unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or evidence that you have been programming since you were 12. The latter is better, and we will find you faster if you call it out in some way. And it's not just degrees; we are not too thrilled by tech certifications either ... :)
- To reiterate: passion for all things tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude, and a strong 'desire to deliver' outlive those fancy degrees!
- 3-8 years of experience with hands-on experience in Cloud Computing (AWS/GCP) and IT operational experience in a global enterprise environment.
- Good analytical, communication, problem solving, and learning skills.
- Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.

About Searce Inc
Searce is a cloud, automation & analytics led process-improvement company helping futurify businesses. Searce is a premier Google Cloud partner across all products and services, and is the largest cloud systems integrator for enterprises, with the largest number of enterprise Google Cloud clients in India.
Searce specializes in helping businesses move to the cloud, build on the next-generation cloud, and adopt SaaS - helping reimagine the 'why' and redefine 'what's next' for workflows, automation, machine learning, and related futuristic use cases. Searce was recognized by Google as one of its top partners for 2015 and 2016.
Searce's organizational culture encourages making mistakes and questioning the status quo; that allows us to specialize in simplifying complex business processes and to take a technology-agnostic approach to create, improve, and deliver.
Job description
Job Title: DevOps Engineer
Experience: 6–10 years
Role Summary
We are seeking a skilled DevOps Engineer with hands-on expertise in Azure DevOps, automation, Infrastructure as Code (IaC), and cloud-native Azure services. The candidate will support environment automation, compliance enforcement, and app modernization initiatives, including CI/CD pipeline design, policy-as-code integration, and modernization of legacy applications using modern Azure PaaS services.
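As one hedged example of the pipeline automation this role covers, the sketch below queues an Azure DevOps YAML pipeline run through the Pipelines REST API using Python and a personal access token. The organization, project, pipeline id, and branch are placeholders, and the api-version shown is an assumption about what the target organization supports.

```python
# Minimal sketch: queue an Azure DevOps pipeline run via the Pipelines REST API.
# ORG, PROJECT, PIPELINE_ID, the branch, and the api-version are placeholders/assumptions.
import os

import requests

ORG = "my-org"          # Azure DevOps organization (placeholder)
PROJECT = "my-project"  # project name (placeholder)
PIPELINE_ID = 42        # numeric pipeline id (placeholder)
PAT = os.environ["AZDO_PAT"]  # personal access token supplied via environment variable

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
    f"{PIPELINE_ID}/runs?api-version=7.1"
)

resp = requests.post(
    url,
    auth=("", PAT),  # basic auth with empty username and the PAT as password
    json={"resources": {"repositories": {"self": {"refName": "refs/heads/main"}}}},
    timeout=30,
)
resp.raise_for_status()
run = resp.json()
print("Queued run:", run["id"], run["state"])
```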
Key Responsibilities:
- Design and implement CI/CD pipelines in Azure DevOps using YAML and Classic pipelines for infrastructure and app deployments.
- Implement Azure Policy and Enterprise Policy-as-Code (EPAC) for compliance enforcement.
- Develop and manage automation scripts using PowerShell, REST APIs, and Bicep/ARM templates.
- Lead modernization of legacy applications using Azure App Services, Azure Functions, API Management, and Azure AD.
- Manage secrets using Azure Key Vault, configure service connections, and ensure secure deployments.
- Support Agile processes using Azure Boards (Scrum/Kanban), manage sprints, user stories, and iterations.
- Create and maintain Azure DevOps dashboards, analytics views, and optionally integrate with Power BI for advanced reporting.
- Troubleshoot and optimize CI/CD pipelines, deployment modules, and infrastructure provisioning.
Technical Skills
Must-Have:
- Azure DevOps (Pipelines, Boards, Repos, Artifacts, Service Connections)
- PowerShell scripting (including writing modules, Pester tests)
- Git and source control (branching, pull requests, code reviews)
- Azure Policy, Enterprise Policy-as-Code (EPAC)
- Infrastructure as Code: ARM, Bicep, Terraform
- Azure PaaS: App Services, Azure Functions, API Management, Azure SQL, Key Vault, Azure AD, App Insights
- REST API integration
- CI/CD & DevOps principles, secrets management, automation pipelines
Nice-to-Have:
- Octopus Deploy, Power Automate
- Azure certifications (AZ-204, AZ-400)
- .NET Framework deployment familiarity
- Agile tools: Jira, Asana
- Reporting with Power BI
Soft Skills
- Excellent communication and stakeholder engagement
- Strong analytical and problem-solving abilities
- Agile mindset, self-starter, and ability to work independently
- Proven ability to work in cross-functional DevOps teams
Educational qualification:
B.E/B.Tech/MCA
Job Description
Position: SRE Developer / DevOps Engineer
Location: Mumbai
Experience: 3-10 years
About HaystackAnalytics:
HaystackAnalytics is a company working in the deep technology of genomics, computing, and data science to create a first-of-its-kind clinical reporting engine in healthcare. We are a new but well-funded company with a tremendous amount of pedigree in the team (IIT founders, IIT & IIM core team). Some of the technologies we have created are global firsts in infectious disease and chronic diagnostics. As a product company creating a huge amount of IP, our Technology and R&D teams are our crown jewels. With the early success of our products in India, we are now expanding to take them to international shores.
Inviting Passionate Engineers to join a new age enterprise:
At HaystackAnalytics, we rely on our dynamic team of engineers to solve the many challenges and puzzles that come with our rapidly evolving stack that deals with Healthcare and Genomics.
We’re looking for full stack engineers who are passionate problem solvers, ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack.
Our ideal candidate has experience building enterprise products and an understanding of, and experience working with, new-age front-end technologies, web frameworks, APIs, databases, distributed computing, back-end languages, caching, security, message-based architectures, and more.
You’ll be joining a small team working at the forefront of new technology, solving the challenges that impact both the front end and back end architecture, and ultimately, delivering amazing global user experiences.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and back-end services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are unit-testable, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS/SASS)
- ES6 / TypeScript
- Electron / Tauri
- Component libraries (Web Components / Radix / Material)
- CSS (Tailwind)
- State management: Redux / Zustand / Recoil
- Build tools: webpack / Vite / Parcel / Turborepo
- Frameworks: Next.js
- Design patterns
- Test automation frameworks (Cypress, Playwright, etc.)
- Functional programming concepts
- Scripting (Bash, Python)
Backend Skills
- Node / Deno / Bun, with Express / NestJS
- Languages: TypeScript / Python / Rust
- REST / GraphQL
- SOLID design principles
- Storage: MongoDB / object storage / PostgreSQL
- Caching: Redis / in-memory data grid (see the sketch after this list)
- Pub/sub: Kafka / SQS / SNS / EventBridge / RabbitMQ
- Container technology: Docker / Kubernetes
- Cloud: Azure, AWS, OpenShift
- GitOps
- Automation: Terraform, serverless
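As a small, hedged illustration of the caching item above, here is a minimal cache-aside sketch using redis-py. The Redis host, the TTL, and the expensive_lookup helper are illustrative assumptions, not part of the actual stack.

```python
# Minimal cache-aside sketch with redis-py; host/port, TTL, and the
# expensive_lookup() helper are illustrative assumptions.
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def expensive_lookup(user_id: str) -> dict:
    # Placeholder for a slow database or API call.
    return {"id": user_id, "name": "example"}


def get_user(user_id: str, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit
    value = expensive_lookup(user_id)             # cache miss: fetch from source
    r.setex(key, ttl_seconds, json.dumps(value))  # store with expiry
    return value


if __name__ == "__main__":
    print(get_user("42"))
```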
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
Roles and Responsibilities:
- AWS Cloud Management: Design, deploy, and manage AWS cloud infrastructure. Optimize and maintain cloud resources for performance and cost efficiency, and monitor and ensure the security of cloud-based systems (a minimal example of this kind of resource audit appears after this list).
- Automated Provisioning: Develop and implement automated provisioning processes for infrastructure deployment. Utilize tools like Terraform and Packer to automate and streamline the provisioning of resources.
- Infrastructure as Code (IaC): Champion the use of Infrastructure as Code principles. Collaborate with development and operations teams to define and maintain IaC scripts for infrastructure deployment and configuration.
- Collaboration and Communication: Work closely with cross-functional teams to understand project requirements and provide DevOps expertise. Communicate effectively with team members and stakeholders regarding infrastructure changes, updates, and improvements.
- Continuous Integration/Continuous Deployment (CI/CD): Implement and maintain CI/CD pipelines to automate software delivery processes. Ensure reliable and efficient deployment of applications through the development lifecycle.
- Performance Monitoring and Optimization: Implement monitoring solutions to track system performance, troubleshoot issues, and optimize resource utilization. Proactively identify opportunities for system and process improvements.
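As a hedged illustration of the resource auditing implied by the AWS cloud-management responsibility above, the sketch below uses boto3 to flag running EC2 instances that are missing an "Owner" tag. The tag key, the region, and the tagging policy itself are assumptions.

```python
# Minimal sketch: flag running EC2 instances missing an "Owner" tag (boto3).
# The tag key, region, and the idea of an "owner" policy are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                print(f"Untagged instance: {instance['InstanceId']}")
```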
Mandatory Skills:
- Proven experience as a DevOps Engineer or similar role, with a focus on AWS.
- Strong proficiency in automated provisioning and cloud management.
- Experience with Infrastructure as Code tools, particularly Terraform and Packer.
- Solid understanding of CI/CD pipelines and version control systems.
- Strong scripting skills (e.g., Python, Bash) for automation tasks.
- Excellent problem-solving and troubleshooting skills.
- Good interpersonal and communication skills for effective collaboration.
Secondary Skills:
- AWS certifications (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect).
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Knowledge of microservices architecture and serverless computing.
- Familiarity with monitoring and logging tools (e.g., CloudWatch, ELK stack).


This role is part of the Quickbase Center of Excellence, a global initiative operated in partnership with Aeries, and offers an exciting opportunity to work on cutting-edge DevOps technologies with strong collaboration across teams in the US, Bulgaria, and India.
Key Responsibilities
- Build and manage CI/CD pipelines across environments
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC)
- Develop internal tools and scripts to boost developer productivity
- Set up and maintain monitoring, alerting, and performance dashboards (a minimal alarm sketch appears after this list)
- Collaborate with cross-functional engineering teams to ensure infrastructure scalability and security
- Contribute to the DevOps Community of Practice by sharing best practices and tools
- Continuously evaluate and integrate new technologies and DevOps trends
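As a minimal, hedged example of the monitoring and alerting work called out above, the sketch below creates a CloudWatch CPU-utilization alarm for a single EC2 instance with boto3. The instance id, SNS topic ARN, and 80% threshold are placeholders.

```python
# Minimal sketch: create a CloudWatch CPU alarm for one EC2 instance (boto3).
# INSTANCE_ID, the SNS topic ARN, and the 80% threshold are placeholders.
import boto3

INSTANCE_ID = "i-0123456789abcdef0"                       # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops"  # placeholder

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,                    # 5-minute datapoints
    EvaluationPeriods=2,           # two consecutive breaches trigger the alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],  # notify an SNS topic when the alarm fires
)
print("Alarm created")
```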
Skills & Experience Required
- Strong scripting experience: Bash, PowerShell, Python, or Groovy
- Hands-on with containerization tools like Docker and Kubernetes
- Proficiency in Infrastructure as Code: Terraform, CloudFormation, or Azure Templates
- Experience with CI/CD tools such as Jenkins, TeamCity, GitHub Actions, or CircleCI
- Exposure to Serverless computing (AWS Lambda or Google App Engine)
- Cloud experience with AWS, GCP, or Azure
- Solid understanding of networking concepts: DNS, DHCP, SSL, subnets
- Experience with monitoring tools and alerting platforms
- Basic understanding of security principles and best practices
- Prior experience working directly with software engineering teams
Preferred Qualifications
- Bachelor’s degree in Computer Science or related discipline
- Strong communication skills (verbal & written)
- Ability to work effectively in a distributed, high-performance team
- Passion for DevOps best practices and a continuous learning mindset
- Customer-obsessed and committed to improving engineering efficiency
Why Join Us?
- Quickbase Center of Excellence: Purpose-built team delivering excellence from Bangalore
- Fast-Growing Environment: Be part of a growing company with strong career advancement
- Innovative Tech Stack: Exposure to cutting-edge tech in cloud, AI, and DevOps tooling
- Inclusive Culture: ERGs and leadership development programs to support growth
- Global Collaboration: Work closely with teams across the US, Bulgaria, and India
About Quickbase
Quickbase is a leading no-code platform that empowers organizations to create enterprise applications without writing code. Founded in 1999 and trusted by over 6,000 customers, Quickbase helps companies connect data, streamline workflows, and achieve real-time insights.
Learn more: https://www.quickbase.com
Job Description
We are seeking a seasoned DevOps Architect to join our dynamic team. The ideal candidate should possess a deep understanding of DevOps principles, system design, and architecture, with a focus on creating robust and scalable infrastructure solutions through automation. This role requires a candidate with hands-on experience in development, testing, and deployment processes. Additionally, the candidate should have a minimum of 5 years of experience in DevOps operations and should be proficient in team management, coordination, problem-solving, troubleshooting, and technical expertise.
About the company:
A rapidly growing omni-channel luxury retailer with eight stores across Mumbai, Delhi, Kolkata and a global e-commerce platform servicing 65+ countries worldwide. The 18-year-old company is an established market leader with considerable brand equity.
Location: Prabhadevi, Mumbai
Key Responsibilities:
- System Design and Architecture: Develop robust and scalable system designs that align with business requirements and industry best practices.
- Automation: Implement automation solutions to streamline processes and enhance system reliability.
- Development, Testing, and Deployment: Oversee the entire software development lifecycle, from code creation to testing and deployment.
- Coordination and Issue Resolution: Collaborate with cross-functional teams, resolve technical issues, and ensure smooth project execution.
- Troubleshooting: Apply your technical expertise to diagnose and resolve complex system issues efficiently.
- Interpersonal Skills: Communicate effectively with team members, stakeholders, and management to ensure project success.
- Ecommerce (B2C) Expertise: Bring in-depth knowledge of Ecommerce (B2C) operations to tailor DevOps solutions to our specific needs.
- Infrastructure Automation: Design and implement infrastructure automation tools and workflows to support CI/CD initiatives.
- CI/CD Pipeline Management: Build and operate complex CI/CD pipelines at scale, ensuring efficient software delivery.
- Cloud Expertise: Possess knowledge of handling GCP/AWS clouds, optimizing cloud resources, and managing cloud-based applications.
- Cybersecurity: Ensure that systems are safe and secure against cybersecurity threats, implementing best practices for data protection and compliance.
Requirements
Qualifications:
- Bachelor's degree in Computer Science or related field (Master's preferred).
- Minimum 5 years of hands-on experience in DevOps operations.
- Has worked to ensure system reliability, scale & performance in high growth environments.
- Experienced in designing and implementing scalable and robust IT solutions.
- Strong technical background and proficiency in DevOps tools and practices.
- Experience with Ecommerce (B2C) platforms is mandatory.
- Excellent team management, coordination, and interpersonal skills.
- Proficiency in troubleshooting and issue resolution.
- Familiarity with the latest open-source technologies.
- Expertise in CI/CD pipeline management.
- Knowledge of GCP/AWS cloud services.
- Understanding of cybersecurity best practices.
Benefits
- Group Mediclaim cover 2.5 L sum assured (Employee + Spouse + 2 Children) & Group Personal Accident – 5 L sum assured.
- Rewards & Recognition programmes
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for an experienced Technical Team Lead to guide a local IT Services Management Team while also acting as a software developer. In this role, you will be responsible for the application management of a B2C application to meet the agreed Service Level Agreements (SLAs) and fulfil customer expectations.
Your team will act as an on-call duty team between 6 pm and 8 am, 365 days a year. You will work together with the responsible Senior Project Manager in Germany.
We are seeking a hands-on leader who thrives in both team management and operational development. Whether your background is in DevOps, backend, or frontend, your combination of leadership and technical skills will be key to success in this position.
Responsibilities:
Problem Management & Incident Management activities: Identifying and resolving technical issues and errors that arise during application usage.
Release and Update Coordination: Planning and executing software updates, new versions, or system upgrades to keep applications up to date.
Change Management: Responsible for implementing and coordinating changes to the application, considering the impact on ongoing operations.
Requirements:
Education and Experience: A Bachelor's or Master's degree in a relevant field, with a minimum of 5 years of professional experience or equivalent work experience.
Skills & Expertise:
Proficient in ITIL service management frameworks.
Strong analytical and problem-solving abilities.
Experienced in project management methodologies (Agile, Kanban).
Leadership: Very good leadership skills with a customer-oriented, proactive, and results-driven approach.
Communication: Excellent communication, presentation, and interpersonal skills, with the ability to engage and collaborate with stakeholders.
Language: English on a C2 Level.
Skills & Requirements
- kubeAPI: high
- Kustomize: high
- Docker/containers: high
- Debug tools (OpenSSL, curl): high
- Azure DevOps: pipelines, repositories, deployments
- ArgoCD
- Certificates: certificate management / SSL, Let's Encrypt
- Linux shell
- Keycloak
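As a tiny, hedged example of the Kubernetes API debugging implied by the skills above, here is a Python sketch using the official kubernetes client to list pods that are not healthy. It assumes a local kubeconfig; inside a cluster you would load the in-cluster config instead.

```python
# Minimal sketch: list pods that are not in the Running/Succeeded phase.
# Assumes the `kubernetes` Python client is installed and a local kubeconfig exists.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod instead
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```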
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Excellent communication skills
Open to working in the EST time zone (6 pm to 3 am)
Technical Skills:
· In-depth understanding of DevSecOps processes and governance
· Understanding of various branching strategies
· Hands-on experience working with various testing and scanning tools (e.g., SonarQube, Snyk, Black Duck, etc.)
· Expertise working with one or more CI/CD platforms (e.g., Azure DevOps, GitLab, GitHub Actions, etc.)
· Expertise within one CSP and experience/working knowledge of a second CSP (Azure, AWS, GCP)
· Proficient with Terraform
· Hands-on experience working with Kubernetes
· Proficient working with Git version control
· Hands-on experience working with monitoring/observability tools (Splunk, Datadog, Dynatrace, etc.)
· Hands-on experience working with configuration management platforms (Chef, SaltStack, Ansible, etc.)
· Hands-on experience with GitOps
Job Description: DevOps Engineer
About Hyno:
Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last 2 years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solutions to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology at minimal cost. To us, any new challenge is an opportunity.
As part of Hyno's expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.
Position: DevOps Engineer
Experience: 5-7 years
Responsibilities:
- Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
- Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
- Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources.
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
- Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
- Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline (a minimal example of one such check appears after this list).
- Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
- Follow regulatory and ISO 13485 requirements.
- Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.
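As a hedged example of the routine security assessments mentioned in the list above, here is a minimal boto3 sketch that reports S3 buckets lacking a full public-access block. Treating "block all public access" as the baseline is an assumption about the security policy.

```python
# Minimal sketch: report S3 buckets without a full public-access block (boto3).
# Treating "block all public access" as the baseline is an assumption.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access only partially blocked")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise
```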
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
- Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools
- Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
- Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
- Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
- Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
- Strong communication skills and the ability to work independently as well as in a team.
Nice to Have:
- Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
- Experience with microservices architecture and serverless computing.
Soft Skills:
- Excellent written and verbal communication skills.
- Ability to manage conflict effectively.
- Ability to adapt and be productive in a dynamic environment.
- Strong communication and collaboration skills supporting multiple stakeholders and business operations.
- Self-starter, self-managed, and a team player.
Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.
Role Description:
● Own, deploy, configure, and manage infrastructure environments and/or applications in both private and public cloud through cross-technology administration (OS, databases, virtual networks), scripting, and monitoring automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or automation incidents/requests.
● Manage rollout of patches and the release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command-line tools and shells.
● Strong working knowledge of Linux/UNIX and related applications.
● Knowledge of implementing DevOps and an inclination towards automation.
● Sound knowledge of infrastructure-as-code approaches with Puppet, Chef, Ansible, or Terraform, and Helm (preference towards Terraform, Ansible, and Helm).
● Must have strong experience with technologies such as Docker, Kubernetes, OpenShift, etc.
● Working with REST/gRPC/GraphQL APIs.
● Knowledge of networking, firewalls, and network automation.
● Experience with continuous delivery pipelines: Jenkins / Jenkins X / ArgoCD / Tekton.
● Experience with Git, GitHub, and related tools.
● Experience with at least one public cloud provider.
Skills/Competencies
● Foundation: OS (Linux/Unix) and networking concepts and troubleshooting
● Automation: Bash, Python, or Golang
● CI/CD & config management: Jenkins, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infrastructure as code: Terraform
● Platform: Docker, K8s, VMs
● Databases: MySQL, PostgreSQL; datastores (MongoDB, Redis, Aerospike) good to have
● Security: vulnerability management and golden images
● Cloud: deep working knowledge of any public cloud (GCP preferred)
● Monitoring tools: Prometheus, Grafana, New Relic (a minimal exporter sketch follows this list)
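To make the monitoring line above concrete, here is a minimal, hedged Prometheus exporter sketch using prometheus_client. The metric name, the port, and the simulated measurement are illustrative only.

```python
# Minimal sketch: expose a custom metric for Prometheus to scrape.
# The metric name, port 8000, and the simulated work loop are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("worker_queue_depth", "Items waiting in the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(15)
```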
Roles and Responsibilities
- 5 - 8 years of experience in Infrastructure setup on Cloud, Build/Release Engineering, Continuous Integration and Delivery, Configuration/Change Management.
- Good experience with Linux/Unix administration and moderate to significant experience administering relational databases such as PostgreSQL, etc.
- Experience with Docker and related tools (Cassandra, Rancher, Kubernetes etc.)
- Experience working with config management tools (Ansible, Chef, Puppet, Terraform, etc.) is a plus.
- Experience with cloud technologies like Azure
- Experience with monitoring and alerting (TICK, ELK, Nagios, PagerDuty)
- Experience with distributed systems and related technologies (NSQ, RabbitMQ, SQS, etc.) is a plus
- Experience with scaling datastore technologies (PostgreSQL, Scylla, Redis) is a plus
- Experience with SSH Certificate Authorities and Identity Management (Netflix BLESS) is a plus
- Experience with multi-domain SSL certs and provisioning (Let's Encrypt) is a plus
- Experience with chaos engineering or similar methodologies is a plus

