The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated delivery pipeline
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring, and infrastructure-as-code, and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring; design, develop, and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
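The SLA and telemetry work above ultimately reduces to error-budget arithmetic. A minimal sketch follows; the 99.9% target and 30-day window are illustrative assumptions, not values from this posting:

```python
# Error-budget math behind an uptime SLA alert.
# The SLA target and window used below are illustrative assumptions.

def allowed_downtime_seconds(sla_target: float, window_seconds: int) -> float:
    """Downtime budget a service may spend in the window and still meet the SLA."""
    return window_seconds * (1.0 - sla_target)

def budget_remaining(sla_target: float, window_seconds: int, downtime_seconds: float) -> float:
    """Fraction of the error budget still unspent; negative means the SLA is breached."""
    budget = allowed_downtime_seconds(sla_target, window_seconds)
    return (budget - downtime_seconds) / budget

if __name__ == "__main__":
    month = 30 * 24 * 3600
    mins = allowed_downtime_seconds(0.999, month) / 60
    print(f"99.9% over 30 days allows about {mins:.0f} minutes of downtime")
```

Telemetry platforms typically alert when the remaining budget drops below a threshold, rather than on each individual outage.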
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science, Engineering, or a related field
5-7 years of work or research project experience, with a minimum of 3 years directly related to the job description
Prior experience working within HIPAA/HITRUST compliance frameworks is preferred
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company that develops novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses such as depression, anxiety, OCD, and schizophrenia. Our first foray will be workplace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.

Similar jobs
Job Title: Senior DevOps Engineer
Location: Remote
Experience Level: 5+ Years
Role Overview:
We are a funded AI startup seeking a Senior DevOps Engineer to design, implement, and maintain a secure, scalable, and efficient infrastructure. In this role, you will focus on automating operations, optimizing deployment processes, and enabling engineering teams to deliver high-quality products seamlessly.
Key Responsibilities:
Infrastructure Scalability & Reliability:
- Architect and manage cloud infrastructure on AWS, GCP, or Azure for high availability, reliability, and cost-efficiency.
- Implement container orchestration using Kubernetes or Docker Compose.
- Utilize Infrastructure as Code (IaC) tools like Pulumi or Terraform to manage and configure infrastructure.

Deployment Automation:
- Design and maintain CI/CD pipelines using GitHub Actions, Jenkins, or similar tools.
- Implement deployment strategies such as canary or blue-green deployments, and create rollback mechanisms to ensure seamless updates.
 
Monitoring & Observability:
- Leverage tools like OpenTelemetry, Grafana, and Datadog to monitor system health and performance.
- Establish centralized logging systems and create real-time dashboards for actionable insights.

Security & Compliance:
- Securely manage secrets using tools like HashiCorp Vault or Doppler.
- Conduct static code analysis with tools such as SonarQube or Snyk to ensure compliance with security standards.

Collaboration & Team Enablement:
- Mentor and guide team members on DevOps best practices and workflows.
- Document infrastructure setups, incident runbooks, and troubleshooting workflows to enhance team efficiency.
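The canary strategy mentioned under Deployment Automation can be sketched as a stepped traffic shift. The step weights and the hypothetical router whose canary weight we control are assumptions for illustration; rollback is simply resetting the weight to zero:

```python
# Sketch of stepped canary traffic shifting with automatic rollback.
# CANARY_STEPS and the notion of a settable router weight are illustrative
# assumptions, not part of any specific tool named in the posting.

CANARY_STEPS = [5, 25, 50, 100]  # percent of traffic sent to the new version

def next_weight(current: int, healthy: bool) -> int:
    """Advance the canary one step if health checks pass, otherwise roll back to 0."""
    if not healthy:
        return 0  # rollback: all traffic returns to the stable version
    for step in CANARY_STEPS:
        if step > current:
            return step
    return 100  # already fully shifted
```

A controller would call `next_weight` after each health-check interval until the canary reaches 100% or is rolled back.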
 
Required Skills:
- Expertise in managing cloud platforms like AWS, GCP, or Azure.
- In-depth knowledge of Kubernetes, Docker, and IaC tools like Terraform or Pulumi.
- Advanced scripting capabilities in Python or Bash.
- Proficiency in CI/CD tools such as GitHub Actions, Jenkins, or similar.
- Experience with observability tools like Grafana, OpenTelemetry, and Datadog.
- Strong troubleshooting skills for debugging production systems and optimizing performance.
 
Preferred Qualifications:
- Experience in scaling AI or ML-based applications.
- Familiarity with distributed systems and microservices architecture.
- Understanding of agile methodologies and DevSecOps practices.
- Certifications in AWS, Azure, or Kubernetes.
 
What We Offer:
- Opportunity to work in a fast-paced AI startup environment.
- Flexible remote work culture.
- Competitive salary and equity options.
- Professional growth through challenging projects and learning opportunities.
 
Job Title: Lead DevOps Engineer
Experience Required: 8+ years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
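The monitoring and alerting responsibility above usually takes the shape of a threshold-with-duration rule, as in Prometheus-style alerting where a condition must hold "for" a period before firing. A sketch, with the window size and threshold as illustrative assumptions:

```python
# Sketch of a threshold-with-duration alert rule: fire only when the
# condition holds for N consecutive samples, to avoid paging on blips.
# The metric values and window below are illustrative assumptions.

def should_fire(samples: list[float], threshold: float, for_samples: int) -> bool:
    """Fire when the last `for_samples` readings all exceed the threshold."""
    if len(samples) < for_samples:
        return False  # not enough history yet
    return all(v > threshold for v in samples[-for_samples:])
```

A single spike therefore never fires; only sustained breaches do, which keeps on-call noise down.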
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
8+ years of experience in DevOps or Site Reliability Engineering (SRE).
3+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
Type, Location
Full Time @ Anywhere in India
Desired Experience
2+ years
Job Description
What You’ll Do
● Deploy, automate and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, Digital Ocean & Google Cloud Platform.
● Take charge of DevOps activities for CI/CD with the latest tech stacks.
● Acquire industry-recognized professional cloud certifications (AWS/Google) in a developer or architect capacity
● Devise multi-region technical solutions
● Implement the DevOps philosophy and strategy across different domains in the organisation
● Build automation at various levels, including code deployment, to streamline the release process
● Own the architecture of cloud services
● Provide 24x7 monitoring of the infrastructure
● Use programming/scripting in your day-to-day work
● Have shell experience - for example PowerShell on Windows, or Bash on *nix
● Use a Version Control System, preferably git
● Hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO)
● Scalability, HA and troubleshooting of web-scale applications.
● Infrastructure-As-Code tools like Terraform, CloudFormation
● CI/CD systems such as Jenkins, CircleCI
● Container technologies such as Docker, Kubernetes, OpenShift
● Monitoring and alerting systems: e.g. NewRelic, AWS CloudWatch, Google StackDriver, Graphite, Nagios/ICINGA
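Day-to-day scripting against cloud CLIs/SDKs like those listed above almost always needs retry logic, since provider APIs throttle and transiently fail. A sketch of exponential backoff with full jitter; the base delay and cap are illustrative values, not from any SDK:

```python
# Sketch of exponential backoff with full jitter for retrying cloud API
# calls. Base delay and cap are illustrative assumptions.

import random

def backoff_delays(attempts, base=1.0, cap=30.0, seed=None):
    """Sleep durations for each retry: uniform in [0, min(cap, base * 2**n)]."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * (2 ** n))) for n in range(attempts)]
```

The jitter spreads retries from many clients apart in time, preventing the synchronized retry storms that fixed delays cause.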
What you bring to the table
● Hands on experience in Cloud compute services, Cloud Function, Networking, Load balancing, Autoscaling.
● Hands on with GCP/AWS Compute & Networking services i.e. Compute Engine, App Engine, Kubernetes Engine, Cloud Function, Networking (VPC, Firewall, Load Balancer), Cloud SQL, Datastore.
● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB or other NoSQL systems
● Configuration management tools such as Ansible/Chef/Puppet
Bonus if you have…
● Basic understanding of networking (routing, switching, DNS) and storage
● Basic understanding of protocols such as UDP/TCP
● Basic understanding of cloud computing and models like SaaS and PaaS
● Basic understanding of Git or any other source code repository
● Basic understanding of databases (SQL/NoSQL)
● Great problem-solving skills
● Good communication
● Adaptive to learning
- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
 
Requirements:
● Should have at least 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
Job description
The role requires you to design deployment pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS Cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
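The root-cause-analysis duty above often starts with log triage. A minimal Python helper of the kind this role would script - pull the first error from a log along with the lines leading up to it. The plain-text format with an "ERROR" level token is an assumption:

```python
# Sketch of a log-triage helper for root cause analysis: find the first
# ERROR line plus preceding context. The "ERROR" token convention is an
# assumption about the log format.

def first_error_with_context(lines: list[str], context: int = 2) -> list[str]:
    """Return the first line containing 'ERROR' with up to `context` preceding lines."""
    for i, line in enumerate(lines):
        if "ERROR" in line:
            return lines[max(0, i - context): i + 1]
    return []  # no error found
```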
Requirement
- 2 years of experience as a team lead.
- Good command of Kubernetes.
- Proficient with the Linux command line and troubleshooting.
- Proficient in AWS services: deployment, monitoring, and troubleshooting of applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure-as-code tools such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
Must Have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/shell, Python, Go), Nginx, Docker.
Good to have
Configuration management (Ansible or a similar tool), logging (ELK or similar), monitoring (Nagios or similar), IaC (Terraform, CloudFormation).

Projects you'll be working on:
- We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
- Writing scripts for procurement, configuration and deployment of instances (infrastructure automation) on GCP
- Managing Kubernetes clusters
- Managing products and services like VPC, Elasticsearch, cloud functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc.
- Supporting developers in setting up infrastructure for services
- Managing and improving microservices infrastructure
- Managing high-availability, low-latency applications
- Focusing on security best practices and assisting in security and compliance activities
 
Requirements
- Minimum 3 years of experience in DevOps
- Minimum 1 year of experience with Kubernetes clusters (infrastructure as code, maintenance, and scalability)
- Bash expertise; professional programming experience in Node or Python
- Experience setting up, configuring, and using Jenkins or other CI tools, and building CI/CD pipelines
- Experience setting up microservices architectures
- Experience with package management and deployments
- Thorough understanding of networking
- Understanding of all common services and protocols
- Experience in web server configuration, monitoring, network design and high availability
- Thorough understanding of DNS, VPN, SSL
 
Technologies you'll work with:
- GKE, Prometheus, Grafana, Stackdriver
- ArgoCD and GitHub Actions
- NodeJS backend
- Postgres, Elasticsearch, Redis, RabbitMQ
- Whatever else you decide - we're constantly re-evaluating our stack and tools
- Prior experience with these technologies is a plus, but not mandatory for skilled candidates.
 
Benefits
- Remote option - you can work from the location of your choice :)
- Reimbursement of home office setup
- Competitive salary
- Friendly atmosphere
- Flexible paid vacation policy
 
The brand is associated with major icons across categories and has tie-ups spanning fashion, sports, and music. The founders are marketing graduates with vast experience in consumer lifestyle products and other major brands. Through their vigorous efforts in quality and marketing, they have struck a chord with major e-commerce brands and consumers alike.
What you will do:
- Defining and documenting best practices and strategies regarding application deployment and infrastructure maintenance
- Providing guidance, thought leadership and mentorship to development teams to build cloud competencies
- Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design
- Managing cloud environments in accordance with company security guidelines
- Developing and implementing technical efforts to design, build and deploy AWS applications at the direction of lead architects, including large-scale data processing, computationally intensive statistical modeling and advanced analytics
- Participating in all aspects of the software development life cycle for AWS solutions, including planning, requirements, development, testing, and quality assurance
- Troubleshooting incidents, identifying root cause, fixing and documenting problems and implementing preventive measures
- Educating teams on the implementation of new cloud-based initiatives, providing associated training as required
 
Desired Candidate Profile
What you need to have:
- Bachelor's degree in computer science or information technology
- 2+ years of experience as an architect designing, developing, and implementing cloud solutions on AWS platforms
- Experience in several of the following areas: database architecture, ETL, business intelligence, big data, machine learning, advanced analytics
- Proven ability to collaborate with multi-disciplinary teams of business analysts, developers, data scientists and subject matter experts
- Self-motivation with the ability to drive features to delivery
- Strong analytical and problem-solving skills
- Excellent oral and written communication skills
- Good logical sense, strong technical skills and the ability to learn new technologies quickly
- AWS certifications are a plus
- Knowledge of web services, APIs, REST, and RPC
 
Roles & Responsibilities:
- Design, implement and maintain all AWS infrastructure and services within a managed service environment
- Should be able to work 24x7 shifts to support the infrastructure
- Design, deploy and maintain enterprise-class security, network and systems management applications within an AWS environment
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continually re-evaluate the existing stack and infrastructure to maintain optimal performance, availability and security
- Manage production deployment and deployment automation
- Implement process and quality improvements through task automation
- Institute infrastructure as code, security automation and automation of routine maintenance tasks
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Build, deploy and manage Kubernetes clusters through automation
- Create and deliver knowledge-sharing presentations and documentation for support teams
- Learn on the job and explore new technologies with little supervision
- Work effectively with onsite/offshore teams
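Automating routine maintenance in a managed AWS environment often starts with tag-policy compliance checks. A sketch of the idea, where the required tag keys are illustrative assumptions rather than any real policy:

```python
# Sketch of a tag-policy compliance check for cloud resources.
# REQUIRED_TAGS is an illustrative example policy, not a real standard.

REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def missing_tags(resource_tags: dict[str, str]) -> set[str]:
    """Tag keys the policy requires but the resource lacks (empty set = compliant)."""
    return REQUIRED_TAGS - set(resource_tags)
```

In practice a script would fetch resource tags via the provider SDK, call a check like this for each resource, and report or auto-remediate the non-compliant ones.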
 
Qualifications:
- Must have a Bachelor's degree in Computer Science or a related field and 4+ years of experience in IT
- Experience in designing, implementing, and maintaining all AWS infrastructure and services
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of the existing stack and infrastructure to maintain optimal performance, availability, and security
- Hands-on technical expertise in security architecture, automation, integration, and deployment
- Familiarity with compliance & security standards across the enterprise IT landscape
- Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
- Solid understanding of AWS IAM roles and policies
- Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
- Experience with automation/configuration management using Terraform/Chef/Ansible or similar
- Understanding of protocols/technologies like microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
- Experience in managing and working with offshore teams
- Familiarity with CI/CD systems such as Jenkins, GitLab CI
- Scripting experience (Python, Bash, etc.)
- AWS and Kubernetes certifications are preferred
- Ability to work with and influence engineering teams
 
● Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms
● Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
● Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platform and tools
● Automate the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
● Build container hosting-platform using Kubernetes
● Introduce new cloud technologies, tools & processes to keep innovating in the commerce area and drive greater business value.
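A self-healing, synthetic-monitoring platform like the one described above has to separate real failures from flapping. One common sketch is to mark a service unhealthy only after k consecutive probe failures; k = 3 here is an illustrative choice, not a prescribed value:

```python
# Sketch of flap suppression for synthetic monitoring: a service is
# "unhealthy" only after k consecutive probe failures at the tail of the
# probe history. k = 3 is an illustrative assumption.

def classify(results: list[bool], k: int = 3) -> str:
    """Classify health from probe results (True = probe succeeded)."""
    tail = results[-k:]
    if len(tail) == k and not any(tail):
        return "unhealthy"  # k consecutive failures: trigger healing/alerting
    return "healthy"
```

The same idea appears as `failureThreshold` on Kubernetes probes: a single failed check never triggers a restart or an alert on its own.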
Skills Required:
● Excellent written and verbal communication skills; a good listener.
● Proficiency in deploying and maintaining cloud-based infrastructure services (AWS, GCP, Azure - good hands-on experience in at least one of them)
● Well versed with service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
● Good knowledge of cloud services like compute, storage, network, messaging (e.g. SNS, SQS) and automation (e.g. CloudFormation/Terraform).
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience with systems management/automation tools (Puppet/Chef/Ansible, Terraform)
● Strong Linux system administration experience with excellent troubleshooting and problem-solving skills
● Hands-on experience with languages (Bash/Python/Core Java/Scala)
● Experience with CI/CD pipelines (Jenkins, Git, Maven, etc.)
● Experience integrating solutions in a multi-region environment
● Self-motivated; learns quickly and delivers results with minimal supervision
● Experience with Agile/Scrum/DevOps software development methodologies.
Nice to Have:
● Experience setting up the Elasticsearch, Logstash, Kibana (ELK) stack.
● Experience working with large-scale data.
● Experience with monitoring tools such as Splunk, Nagios, Grafana, Datadog, etc.
● Prior experience working with distributed architectures like Hadoop and MapReduce.
