Development-Security-Operations Technical Specialist
Posted by anand kumar
4 - 8 yrs
₹5L - ₹15L / yr
Chennai
Skills
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
HIPAA
Cloud Computing
AWS Elastic Beanstalk
AWS Lambda
AWS Simple Notification Service (SNS)
Witmer Health Technologies is looking for a Development-Security-Operations Technical Specialist who will join the team to design, develop, deploy, monitor, and safeguard reliable, high-trust digital platforms for mental health that are HIPAA / HITRUST compliant.

The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
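One responsibility above is establishing SLAs for service uptime and building the telemetry to enforce them. As a purely illustrative sketch (not part of the role description), the arithmetic behind an availability SLA fits in a few lines:

```python
def downtime_budget_seconds(slo_percent: float, window_days: int = 30) -> float:
    """Seconds of downtime an availability SLO permits over the window.

    E.g. a 99.9% SLO over 30 days leaves roughly 43 minutes of allowed downtime.
    """
    window_seconds = window_days * 24 * 60 * 60
    return window_seconds * (1 - slo_percent / 100)
```

Alerting on the rate at which this budget is consumed (rather than on raw uptime) is a common way to make SLO enforcement actionable.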
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working with HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.

About witmer health

Founded :
2020
Type :
Products & Services
Size :
0-20
Stage :
Bootstrapped

About

At Witmer, we will apply the power of artificial intelligence, machine learning and data analytics to develop solutions for mental healthcare.

Similar jobs

Designing a generic ML platform as a product.
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹50L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Requirements

  • 3+ years of work experience writing clean production code
  • Well-versed in maintaining infrastructure as code (Terraform, CloudFormation, etc.); high proficiency with Terraform / Terragrunt is absolutely critical
  • Experience setting up CI/CD pipelines from scratch
  • Experience with AWS (EC2, ECS, RDS, ElastiCache, etc.), AWS Lambda, Kubernetes, Docker, service mesh
  • Experience with ETL pipelines and big-data infrastructure
  • Understanding of common security issues

Roles / Responsibilities:

  • Write Terraform modules for deploying different components of infrastructure in AWS, such as Kubernetes, RDS, Prometheus, Grafana, and static websites
  • Configure networking, autoscaling, continuous deployment, security, and multiple environments
  • Make sure the infrastructure is SOC 2, ISO 27001, and HIPAA compliant
  • Automate all the steps to provide a seamless experience to developers.
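As a small illustration of the compliance automation the responsibilities above call for, here is a minimal, hypothetical tag-policy check. The required tag names are invented for the example; real SOC 2 / ISO 27001 / HIPAA controls are far broader than tagging.

```python
# Hypothetical tag policy for illustration only.
REQUIRED_TAGS = {"owner", "environment", "data-classification"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tags absent from a resource's tag map."""
    return REQUIRED_TAGS - set(resource_tags)
```

A check like this would typically run in CI against Terraform plan output, failing the pipeline when a resource is missing mandatory tags.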
Information Technology Services
Agency job
via Jobdost by Sathish Kumar
Pune
5 - 9 yrs
₹10L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+8 more
Preferred Education & Experience:
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior-Driven Development, Test-Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms such as Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure, or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance, or equivalent demonstrable cloud platform experience.
• Demonstrable working experience with API management, API gateway, service mesh, identity & access management, and data protection & encryption tools and platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, networks, and storage-networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Flytbase
Posted by Alice Philip
Pune
2 - 4 yrs
₹8L - ₹15L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Artificial Intelligence (AI)

Lead DevSecOps Engineer


Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time


Apply here → https://lnk.ink/CLqe2


About FlytBase:

FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.

We're building intelligent autonomy — not just automation — and security is core to that vision.


What You’ll Own

You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.

Expect to:

  • Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
  • Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
  • Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
  • Build for Zero Trust security — logs, secrets, audits, access policies
  • Lead incident response, postmortems, and playbooks to reduce MTTR
  • Automate and secure CI/CD pipelines with SAST, DAST, image hardening
  • Script your way out of toil using Python, Bash, or LLM-based agents
  • Work alongside dev, platform, and product teams to ship secure, scalable systems
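One of the bullets above is leading incident response and reducing MTTR. As a rough sketch (not FlytBase's actual tooling), mean time to recovery is simply the average of incident durations:

```python
from datetime import datetime, timedelta

def mttr(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean time to recovery: average of (resolved - opened) across incidents."""
    if not incidents:
        raise ValueError("no incidents to average")
    total = sum((resolved - opened for opened, resolved in incidents), timedelta())
    return total / len(incidents)
```

In practice the opened/resolved timestamps would come from a paging tool's API, and the metric would be tracked per service and per severity.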


What We’re Looking For:

You’ve probably done a lot of this already:

  • 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
  • Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
  • Strong in Linux internals, OS hardening, and network security
  • Built and owned CI/CD pipelines, IaC, and automated releases
  • Written scripts (Python/Bash) that saved your team hours
  • Familiar with SOC 2, ISO 27001, threat detection, and compliance work

Bonus if you’ve:

  • Played with LLMs or AI agents to streamline ops, and built bots that monitor, patch, or auto-deploy.


What It Means to Be a Flyter

  • AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
  • Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
  • Joy in complexity: Security + infra + scale = your happy place.
  • Radical candor: You give and receive sharp feedback early — and grow faster because of it.
  • Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
  • H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
  • Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.


Perks:

▪ Unlimited leave & flexible hours

▪ Top-tier health coverage

▪ Budget for AI tools, courses

▪ International deployments

▪ ESOPs and high-agency team culture


Apply Here- https://lnk.ink/CLqe2

Infra360 Solutions Pvt Ltd
Posted by HR Infra360
Gurugram
3 - 8 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more

Please Apply - https://zrec.in/7EYKe?source=CareerSite


About Us

Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in the journey of successfully implementing DevOps practices in an organization, and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them appreciate the importance of DevOps.

Our services include DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a technology company's architecture, security, governance, compliance, and DevOps maturity model, help optimize cloud cost, streamline technology architecture, and set up processes to improve the availability and reliability of websites and applications. We set up tools for monitoring, logging, and observability, and focus on bringing the DevOps culture into the organization to improve its efficiency and delivery.


Job Description

Job Title:             Senior DevOps Engineer / SRE

Department:       Technology

Location:             Gurgaon

Work Mode:         On-site

Working Hours:   10 AM - 7 PM 

Terms:                 Permanent

Experience:      4-6 years

Education:           B.Tech/MCA

Notice Period:     Immediately

About Us

At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.

Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.

We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.


Role Summary

We are seeking a Senior DevOps Engineer (SRE) to manage and optimize large-scale, mission-critical production systems. The ideal candidate will have a strong problem-solving mindset, extensive experience in troubleshooting, and expertise in scaling, automating, and enhancing system reliability. This role requires hands-on proficiency in tools like Kubernetes, Terraform, CI/CD, and cloud platforms (AWS, GCP, Azure), along with scripting skills in Python or Go. The candidate will drive observability and monitoring initiatives using tools like Prometheus, Grafana, and APM solutions (Datadog, New Relic, OpenTelemetry).

Strong communication, incident management skills, and a collaborative approach are essential. Experience in team leadership and multi-client engagement is a plus.


Ideal Candidate Profile


  • Solid 4-6 years of experience as an SRE/DevOps engineer, with a proven track record of handling large-scale production environments
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • Strong hands-on experience managing large-scale production systems
  • Strong production troubleshooting skills and the ability to handle high-pressure situations
  • Strong experience with databases (PostgreSQL, MongoDB, Elasticsearch, Kafka)
  • Experience making production systems more scalable, highly available, and fault-tolerant
  • Hands-on experience with ELK or other logging and observability tools
  • Hands-on experience with Prometheus, Grafana & Alertmanager, and on-call tools like PagerDuty
  • A problem-solving mindset
  • Strong skills in Kubernetes, Terraform, Helm, ArgoCD, AWS/GCP/Azure, etc.
  • Good Python/Go scripting and automation skills
  • Strong fundamentals: DNS, networking, Linux
  • Experience with APM tools such as New Relic, Datadog, OpenTelemetry
  • Good experience with incident response, incident management, and writing detailed RCAs
  • Experience with application best practices for making apps more reliable and fault-tolerant
  • Strong leadership skills and the ability to mentor team members and provide guidance on best practices
  • Able to manage multiple clients and take ownership of client issues
  • Experience with Git and coding best practices


Good to have

  • Team-leading Experience
  • Multiple Client Handling
  • Requirements gathering from clients
  • Good Communication


Key Responsibilities


  1. Design and Development:
  • Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
  • Collaborate with product and engineering teams to translate business requirements into technical specifications.
  • Write clean, maintainable, and efficient code, following best practices and coding standards.
  2. Cloud Infrastructure:
  • Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
  • Implement and manage CI/CD pipelines for automated deployment and testing.
  • Ensure the security, reliability, and performance of cloud infrastructure.
  3. Technical Leadership:
  • Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
  • Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
  • Lead technical discussions and contribute to architectural decisions.
  4. Problem Solving and Troubleshooting:
  • Identify, diagnose, and resolve complex software and infrastructure issues.
  • Perform root cause analysis for production incidents and implement preventative measures.
  5. Continuous Improvement:
  • Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
  • Contribute to the continuous improvement of development processes, tools, and methodologies.
  • Drive innovation by experimenting with new technologies and solutions to enhance the platform.
  6. Collaboration:
  • Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
  • Communicate effectively with stakeholders, including technical and non-technical team members.
  7. Client Interaction & Management:
  • Serve as a direct point of contact for multiple clients.
  • Handle the unique technical needs and challenges of two or more clients concurrently.
  • Involves both direct interaction with clients and internal team coordination.
  8. Production Systems Management:
  • Extensive experience in managing, monitoring, and debugging production environments.
  • Troubleshoot complex issues and ensure that production systems run smoothly with minimal downtime.
ZeMoSo Technologies
Posted by HR Team
Remote only
4 - 8 yrs
₹10L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Looking for a GCP DevOps Engineer who can join immediately or within 15 days

 

Job Summary & Responsibilities:

 

Job Overview:

 

You will work with engineering and development teams to integrate and develop cloud solutions and virtualized deployments of a software-as-a-service product. This requires understanding the software system architecture as well as its performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. The position requires the deployment and management of multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.

 

Experience:

 

Experience working on billing and budgets for a GCP project - MUST

 

Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE

 

Experience in implementing the recommendations on GCP

 

Architect Certifications on GCP - MUST

 

Excellent communication skills (both verbal & written) - MUST

 

Excellent documentation skills on processes and steps and instructions- MUST

 

At least 2 years of experience on GCP.

 

 

Basic Qualifications:

● Bachelor’s/Master’s Degree in Engineering OR Equivalent.

 

● Extensive scripting or programming experience (Shell Script, Python).

 

● Extensive experience working with CI/CD (e.g. Jenkins).

 

● Extensive experience working with GCP, Azure, or Cloud Foundry.

 

● Experience working with databases (PostgreSQL, Elasticsearch).

 

● Must have 2 years of minimum experience with GCP certification.

 

 

Benefits :

● Competitive salary.

 

● Work from anywhere.

 

● Learning and gaining experience rapidly.

 

● Reimbursement for basic working set up at home.

 

● Insurance (including top-up insurance for COVID).

 

Location :

Remote - work from anywhere.

AJACKUS
Posted by Kaushik Vedpathak
Remote only
2 - 7 yrs
₹4L - ₹18L / yr
DevOps
MySQL
Kubernetes
Cloud Computing
Google Cloud Platform (GCP)
+2 more

Type, Location

Full Time @ Anywhere in India

 

Desired Experience

2+ years

 

Job Description

What You’ll Do

● Deploy, automate and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, Digital Ocean & Google Cloud Platform.

● Take charge of DevOps activities for CI/CD with the latest tech stacks.

● Acquire industry-recognized, professional cloud certifications (AWS/Google) in the capacity of developer or architect. Devise multi-region technical solutions.

● Implement the DevOps philosophy and strategy across different domains in the organisation.

● Build automation at various levels, including code deployment to streamline release process

● Will be responsible for architecture of cloud services

● 24*7 monitoring of the infrastructure

● Use programming/scripting in your day-to-day work

● Have shell experience - for example Powershell on Windows, or BASH on *nix

● Use a Version Control System, preferably git

● Hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO)

● Scalability, HA and troubleshooting of web-scale applications.

● Infrastructure-As-Code tools like Terraform, CloudFormation

● CI/CD systems such as Jenkins, CircleCI

● Container technologies such as Docker, Kubernetes, OpenShift

● Monitoring and alerting systems: e.g. NewRelic, AWS CloudWatch, Google StackDriver, Graphite, Nagios/ICINGA
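The monitoring and alerting systems listed above share a core idea: fire an alert only when a metric stays beyond a threshold for a while, not on a single spike. A minimal, hypothetical sketch of that logic:

```python
def should_alert(samples: list[float], threshold: float, consecutive: int = 3) -> bool:
    """Fire only after `consecutive` samples in a row exceed the threshold,
    similar in spirit to a Prometheus-style 'for' clause, to avoid flapping alerts."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False
```

Real systems express this declaratively (e.g. a `for:` duration in a Prometheus alerting rule) rather than in application code, but the evaluation semantics are the same.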

 

What you bring to the table

● Hands-on experience with cloud compute services, Cloud Functions, networking, load balancing, and autoscaling.

● Hands-on with GCP/AWS compute & networking services such as Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, networking (VPC, firewall, load balancer), Cloud SQL, and Datastore.

● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems

● Configuration management tools such as Ansible/Chef/Puppet

 

 

Bonus if you have…

● Basic understanding of Networking (routing, switching, DNS) and Storage

● Basic understanding of protocols such as UDP/TCP

● Basic understanding of Cloud computing

● Basic understanding of Cloud computing models like SaaS and PaaS

● Basic understanding of Git or any other source-code repository

● Basic understanding of Databases (SQL/NoSQL)

● Great problem-solving skills

● Good communication skills

● Adaptive to learning

Basik Marketing PVT LTD
Posted by Naveen G
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹22L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Automation
+4 more

As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable. 

Specific responsibilities will include: 

  • Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible. 
  • Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications. 
  • Ensuring that the organization's systems are secure and compliant with industry standards. 
  • Collaborating with software developers to design and implement infrastructure as code. 
  • Providing mentorship and technical guidance to team members. 
  • Troubleshooting and resolving technical issues in collaboration with other IT professionals. 
  • Participating in the development and maintenance of the organization's disaster recovery and incident response plans. 
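To make "deployment, scaling, and monitoring of applications" on Kubernetes slightly more concrete, here is a small sketch of the rolling-update arithmetic. It mirrors the Kubernetes convention that maxSurge rounds up and maxUnavailable rounds down, but is illustrative only, not the actual API:

```python
import math

def rollout_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int) -> tuple[int, int]:
    """Absolute (surge, unavailable) pod counts from rolling-update percentages.

    Follows the Kubernetes convention: maxSurge rounds up, maxUnavailable rounds down,
    so a rollout never stalls at zero capacity headroom.
    """
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable
```

For a 10-replica Deployment with 25% surge and 25% unavailable, this yields up to 3 extra pods during the rollout while allowing at most 2 to be down at once.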

To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment. 

CloudAngle
Posted by Bogam Surender
Hyderabad
7 - 15 yrs
₹10L - ₹15L / yr
DevOps
Kubernetes
Docker
Java
NodeJS (Node.js)
+2 more
  1. Bachelor’s and/or master’s degree in Computer Science, Computer Engineering or related technical discipline
  2. About 5 years of professional experience supporting AWS cloud environments
  3. Certified Amazon Architect Associate or Architect
  4. Experience serving as lead (shift management, reporting) will be a plus
  5. AWS Architect Certified Solution Architect Professional (Must have)
  6. Minimum 4 years' experience, maximum 8 years' experience.
  1. 100% work from office in Hyderabad
  2. Very fluent in English
Olacabs.com
Posted by Roshni Pillai
Bengaluru (Bangalore)
5 - 9 yrs
₹8L - ₹21L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
Linux/Unix
We are looking for a Site Reliability Engineer/Sr. Site Reliability Engineer to help us build and enhance platforms to achieve availability, scalability and operational effectiveness. The right individual will embrace the opportunity to tackle challenging problems and use their influence to drive continual improvement. You will also work on the cutting edge of technology, leveraging Kong, Repose, Docker, Mesos/Kubernetes, Jenkins, Chef, HaProxy, Nginx, GitLab, MySQL, Scylla, Aerospike, Service Mesh ( Istio/Linkerd), Prometheus etc.

Roles and Responsibilities
● Managing Availability, Performance, Capacity of infrastructure and applications.
● Building and implementing observability for applications health/performance/capacity.
● Optimizing On-call rotations and processes.
● Documenting “tribal” knowledge.
● Managing Infra-platforms like
- Mesos/Kubernetes
- CICD
- Observability(Prometheus/New Relic/ELK)
- Cloud Platforms ( AWS/ Azure )
- Databases
- Data Platforms Infrastructure
● Providing help in onboarding new services with the production readiness review process.
● Providing reports on services SLO/Error Budgets/Alerts and Operational Overhead.
● Working with Dev and Product teams to define SLO/Error Budgets/Alerts.
● Working with the Dev team to have an in-depth understanding of the application architecture and its bottlenecks.
● Identifying observability gaps in product services, infrastructure and working with stake owners to fix it.
● Managing Outages and doing detailed RCA with developers and identifying ways to avoid that situation.
● Managing/Automating upgrades of the infrastructure services.
● Automate toil work.

Experience & Skills
● 3+ Years of experience as an SRE/DevOps/Infrastructure Engineer on large scale microservices and infrastructure.
● A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
● A deep understanding of computer science, software development, and networking principles.
● Demonstrated experience with languages, such as Python, Java, Golang etc.
● Extensive experience with Linux administration and good understanding of the various linux kernel subsystems (memory, storage, network etc).
● Extensive experience in DNS, TCP/IP, UDP, GRPC, Routing and Load Balancing.
● Expertise in GitOps, Infrastructure as a Code tools such as Terraform etc.. and Configuration Management Tools such as Chef, Puppet, Saltstack, Ansible.
● Expertise of Amazon Web Services (AWS) and/or other relevant Cloud Infrastructure solutions like Microsoft Azure or Google Cloud.
● Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo etc.
● Experience in managing and deploying containerized environments using Docker, Mesos/Kubernetes is a plus.
● Experience with multiple datastores is a plus (MySQL, PostgreSQL, Aerospike, Couchbase, Scylla, Cassandra, Elasticsearch).
● Experience with data platforms tech stacks like Hadoop, Hive, Presto etc is a plus
Radical HealthTech
Posted by Shibjash Dutt
NCR (Delhi | Gurgaon | Noida)
2 - 7 yrs
₹5L - ₹15L / yr
Python
Terraform
Amazon Web Services (AWS)
Linux/Unix
Docker
DevOps Engineer


Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.


As a DevOps Engineer at Radical, you will:

Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail


We’re looking for someone who has:

Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, Puppet/Ansible/Chef/Terraform scripts.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside Software Developers, and monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (for example, containerisation or Elastic Beanstalk rather than hosting an application directly on EC2)


It’s not essential, but great if you have:

An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS


The position is open now, and we are onboarding immediately.


Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.


Radical is based out of Delhi NCR, India, and we look forward to working with you!


We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast, think rigorously and for people who aren’t afraid to challenge assumptions, and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.
