CloudOps Lead
at US Healthcare IT Product company

Agency job
8 - 10 yrs
₹20L - ₹30L / yr
Pune
Skills
DevOps
Docker
Kubernetes
Jenkins
Amazon Web Services (AWS)

Responsibilities

  • Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
  • Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
  • Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
  • Participating in on-call escalation to troubleshoot customer-facing issues
  • Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
  • Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
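The monitoring responsibility above (system health indicators, trend identification, anomaly detection) can be illustrated with a minimal sketch. This is not part of the role description: the rolling z-score approach, the window size, and the latency figures below are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical latency samples (ms): steady around 100, one spike at index 7
latencies = [101, 99, 100, 102, 98, 100, 101, 450, 100, 99]
print(detect_anomalies(latencies))  # → [7]
```

In practice the same idea sits behind alert rules in tools like CloudWatch or Prometheus; the sketch only shows the shape of the calculation.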

Skills

  • At least a couple of years' experience leading a DevOps team, including planning and defining the DevOps roadmap and executing it along with the team
  • Familiarity with the AWS cloud, JSON templates, Python, and AWS CloudFormation templates
  • Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, and REST APIs
  • Design and implement system architecture on the AWS cloud; develop automation scripts using ARM templates, Ansible, Chef, Python, and PowerShell; knowledge of AWS services, cloud design patterns, and cloud fundamentals such as autoscaling and serverless
  • Experience with DevOps and Infrastructure as Code: AWS environment and application automation using CloudFormation and third-party tools, including CI/CD pipeline setup
  • CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
  • Expert knowledge of Windows, Linux, and macOS with at least 5-6 years of system administration experience
  • Strong skills in using the JIRA tracking tool
  • Should have knowledge in managing the CI/CD pipeline on public cloud deployments using AWS
  • Strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation
  • Experience in monitoring tools like Pingdom, Nagios, etc.
  • Experience in reverse proxy services like Nginx and Apache
  • Desirable: experience with Bitbucket and version control tools like Git/SVN
  • Experience with manual/automated testing of application deployments (desired)
  • Experience in database technologies such as PostgreSQL, MySQL
  • Knowledge of Helm and Terraform
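As a rough illustration of the JSON/CloudFormation template skills listed above, here is a minimal sketch that builds a CloudFormation template as a Python dict and serializes it to JSON. The resource name, AMI ID, and instance type are placeholders, not values from any real stack.

```python
import json

# Hypothetical minimal CloudFormation template: a single EC2 instance
# with a parameterized instance type. All identifiers are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {"Type": "String", "Default": "t3.micro"},
    },
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Ref resolves the parameter at deploy time
                "InstanceType": {"Ref": "InstanceType"},
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

A template like this would typically be written to a file and deployed with `aws cloudformation deploy --template-file template.json --stack-name <name>`.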
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

Similar jobs

Tracxn
Posted by Tracxn Technologies
Bengaluru (Bangalore)
1 - 8 yrs
₹3L - ₹25L / yr
Python
Shell Scripting
Ansible
Linux/Unix
Amazon Web Services (AWS)
+1 more

Mode of Hire: Permanent

Required Skills Set (Mandatory): Linux, Shell Scripting, Python, AWS, Security best practices, Git

Desired Skills (Good if you have): Ansible, Terraform


Job Responsibilities

  • Design, develop, and maintain deployment pipelines and automation tooling to improve platform efficiency, scalability, and reliability.
  • Manage infrastructure and services in production AWS environments.
  • Drive platform improvements with a focus on security, scalability, and operational excellence.
  • Collaborate with engineering teams to enhance development tooling, streamline access workflows, and improve platform usability through feedback.
  • Mentor junior engineers and help foster a culture of high-quality engineering and knowledge sharing.


Job Requirements

  • Strong foundational understanding of Linux systems.
  • Cloud experience (e.g., AWS) with strong problem-solving in cloud-native environments.
  • Proven track record of delivering robust, well-documented, and secure automation solutions.
  • Comfortable owning end-to-end delivery of infrastructure components and tooling.


Preferred Qualifications

  • Advanced system and cloud optimization skills.
  • Prior experience in platform teams or DevOps roles at product-focused startups.
  • Demonstrated contributions to internal tooling, open-source, or automation projects.


LogiNext
Posted by Rakhi Daga
Mumbai
11 - 15 yrs
₹1L - ₹15L / yr
Microservices
Linux/Unix
Python
Shell Scripting
Amazon Web Services (AWS)
+22 more

Only apply on this link: https://loginext.hire.trakstar.com/jobs/fk025uh?source=

LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to lead the development and operations efforts for its product. You will choose and deploy tools and technologies to build and support a robust infrastructure.

You have hands-on experience building secure, high-performing, and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging, and production environments.

 

Responsibilities:

  • Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
  • Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure 
  • Explore new tools to improve development operations and automate daily tasks
  • Ensure High Availability and auto-failover with minimal or no manual intervention
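The auto-failover responsibility above can be sketched as a toy promotion loop: probe the active node, and after a bounded retry budget promote the next standby. The node names, the health map standing in for real health-check endpoints, and the retry budget are all invented for illustration.

```python
def failover(nodes, healthy, max_failures=3):
    """Return the first node whose health probe passes, trying nodes
    in priority order. `healthy` maps node -> bool, simulating a
    health-check endpoint; real probes would be HTTP/TCP checks."""
    for node in nodes:
        failures = 0
        while failures < max_failures:
            if healthy[node]:
                return node
            failures += 1  # retry budget before failing over
        # node exhausted its retries: fall through to the next standby
    raise RuntimeError("no healthy node available")

nodes = ["db-primary", "db-replica-1", "db-replica-2"]
status = {"db-primary": False, "db-replica-1": True, "db-replica-2": True}
print(failover(nodes, status))  # → db-replica-1
```

Real HA tooling (e.g., a load balancer or an orchestrator) adds quorum and fencing on top of this basic probe-and-promote shape.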


Requirements:

  • Bachelor’s degree in Computer Science, Information Technology or a related field
  • 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
  • Strong background in Linux/Unix Administration and Python/Shell Scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
  • Experience in query analysis, performance tuning, and database redesign
  • Experience in enterprise application development, maintenance and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment, and decision-making skills
  • Excellent leadership skills
Bito Inc
Posted by Amrit Dash
Remote only
5 - 8 yrs
Best in industry
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Ansible
Chef
+7 more

Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.

 

Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and is incredibly difficult!

 

We are building this company with a fully remote approach, with our main teams in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.

 

We are hiring a DevOps Engineer to join our team.

 

Responsibilities:

  • Collaborate with the development team to design, develop, and implement Java-based applications
  • Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
  • Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
  • Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
  • Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
  • Evaluate and define/modify configuration management strategies and processes using Ansible
  • Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
  • Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
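The CI/CD responsibility above boils down to an ordered, fail-fast stage graph, as in a Jenkins or GitLab pipeline. A minimal sketch; stage names and outcomes are illustrative, not a real pipeline definition:

```python
def run_pipeline(stages):
    """stages: list of (name, callable returning bool). Runs stages
    in order, stopping after the first failure (fail fast). Returns
    the names of the stages that actually ran."""
    executed = []
    for name, step in stages:
        executed.append(name)
        if not step():
            break  # later stages never run once one fails
    return executed

stages = [
    ("build", lambda: True),
    ("test", lambda: False),   # simulated test failure
    ("deploy", lambda: True),  # skipped because tests failed
]
print(run_pipeline(stages))  # → ['build', 'test']
```

Real CI systems add artifact passing, parallel fan-out, and retries on top of this ordering-and-gating core.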

Requirements:

  • 4+ years of relevant work experience in a DevOps role
  • 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
  • Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
  • Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
  • Mastery in configuration automation tool sets such as Ansible, Chef, etc
  • Proficiency with Jira, Confluence, and Git toolset
  • Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
  • Proven ability to manage and prioritize multiple diverse projects simultaneously

What do we offer: 

At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology. 

  • Work from anywhere
  • Flexible work timings
  • Competitive compensation, including stock options
  • A chance to work in the exciting generative AI space
  • Quarterly team offsite events

Coredge
Posted by Sajal Saxena
Bengaluru (Bangalore), Noida, Pune
5 - 10 yrs
₹20L - ₹35L / yr
OpenStack
Ansible
Ceph
Docker
Kubernetes
+1 more

You need to drive automation for implementing scalable and robust applications. You will bring your dedication and passion to server-side optimization, ensuring low latency and high-end performance for the cloud deployed within the datacentre. You should have sound knowledge of the OpenStack and Kubernetes domains.

YOUR ‘OKR’ SUMMARY

OKR stands for Objectives and Key Results.

As a DevOps Engineer, you will understand the overall movement of data in the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own deployment of those. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests for those, and reviewing the work and the test results.

What you will do

  • As a DevOps Engineer, be responsible for systems used by customers across the globe.
  • Set the goals for the overall system and divide them into goals for the sub-systems.
  • Guide, motivate, convince, and mentor the architects on sub-systems and help them achieve improvements with agility and speed.
  • Identify performance bottlenecks and come up with solutions to optimize the time and cost taken by the build/test system.
  • Be a thought leader contributing to capacity planning for software/hardware, spanning internal and public cloud, solving the trade-off between turnaround time and utilization.
  • Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.

What you will need

A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.

  • BS or BE/B.Tech or equivalent experience in EE/CS with 10+ years of experience.
  • Strong background in architecting and shipping distributed, scalable software products with a good understanding of systems programming.
  • An excellent background in cloud technologies such as OpenStack, Docker, Kubernetes, Ansible, and Ceph is a must.
  • Excellent understanding of hybrid and multi-cloud architecture and edge computing concepts.
  • Ability to identify bottlenecks and come up with solutions to optimize them.
  • Programming and software development skills in Python and shell scripting, along with a good understanding of distributed systems and REST APIs.
  • Experience working with SQL/NoSQL database systems such as MySQL, MongoDB, or Elasticsearch.
  • Excellent knowledge of and working experience with Docker containers and virtual machines.
  • Ability to work effectively across organizational boundaries to maximize alignment and productivity between teams.
  • Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporation.

Additional Advantage:
  • Deep understanding of technology and passion for what you do.
  • Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
  • Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
  • Strong commitment to getting the most performance out of the system being worked on.
  • Prior development of a large software project using service-oriented architecture operating with real-time constraints.

What's In It for You?

  • You will get a chance to work on cloud-native and hyper-scale products
  • You will be working with industry leaders in cloud.
  • You can expect a steep learning curve.
  • You will get the experience of solving real-time problems, and eventually you will become a problem solver.

Benefits & Perks:

  • Competitive Salary
  • Health Insurance
  • Open Learning - 100% reimbursement for online technical courses.
  • Fast Growth - opportunities to grow quickly and surely
  • Creative Freedom + Flat hierarchy
  • Sponsorship for all employees who represent the company at events and meetups.
  • Flexible working hours
  • 5-day week
  • Hybrid Working model (Office and WFH)

Our Hiring Process:

Candidates for this position can expect the following hiring process (subject to successfully clearing every round):

  • Initial Resume screening call with our Recruiting team
  • Next, candidates will be invited to solve coding exercises.
  • Next, candidates will be invited for first technical interview
  • Next, candidates will be invited for final technical interview
  • Finally, candidates will be invited for Culture Plus interview with HR
  • Candidates may be asked to interview with the Leadership team
  • Successful candidates will subsequently be made an offer via email

As always, the interviews and screening calls will be conducted via a mix of telephonic and video calls.

So, if you are looking at an opportunity to really make a difference- make it with us…

Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.

Great Place to work Certified
Agency job
via Purple Hirez by Aditya K
Hyderabad
8 - 12 yrs
₹10L - ₹34L / yr
Ansible
DevOps
Jenkins
Docker
Kubernetes
+6 more
  • Collaborate with Dev, QA and Data Science teams on environment maintenance, monitoring (ELK, Prometheus or equivalent), deployments and diagnostics
  • Administer a hybrid datacenter, including AWS and EC2 cloud assets
  • Administer, automate and troubleshoot container based solutions deployed on AWS ECS
  • Be able to troubleshoot problems and provide feedback to engineering on issues
  • Automate deployment (Ansible, Python), build (Git, Maven, Make, or equivalent), and integration (Jenkins, Nexus) processes
  • Learn and administer technologies such as ELK, Hadoop etc.
  • A self-starter with the enthusiasm to learn and pick up new technologies in a fast-paced environment.

Need to have

  • Hands-on Experience in Cloud based DevOps
  • Experience working in AWS (EC2, S3, CloudFront, ECR, ECS, etc.)
  • Experience with any programming language.
  • Experience using Ansible, Docker, Jenkins, Kubernetes
  • Experience in Python.
  • Should be very comfortable working in Linux/Unix environment.
  • Exposure to Shell Scripting.
  • Solid troubleshooting skills
They provide both wholesale and retail funding.
Agency job
via Multi Recruit by Sapna Deb
Bengaluru (Bangalore)
8 - 10 yrs
₹40L - ₹50L / yr
DevOps
Docker
Amazon Web Services (AWS)
CI/CD
Ansible
+5 more
  • 3+ years experience leading a team of DevOps engineers
  • 8+ years experience managing DevOps for large engineering teams developing cloud-native software
  • Strong in networking concepts
  • In-depth knowledge of AWS and cloud architectures/services.
  • Experience within the container and container orchestration space (Docker, Kubernetes)
  • Passion for CI/CD pipelines using tools such as Jenkins, etc.
  • Familiarity with config management tools like Ansible, Terraform, etc.
  • Proven record of measuring and improving DevOps metrics
  • Familiarity with observability tools and experience setting them up
  • Passion for building tools and productizing services that empower development teams.
  • Excellent knowledge of Linux command-line tools and ability to write bash scripts.
  • Strong in Unix/Linux administration and management


KEY ROLES/RESPONSIBILITIES:

  • Own and manage the entire cloud infrastructure
  • Create the entire CI/CD pipeline to build and release
  • Explore new technologies and tools and recommend those that best fit the team and organization
  • Own and manage the site reliability
  • Strong decision-making skills and metric-driven approach
  • Mentor and coach other team members
A firm which works with US clients. Permanent WFH.
Agency job
via Jobdost by Riya Roy
Remote only
4 - 10 yrs
₹7.5L - ₹13L / yr
DevOps
Amazon Web Services (AWS)
Linux/Unix
GCP
Kubernetes
+7 more

This person MUST have:

  • B.E Computer Science or equivalent
  • 2+ years of hands-on experience troubleshooting and setting up Linux environments, with the ability to write shell scripts for any given requirement.
  • 1+ years of hands-on experience setting up and configuring AWS or GCP services from SCRATCH and maintaining them.
  • 1+ years of hands-on experience setting up and configuring Kubernetes & EKS and ensuring high availability of container orchestration.
  • 1+ years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
  • Experience configuring/maintaining one monitoring tool.
  • Excellent verbal & written communication skills.
  • Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
  • Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).

Experience:


  • Minimum 3 years of experience as an SRE/automation engineer building, running, and maintaining production sites. Not looking for candidates who have experience only at L1/L2.

Location:

  • Remotely, anywhere in India

Timings:

  • The person is expected to deliver with both high speed and high quality, as well as work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that will rotate every month.

Position:

  • Full time/Direct
  • We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses and other incentives etc.
  • We don't believe in locking in people with large notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Semiconductor based industry
Bengaluru (Bangalore)
8 - 14 yrs
₹10L - ₹50L / yr
DevOps
Red Hat Linux
EDA
Reporting
Your challenge
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment
for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA Applications, High Performance Compute, and Storage environments - consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the innovation of the company. The following figures give an idea of the scale: the landscape has 75,000+ cores, 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to provide 24/7 support to the chip hardware design and software design projects.
Since the landscape is really too complex to manage the traditional way, it is our strategy to transform our R&D IT design infrastructure into “software-defined datacenters”. This transformation entails a different way of work and a different mind-set (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That’s why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability which is required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.

Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team to design, develop and maintain automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow
• Test and verify the code produced by the team (including your own) to continuously improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with the ServiceNow developers in your team to provide the most intuitive end-user experience possible for the virtual machine OS deployments
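One small example of the kind of image-automation logic this role involves: a hedged sketch of a patch-level compliance check for a base OS image. The package names and version tuples below are invented for illustration; a real implementation would query the package manager or RedHat Satellite.

```python
def out_of_date(installed, required):
    """Return, sorted, the packages whose installed version tuple is
    older than the required baseline (a missing package counts as
    out of date). Version tuples compare element-wise."""
    return sorted(
        pkg for pkg, version in required.items()
        if installed.get(pkg, (0,)) < version
    )

# Invented package data for a hypothetical base image
installed = {"kernel": (4, 18, 0), "openssl": (1, 1, 1)}
required = {"kernel": (4, 18, 0), "openssl": (3, 0, 0), "sudo": (1, 9, 5)}
print(out_of_date(installed, required))  # → ['openssl', 'sudo']
```

A check like this could run in a test stage of the image pipeline before the image is published to virtual machine owners.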

We are looking for a DevOps engineer/consultant with the following characteristics:
• Master or Bachelor degree
• You are a technical, creative, analytical and open-minded engineer that is eager to
learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: You have great knowledge of Linux servers (RedHat), RedHat Satellite 6, and other RedHat products
• Experience in Infrastructure services, e.g., Networking, DNS, LDAP, SMTP
• DevOps mindset: You are a team-player that is eager to develop and maintain cool
products to automate/optimize processes in a complex IT infrastructure and are able
to build and maintain productive working relationships
• You have great English communication skills, both verbal and written.
• No issue working outside business hours to support the platform for critical R&D applications

Other competences we value, but are not strictly mandatory:
• Experience with agile development methods, like Scrum, and are convinced of its
power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your
colleagues to design and develop new solutions as security tight as possible.
• Being a master of automation and orchestration with tools like Ansible Tower (or comparable) and feeling comfortable developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version
control and branching strategies with Git, and preferably have worked with GitLab
before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python and tools
like Selenium.
• An enthusiast on cloud platforms like Azure & AWS.
• Background in and affinity with R&D
www.thecatalystiq.com
Posted by Uneza Maqbool
Mumbai
5 - 15 yrs
₹25L - ₹35L / yr
IBM Director
DevOps
Docker
Kubernetes
Linux/Unix
+1 more

Your Role:

    • Serve as a primary point of contact responsible for the overall health, performance, and capacity of one or more of our Internet-facing services
    • Gain deep knowledge of our complex applications
    • Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth
    • Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment
    • Work closely with development teams to ensure that platforms are designed with "operability" in mind.
    • Function well in a fast-paced, rapidly-changing environment
    • Should be able to lead a team of smart engineers
    • Should be able to strategically guide the team to greater automation adoption

Must Have:

    • Experience building/managing DevOps/SRE teams
    • Strong in troubleshooting/debugging Systems, Network and Applications
    • Strong in Unix/Linux operating systems and Networking
    • Working knowledge of Open source technologies in Monitoring, Deployment and incident management

Good to Have:

      • 3+ years of team management experience
      • Experience in Containers and orchestration layers like Kubernetes, Mesos/Marathon
      • Proven experience in programming & diagnostics in any languages like Go, Python, Java
      • Experience in NoSQL/SQL technologies like Cassandra/MySQL/CouchBase etc.
      • Experience in BigData technologies like Kafka/Hadoop/Airflow/Spark
      • Is a die-hard sports fan
 
 
 
Radical HealthTech
Posted by Shibjash Dutt
NCR (Delhi | Gurgaon | Noida)
2 - 7 yrs
₹5L - ₹15L / yr
Python
Terraform
Amazon Web Services (AWS)
Linux/Unix
Docker
DevOps Engineer


Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.


As a DevOps Engineer at Radical, you will:

  • Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
  • Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
  • Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
  • Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail


We’re looking for someone who has:

  • Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, and Puppet/Ansible/Chef/Terraform scripts.
  • Proficiency with scripting languages like Python and Bash.
  • Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside Software Developers, and monitoring logs, dashboards, etc.
  • Experience integrating with a wide variety of external tools and services
  • Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as hosting an application directly on EC2 vs containerisation or Elastic Beanstalk)


It’s not essential, but great if you have:

  • An established track record of deploying and maintaining systems.
  • Experience with microservices and decomposition of monolithic architectures
  • Proficiency in automated tests.
  • Proficiency with the Linux ecosystem
  • Experience in deploying systems to production on cloud platforms such as AWS


The position is open now, and we are onboarding immediately.


Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.


Radical is based out of Delhi NCR, India, and we look forward to working with you!


We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast and think rigorously, and in people who aren't afraid to challenge assumptions and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.
