
MLOps Engineer

at Reputed client of People First

2 - 9 yrs
₹10L - ₹30L / yr
Chennai, Bengaluru (Bangalore), Gurugram
Skills
DevOps
Kubernetes
Docker
Jenkins
CI/CD
Google Cloud Storage
Microsoft Windows Azure
Amazon Web Services (AWS)
MLOps
Job Description:
As an MLOps Engineer at QuantumBlack, you will:

Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.

Choose and use the right cloud services, DevOps tooling and ML tooling to enable the team to produce high-quality code and release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).

Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.

Our Tech Stack:

We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React and TypeScript, amongst others, in our projects.

Key Skills:

• Excellent hands-on, expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security

• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)

• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (Kubernetes), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)

• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)

• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking, model governance, packaging, deployment, or a feature store); see the short experiment-tracking sketch after this list

• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure

• Knowledge of SQL (intermediate level or above preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
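
As a concrete illustration of the experiment-tracking point above, here is a minimal sketch using MLflow's Python API. The experiment name, parameters and metric values are hypothetical; by default MLflow logs runs to a local ./mlruns store unless a tracking server is configured.

```python
# Minimal MLflow experiment-tracking sketch (hypothetical names and values).
# Assumes `pip install mlflow`; runs are logged to ./mlruns by default.
import mlflow

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters of a (hypothetical) training run
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)

    # ... model training would happen here ...

    # Log evaluation metrics so runs can be compared in the MLflow UI
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_metric("f1", 0.91)
```

Runs logged this way can be browsed with `mlflow ui`; in a production setup the tracking URI would typically point at a shared MLflow server.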


Similar jobs

CLOUDSUFI
Posted by Ayushi Dwivedi
Noida
4 - 8 yrs
₹15L - ₹22L / yr
Google Cloud Platform (GCP)
Kubernetes
Terraform
Docker
Helm

About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Our Values

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


What are we looking for

We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams.


Required Skills and Experience:

- 7+ years of experience in DevOps, infrastructure automation, or related fields.

- Advanced expertise in Terraform for infrastructure as code.

- Solid experience with Helm for managing Kubernetes applications.

- Proficient with GitHub for version control, repository management, and workflows.

- Extensive experience with Kubernetes for container orchestration and management.

- In-depth understanding of Google Cloud Platform (GCP) services and architecture.

- Strong scripting and automation skills (e.g., Python, Bash, or equivalent); a small illustrative automation sketch follows this list.

- Excellent problem-solving skills and attention to detail.

- Strong communication and collaboration abilities in agile development environments.
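
The scripting-and-automation requirement above is the kind of day-to-day glue work this role involves. As a purely illustrative sketch (the namespace, tooling choice and policy are assumptions, not part of this job description), a short Python script might audit Kubernetes Deployments for missing resource limits:

```python
# Illustrative sketch: flag Deployments whose containers have no resource limits.
# Assumes kubectl is installed and configured against a cluster.
import json
import subprocess

def deployments_without_limits(namespace: str = "default") -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    offenders = []
    for dep in json.loads(out).get("items", []):
        containers = dep["spec"]["template"]["spec"]["containers"]
        if any(not c.get("resources", {}).get("limits") for c in containers):
            offenders.append(dep["metadata"]["name"])
    return offenders

if __name__ == "__main__":
    for name in deployments_without_limits():
        print(f"deployment/{name} has containers without resource limits")
```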


Preferred Qualifications:

- Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD).

- Knowledge of additional cloud platforms (e.g., AWS, Azure).

- Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).


Behavioral Competencies

• Must have worked with US/Europe based clients in onsite/offshore delivery models.

• Should have very good verbal and written communication, technical articulation, listening and presentation skills.

• Should have proven analytical and problem solving skills.

• Should have a collaborative mindset for cross-functional teamwork

• Passion for solving complex search problems

• Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills.

• Should be a quick learner, self-starter, go-getter and team player.

• Should have experience of working under stringent deadlines in a Matrix organization structure.

CloudTechner
Posted by Siba Pulugurty
Gurugram, Noida
8 - 12 yrs
₹20L - ₹40L / yr
Microsoft Windows Azure
Kubernetes
Terraform
Ansible
GitHub
+3 more

The Azure DevOps engineer should have a deep understanding of container principles and hands-on experience with Docker.

They should also be able to set up and manage clusters using Azure Kubernetes Service (AKS), and understand API management, Azure Key Vault, ACR, and networking concepts such as virtual networks, subnets, NSGs and route tables. Awareness of at least one API gateway such as Apigee, Kong, or Azure APIM is a must, along with strong experience with IaC technologies like Terraform, ARM/Bicep templates, GitHub pipelines, Sonar, etc. (a small illustrative Key Vault sketch appears after the responsibilities list below).


  • Designing DevOps strategies: Recommending strategies for migrating and consolidating DevOps tools, designing an Agile work management approach, and creating a secure development process
  • Implementing DevOps development processes: Designing version control strategies, integrating source control, and managing build infrastructure
  • Managing application configuration and secrets: Ensuring system and infrastructure availability, stability, scalability, and performance
  • Automating processes: Overseeing code releases and deployments with an emphasis on continuous integration and delivery
  • Collaborating with teams: Working with architect and developers to ensure smooth code integration and collaborating with development and operations teams to define pipelines.
  • Documentation: Producing detailed Development Architecture design, setting up the DevOps tools and working together with the CI/CD specialist in integrating the automated CI and CD pipelines with those tools
  • Ensuring security and compliance/DevSecOps: Managing code quality and security policies
  • Troubleshooting issues: Investigating issues and responding to customer queries
  • Core Skills: Deep understanding of container principles with hands-on Docker experience; setting up and managing AKS clusters; API management, Azure Key Vault, ACR, and networking concepts (virtual networks, subnets, NSGs, route tables); awareness of at least one of Apigee, Kong, or Azure APIM; and strong experience with IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, and Sonar.
  • Additional Skills: Self-starter with the ability to execute tasks on time, excellent communication skills, the ability to come up with multiple solutions to problems, interacting with client-side experts to resolve issues by providing correct pointers, excellent debugging skills, and the ability to break down tasks into smaller steps.
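
Since Azure Key Vault is called out above, here is a minimal, illustrative sketch of reading an application secret from a vault with the Python SDK. The vault URL and secret name are hypothetical, and the script assumes `pip install azure-identity azure-keyvault-secrets` plus an identity (managed identity or `az login`) that has read access to the vault.

```python
# Illustrative sketch: fetch a secret from Azure Key Vault (hypothetical names).
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://example-vault.vault.azure.net"  # hypothetical vault URL

def get_database_password() -> str:
    credential = DefaultAzureCredential()  # resolves managed identity, CLI login, env vars, ...
    client = SecretClient(vault_url=VAULT_URL, credential=credential)
    return client.get_secret("db-password").value  # hypothetical secret name

if __name__ == "__main__":
    print("fetched secret of length", len(get_database_password()))
```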


Coredge
Posted by Sajal Saxena
Remote only
3 - 6 yrs
₹8L - ₹20L / yr
OpenStack
Ceph
Docker
Kubernetes
DevOps
+3 more
You will drive automation for implementing scalable and robust applications, bringing dedication and passion to server-side optimization and ensuring low latency and high performance for the cloud deployed within the datacentre. You should have sound knowledge of the OpenStack and Kubernetes domains.

YOUR ‘OKR’ SUMMARY

OKR means Objective and Key Results.

As a Cloud Engineer, you will understand the overall movement of data in the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs and own deployment of those. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests for those, and reviewing the work and the test results.


What you will do

  • End-to-end RHOSP deployment (undercloud and overcloud), considered as an NFVi deployment
  • Installing Red Hat’s OpenStack technology using the OSP-director
  • Deploying a Red Hat OpenStack Platform based on Red Hat's reference architecture
  • Deploying managed hosts with required OpenStack parameters
  • Deploying three (3) node highly available (HA) controller hosts using Pacemaker and HAProxy
  • Deploying all the supplied compute hosts that will be hosting multiple VNFs (SRIOV & DPDK)
  • Implementing CEPH
  • Integrating software-defined storage (Ceph) with RHOSP and Red Hat OpenStack Platform operational tools as per industry standards & best practices
  • Detailed network configuration and implementation using Neutron networking with the VXLAN network type and the Modular Layer 2 (ML2) Open vSwitch plugin
  • Integrating Monitoring Solution with RHOSP
  • Design and deployment of common Alarm and Performance Management solution
  • Red Hat OpenStack management & monitoring
  • VM Alarm and Performance management
  • Cloud Management Platform will be configured with day-to-day operational tools to measure CPU/memory/network utilization etc. at the VM level (a tiny illustrative health-check sketch follows this list)
  • Baseline Security Standard (BSS) & Vulnerability Assessment (VA) for RHOSP.
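
To illustrate the day-2 management and monitoring items above, here is a tiny sketch using the openstacksdk Python client. It assumes `pip install openstacksdk` and a clouds.yaml entry named "mycloud" (the cloud name is hypothetical); it simply reports servers that are not ACTIVE.

```python
# Illustrative OpenStack health-check sketch (hypothetical cloud name).
import openstack

def report_unhealthy_servers(cloud: str = "mycloud") -> None:
    conn = openstack.connect(cloud=cloud)  # credentials come from clouds.yaml
    for server in conn.compute.servers():  # all servers visible to the project
        if server.status != "ACTIVE":
            print(f"{server.name}: {server.status}")

if __name__ == "__main__":
    report_unhealthy_servers()
```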


Additional Advantage:

  • Deep understanding of technology and passionate about what you do.
  • Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
  • Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
  • Strong commitment to get the most performance out of a system being worked on.
  • Prior development of a large software project using service-oriented architecture operating with real time constraints. 


What's In It for You

  • You will get a chance to work on cloud-native and hyper-scale products
  • You will be working with industry leaders in cloud.
  • You can expect a steep learning curve.
  • You will get the experience of solving real-time problems; eventually you become a problem solver.


Benefits & Perks

  • Competitive Salary
  • Health Insurance
  • Open Learning - 100% Reimbursement for online technical courses.
  • Fast Growth - opportunities to grow quickly and surely
  • Creative Freedom + Flat hierarchy
  • Sponsorship for all employees who represent the company in events and meetups.
  • Flexible working hours
  • 5-day week
  • Hybrid Working model (Office and WFH)


Our Hiring Process

Candidates for this position can expect the hiring process as follows (subject to successful clearing of every round)

  • Initial Resume screening call with our Recruiting team
  • Next, candidates will be invited to solve coding exercises.
  • Next, candidates will be invited for first technical interview
  • Next, candidates will be invited for final technical interview
  • Finally, candidates will be invited for Culture Plus interview with HR
  • Candidates may be asked to interview with the Leadership team
  • Successful candidates will subsequently be made an offer via email

As always, the interviews and screening call will be conducted via a mix of telephonic and video call.


So, if you are looking at an opportunity to really make a difference- make it with us…


Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.

HCL
Agency job
via Saiva System by Sunny Kumar
Bengaluru (Bangalore)
5 - 8 yrs
₹3L - ₹15L / yr
Docker
Kubernetes
DevOps
Jenkins
Ansible
+8 more

Client: Sony Corporation

Positions: 3

Experience: 5-8 years

DevOps Engineer

Location: Bangalore

Budget: 16.5 LPA max

Tech stack: Gerrit, Jenkins, RabbitMQ, AWS, Linux, Python, Ansible, Tomcat, PostgreSQL, Grafana, Groovy, HTML, Shell, Apache, Git, ELK

Synapsica Technologies Pvt Ltd
Posted by Human Resources
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹40L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+5 more

Introduction

Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls

 

Your Roles and Responsibilities

The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.

Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.

 

 

Primary Responsibilities

  • Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
  • Optimization and execution of the CI/CD pipelines of multiple products, and timely promotion of releases to production environments
  • Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
  • Strategize, implement and verify secure coding techniques, integrate code security tools for Continuous Integration
  • Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured  metrics, testing frameworks, and debugging methodologies.
  • Technical documentation through all stages of development
  • Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation

 

Requirements

  • Minimum of 6 years of experience with DevOps tools.
  • Working experience with Linux, container orchestration and management technologies (Docker, Kubernetes, EKS, ECS …).
  • Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
  • Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
  • Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
  • Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana, etc.); a tiny Prometheus instrumentation sketch follows this list.
  • DevOps mindset and experience with Agile/Scrum methodology
  • Basic knowledge of storage and databases (SQL and NoSQL)
  • Good understanding of networking technologies, HAProxy, firewalling and security.
  • Experience in Security vulnerability scans and remediation
  • Experience in API security and credentials management
  • Worked on Microservice configurations across dev/test/prod environments
  • Ability to quickly adapt to new languages and technologies
  • A strong team player attitude with excellent communication skills.
  • Very high sense of ownership.
  • Deep interest and passion for technology
  • Ability to plan projects, execute them and meet the deadline
  • Excellent verbal and written English communication.
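
As a small illustration of the monitoring-stack requirement above, the sketch below exposes Prometheus metrics from a Python service using prometheus_client. The metric names and port are hypothetical assumptions, not part of this job description.

```python
# Illustrative Prometheus instrumentation sketch (hypothetical metric names/port).
# Assumes `pip install prometheus-client`.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

A Prometheus server would scrape the /metrics endpoint, and Grafana dashboards or alert rules would be built on top of those series.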
Growth Partner For Businesses
Agency job
via Unnati by Astha Bharadwaj
Remote only
2 - 6 yrs
₹15L - ₹25L / yr
DevOps
Docker
Kubernetes
Reliability engineering
Python
+6 more
Are you a pragmatic tech person who understands business and technology alike, is full of curiosity, and stays updated with the latest trends? Then this job is for you!
 
Our client is a niche software company that builds strong backend software and helps businesses scale through their growth process. They help to solve problems in software, data engineering and infrastructure engineering. They remotely build and manage tech teams, design architecture that is known for its robustness and reliability, and use mainstream languages like Golang, Python and Ruby. The consulting firm also provides on-site solutions like observability, data centres, alert systems, data recovery sites, and also helps to set up Data Compliance Policies for their clients. With some major startups as clients, the team has grown tremendously over the past couple of years.
 
As an SRE/DevOps engineer, you will work on improving reliability and uptime for our customers.
 
What you will do:
  • Working on the scalability, maintainability and reliability of the company's products.
  • Working with clients to solve their day-to-day challenges, moving manual processes to automation (a small illustrative sketch follows this list).
  • Keeping systems reliable and gauging the effort it takes to get there.
  • Juxtaposing tools and technologies to choose x over y.
  • Understanding Infrastructure as Code and applying software design principles to it.
  • Automating tedious work using your favourite scripting languages.
  • Taking code from the local system to production by implementing Continuous Integration and Delivery principles.
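
To make the "manual processes to automation" point concrete, here is a purely illustrative sketch that turns a manual reachability check into a script: it probes a list of host:port endpoints over TCP and reports any that are down. The endpoints are hypothetical placeholders.

```python
# Illustrative sketch: TCP reachability check for a list of internal services.
import socket

ENDPOINTS = [("db.internal", 5432), ("cache.internal", 6379), ("api.internal", 443)]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        status = "OK" if is_reachable(host, port) else "DOWN"
        print(f"{host}:{port} {status}")
```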

 

 

What you need to have:

  • Worked with any one of the programming languages like Go, Python, Java, Ruby.
  • Work experience with public cloud providers like AWS, GCP or Azure.
  • Understanding of Linux systems and Containers
  • Meticulous in creating and following runbooks and checklists
  • Microservices experience and use of orchestration tools like Kubernetes/Nomad.
  • Understanding of Computer Networking fundamentals like TCP, UDP.
  • Strong bash scripting skills.
Blue Sky Analytics
Posted by Balahun Khonglanoh
Remote only
1 - 4 yrs
Best in industry
Amazon Web Services (AWS)
DevOps
Amazon EC2
AWS Lambda
ECS
+1 more

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll be figuring out how to deploy applications ensuring high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!


Your Role

  • Build applications at scale that can go up and down on command.
  • Manage a cluster of microservices talking to each other.
  • Build pipelines for huge data ingestion, processing, and dissemination.
  • Optimize services for low cost and high efficiency.
  • Maintain high availability and scalable PSQL database cluster.
  • Maintain alert and monitoring system using Prometheus, Grafana, and Elastic Search.

Requirements

  • 1-4 years of work experience.
  • Strong emphasis on Infrastructure as Code - Cloudformation, Terraform, Ansible.
  • CI/CD concepts and implementation using Codepipeline, Github Actions.
  • Advanced command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.; a small illustrative boto3 sketch follows this list.
  • Advanced Containerization - Docker, Kubernetes, ECS.
  • Experience with managed services like database cluster, distributed services on EC2.
  • Self-starters and curious folks who don't need to be micromanaged.
  • Passionate about Blue Sky Climate Action and working with data at scale.
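
As a small illustration of the AWS requirement above, here is a hedged boto3 sketch that lists Lambda functions and flags ones still on an old Python runtime. The region, runtime threshold and policy are assumptions; it only requires that AWS credentials are configured (environment variables, profile or instance role).

```python
# Illustrative boto3 sketch: flag Lambda functions on outdated Python runtimes.
import boto3

def outdated_lambda_functions(region: str = "ap-south-1") -> list[str]:
    client = boto3.client("lambda", region_name=region)
    outdated = []
    for page in client.get_paginator("list_functions").paginate():
        for fn in page["Functions"]:
            if fn.get("Runtime", "").startswith(("python2.", "python3.6")):
                outdated.append(fn["FunctionName"])
    return outdated

if __name__ == "__main__":
    for name in outdated_lambda_functions():
        print("needs runtime upgrade:", name)
```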

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community where you can use, contribute to, and collaborate on open source.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly Retreats: Yes there's work-but then there's all the non-work+fun aspect aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
symphony retail ai
Agency job
Bengaluru (Bangalore), WFH
1 - 3 yrs
₹1L - ₹5.5L / yr
DevOps
Shell Scripting
Linux/Unix

THE OPPORTUNITY

A platform to learn and grow in a great working environment with cutting-edge technologies. There is vast opportunity to showcase creative thinking, which can be translated into highly optimized tools/utilities.

 

KEY ACTIVITIES

  • Code Integrations (Compile, Build, Notify)
  • Package creation (Service Packs, Patches, Hotfixes)
  • Environment Preparation (with GOLD SCM technical stack)
  • Environment & Infra management
  • Package delivery (Customer specific & Standard)
  • Build & Deployment automation
  • Tickets management

 

KEY CRITERIA

Primary Technical Skills  

  • Shell Scripting
  • ORACLE DB Fundamentals
  • Infra Monitoring & Diagnostics
  • ClearCase / GIT
  • DevOps concepts
  • Basic Java

Other Required Skills

  • Problem Solving
  • Logical Reasoning
  • Aptitude & Attitude
  • Communication (Verbal & Written)
Radical HealthTech
Posted by Shibjash Dutt
NCR (Delhi | Gurgaon | Noida)
2 - 7 yrs
₹5L - ₹15L / yr
Python
Terraform
Amazon Web Services (AWS)
Linux/Unix
Docker
DevOps Engineer


Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.


As a DevOps Engineer at Radical, you will:

Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail


We’re looking for someone who has:

Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, and Puppet/Ansible/Chef/Terraform configurations (a brief illustrative script sketch follows this list).
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD and working alongside Software Developers, monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as hosting an application directly on EC2 vs containerisation, or Elastic Beanstalk)
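
In the spirit of the "well-documented and well-tested scripts" point above, here is a tiny illustrative example of a script with an inline test; the log format it assumes ("LEVEL message") is hypothetical. The test runs with `pytest thisfile.py`, and the script itself reads log lines from stdin.

```python
# Illustrative sketch: a small, documented script with an inline pytest test.
from __future__ import annotations

import sys

def count_error_lines(lines: list[str]) -> int:
    """Count log lines whose first field is ERROR (assumed format: 'LEVEL message')."""
    return sum(1 for line in lines if line.split(" ", 1)[0] == "ERROR")

def test_count_error_lines() -> None:
    sample = ["INFO started", "ERROR db timeout", "ERROR retry failed", "INFO done"]
    assert count_error_lines(sample) == 2

if __name__ == "__main__":
    print(count_error_lines(sys.stdin.read().splitlines()))
```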


It’s not essential, but great if you have:

An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS


The position is open now, and we are onboarding immediately.


Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.


Radical is based out of Delhi NCR, India, and we look forward to working with you!


We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast, think rigorously and for people who aren’t afraid to challenge assumptions, and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.

Yulu Bikes
Posted by Keerthana k
Bengaluru (Bangalore)
3 - 7 yrs
₹7L - ₹15L / yr
DevOps
Kubernetes
Jenkins
Docker
Linux/Unix
+7 more
  • Mandatory: Docker, AWS, Linux, Kubernetes or ECS
  • Prior experience provisioning and spinning up AWS Clusters / Kubernetes
  • Production experience to build scalable systems (load balancers, memcached, master/slave architectures)
  • Experience supporting a managed cloud services infrastructure
  • Ability to maintain, monitor and optimise production database servers
  • Prior work with Cloud Monitoring tools (Nagios, Cacti, CloudWatch etc.)
  • Experience with Docker, Kubernetes, Mesos, NoSQL databases (DynamoDB, Cassandra, MongoDB, etc)
  • Other Open Source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
  • In-depth knowledge on Linux Environment.
  • Prior experience leading technical teams through the design and implementation of systems infrastructure projects.
  • Working knowledge of Configuration Management (Chef, Puppet or Ansible preferred) Continuous Integration Tools (Jenkins preferred)
  • Experience in handling large production deployments and infrastructure.
  • DevOps based infrastructure and application deployments experience.
  • Working knowledge of the AWS network architecture including designing VPN solutions between regions and subnets
  • Hands-on knowledge with the AWS AMI architecture including the development of machine templates and blueprints
  • He/she should be able to validate that the environment meets all security and compliance controls.
  • Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, Cost Management Platform.
  • Proven written and verbal communication skills.
  • Understands and can serve as the technical team lead to oversee the build of the Cloud environment based on customer requirements.
  • Previous NOC experience.
  • Client Facing Experience with excellent Customer Communication and Documentation Skills