Sr. DevOps Engineer

Posted by Zurin Momin
5 - 12 yrs
₹9.5L - ₹14L / yr
Ahmedabad
Skills
Laravel
JavaScript
DevOps
CI/CD
Kubernetes
Docker
Ansible

Job Title: DevOps Engineer


Job Description: We are seeking an experienced DevOps Engineer to support our Laravel, JavaScript (Node.js, React, Next.js), and Python development teams. The role involves building and maintaining scalable CI/CD pipelines, automating deployments, and managing cloud infrastructure to ensure seamless delivery across multiple environments.


Responsibilities:

Design, implement, and maintain CI/CD pipelines for Laravel, Node.js, and Python projects.

Automate application deployment and environment provisioning using AWS and containerization tools.

Manage and optimize AWS infrastructure (EC2, ECS, RDS, S3, CloudWatch, IAM, Lambda).

Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation. Manage configuration automation using Ansible.

Build and manage containerized environments using Docker (Kubernetes is a plus).

Monitor infrastructure and application performance using CloudWatch, Prometheus, or Grafana.

Ensure system security, data integrity, and high availability across environments.

Collaborate with development teams to streamline builds, testing, and deployments.

Troubleshoot and resolve infrastructure and deployment-related issues.
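As a rough illustration of the deployment automation listed above, the following minimal Python sketch shows a CI job triggering an AWS CodeDeploy release from a build artifact already uploaded to S3. The application name, deployment group, bucket, and key are hypothetical placeholders; boto3 and valid AWS credentials are assumed.

```python
# Minimal sketch: trigger an AWS CodeDeploy deployment from a CI job.
# Assumes boto3 is installed and AWS credentials are available in the environment.
import boto3

def deploy_revision(bucket: str, key: str) -> str:
    """Start a CodeDeploy deployment for an app bundle already uploaded to S3."""
    codedeploy = boto3.client("codedeploy")
    response = codedeploy.create_deployment(
        applicationName="laravel-api",        # hypothetical application name
        deploymentGroupName="production",     # hypothetical deployment group
        revision={
            "revisionType": "S3",
            "s3Location": {"bucket": bucket, "key": key, "bundleType": "zip"},
        },
        description="Automated deployment from CI pipeline",
    )
    return response["deploymentId"]

if __name__ == "__main__":
    deployment_id = deploy_revision("my-artifacts-bucket", "builds/app-1.2.3.zip")
    print(f"Started deployment {deployment_id}")
```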


Required Skills:

AWS (EC2, ECS, RDS, S3, IAM, Lambda)

CI/CD Tools: Jenkins, GitLab CI/CD, AWS CodePipeline, CodeBuild, CodeDeploy

Infrastructure as Code: Terraform or AWS CloudFormation

Configuration Management: Ansible

Containers: Docker (Kubernetes preferred)

Scripting: Bash, Python

Version Control: Git, GitHub, GitLab

Web Servers: Apache, Nginx (preferred)

Databases: MySQL, MongoDB (preferred)


Qualifications:

3+ years of experience as a DevOps Engineer in a production environment.

Proven experience supporting Laravel, Node.js, and Python-based applications.

Strong understanding of CI/CD, containerization, and automation practices.

Experience with infrastructure monitoring, logging, and performance optimization.

Familiarity with agile and collaborative development processes.


About Tops Infosolutions

Founded: 2016
Type: Services
Size: 100-1000
Stage: Profitable

About

TOPS Infosolutions, an IT services and enterprise solutions provider, combines technical expertise and business intelligence to help organizations make the most of their business operations and customers. As a software development and consulting company, we offer services to enterprises and startups; over 250 clients in more than 20 countries entrust TOPS with work across diverse industry verticals, including real estate, healthcare, retail, hospitality, food and beverages, and manufacturing.



Similar jobs

Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
Python
CI/CD
Kubernetes
Docker

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid (3 days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems.


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
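As a rough illustration of the Databricks job automation mentioned above, the sketch below triggers a job run and polls it to completion through the Databricks Jobs REST API (2.1). The workspace URL, token, and job ID are placeholders, and error handling is kept minimal.

```python
# Minimal sketch: trigger a Databricks job and poll its state via the Jobs REST API (2.1).
# The workspace URL, token, and job ID below are placeholders, not real values.
import time
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace URL
TOKEN = "dapi-..."                                            # personal access token (placeholder)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def run_job_and_wait(job_id: int, poll_seconds: int = 30) -> str:
    """Start a job run, then poll until it reaches a terminal state."""
    resp = requests.post(f"{HOST}/api/2.1/jobs/run-now", headers=HEADERS, json={"job_id": job_id})
    resp.raise_for_status()
    run_id = resp.json()["run_id"]

    while True:
        status = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                              headers=HEADERS, params={"run_id": run_id})
        status.raise_for_status()
        state = status.json()["state"]
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            return state.get("result_state", state["life_cycle_state"])
        time.sleep(poll_seconds)

# Example: print(run_job_and_wait(123))
```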


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Biz-Tech Analytics
Agency job
via Keep Knockin by Mehul Gulati
Remote only
4 - 10 yrs
₹1L - ₹3L / yr
Linux/Unix
Docker
Python
PyTorch
TensorFlow

Hiring DevOps Engineers (Freelance)

We’re hiring for our client: Biz-Tech Analytics


Role: DevOps Engineer (Freelance)

Experience: 4-7+ years

Project: Terminus Project

Location: Remote

Engagement Type: Freelance | Project-based


About the Role:

Biz-Tech Analytics is looking for experienced DevOps Engineers to contribute to the Terminus Project, a hands-on initiative involving system-level problem solving, automation, and containerised environments.

This role is ideal for engineers who enjoy working close to the system layer, debugging complex issues, and building reliable automation in isolated environments.


Key Responsibilities:

• Work on Linux-based systems, handling process management, file systems, and system utilities

• Write clean, testable Python code for automation and verification

• Build, configure, and manage Docker-based environments for testing and deployment

• Troubleshoot and debug complex system and software issues

• Collaborate using Git and GitHub workflows, including pull requests and branching

• Execute tasks independently and iterate based on structured feedback
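As a rough illustration of combining Python automation with Docker-based environments, the sketch below runs a short-lived container and verifies its exit code and output using the Docker SDK for Python. The image and command are illustrative only.

```python
# Minimal sketch: run a throwaway container and verify its output with the Docker SDK
# for Python (pip install docker). The image and command are illustrative only.
import docker

def run_and_check(image: str = "python:3.11-slim") -> bool:
    """Run a short-lived container, wait for it to exit, and check the result."""
    client = docker.from_env()
    container = client.containers.run(
        image,
        ["python", "-c", "print('ok')"],
        detach=True,
    )
    try:
        result = container.wait()                 # blocks until the container exits
        logs = container.logs().decode().strip()
        return result["StatusCode"] == 0 and logs == "ok"
    finally:
        container.remove(force=True)              # always clean up the test container

if __name__ == "__main__":
    print("container check passed:", run_and_check())
```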


Required Skills & Qualifications:

• Expert-level proficiency with Linux CLI, including Bash scripting

• Strong Python programming skills for automation and tooling

• Hands-on experience with Docker and containerized environments

• Excellent problem-solving and debugging skills

• Proficiency with Git and standard GitHub workflows


Preferred Qualifications:

• Professional experience in DevOps or Site Reliability Engineering (SRE)

• Exposure to cloud platforms such as AWS, GCP, or Azure

• Familiarity with machine learning frameworks like TensorFlow or PyTorch

• Prior experience contributing to open-source projects


Engagement Details

• Fully remote freelance engagement

• Flexible workload, with scope to take on additional tasks

• Opportunity to work on real-world systems supporting advanced AI and infrastructure projects


Apply via Google form: https://forms.gle/SDgdn7meiicTNhvB8


About Biz-Tech Analytics:

Biz-Tech Analytics partners with global enterprises, AI labs, and industrial businesses to help them build and scale frontier AI systems. From data creation to deployment, the team delivers specialised services including human-in-the-loop annotation, reinforcement learning from human feedback (RLHF), and custom dataset creation.

With a network of 500+ vetted developers, STEM professionals, linguists, and domain experts, Biz-Tech Analytics supports leading global platforms by enhancing complex AI models and providing high-precision feedback at scale.

Their work sits at the intersection of advanced research, engineering rigor, and real-world AI deployment, making them a strong partner for cutting-edge AI initiatives.


Arcitech
Navi Mumbai
5 - 7 yrs
₹12L - ₹14L / yr
Cyber Security
VAPT
Cloud Computing
CI/CD
Jenkins

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI



Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.

Full-time

Navi Mumbai, Maharashtra, India

5+ Years Experience

₹12,00,000 - ₹14,00,000 per year

Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)

Location: Vashi, Navi Mumbai (On-site)

Shift: 10:00 AM - 7:00 PM

Experience: 5+ years

Salary: INR 12,00,000 - 14,00,000


Job Summary

Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, and automation skills and hands-on experience in cybersecurity and VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

  • Manage deployments on AWS/Azure
  • Maintain Linux servers & cloud environments
  • Ensure uptime, performance, and scalability


CI/CD & Automation

  • Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
  • Automate tasks using Bash/Python
  • Implement IaC (Terraform/CloudFormation)


Containerization

  • Build and run Docker containers
  • Work with basic Kubernetes concepts


Cybersecurity & VAPT

  • Perform Vulnerability Assessment & Penetration Testing
  • Identify, track, and mitigate security vulnerabilities
  • Implement hardening and support DevSecOps practices
  • Assist with firewall/security policy management


Monitoring & Troubleshooting

  • Use ELK, Prometheus, Grafana, CloudWatch
  • Resolve cloud, deployment, and infra issues
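For illustration, a minimal Python sketch of the kind of monitoring check described above: pulling average EC2 CPU utilisation from CloudWatch with boto3. The instance ID is a placeholder and credentials are assumed to be configured in the environment.

```python
# Minimal sketch: pull average CPU utilisation for an EC2 instance from CloudWatch.
# The instance ID is a placeholder; boto3 and AWS credentials are assumed.
from datetime import datetime, timedelta, timezone
import boto3

def average_cpu(instance_id: str, minutes: int = 60) -> float:
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(minutes=minutes),
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

# Example: print(average_cpu("i-0123456789abcdef0"))
```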


Cross-Team Collaboration

  • Work with Dev, QA, and Security for secure releases
  • Maintain documentation and best practices


Required Skills

  • AWS/Azure, Linux, Docker
  • CI/CD tools: Jenkins, GitHub Actions, GitLab
  • Terraform / IaC
  • VAPT experience + understanding of OWASP, cloud security
  • Bash/Python scripting
  • Monitoring tools (ELK, Prometheus, Grafana)
  • Strong troubleshooting & communication
Tessact
Posted by Apurv Gandhwani
Mumbai
2 - 5 yrs
₹8L - ₹15L / yr
DevOps
Amazon Web Services (AWS)
Docker
Jenkins
  1. Design cloud infrastructure that is secure, scalable, and highly available on AWS, Azure and GCP
  2. Work collaboratively with software engineering to define infrastructure and deployment requirements
  3. Provision, configure and maintain AWS, Azure, GCP cloud infrastructure defined as code
  4. Ensure configuration consistency and compliance using configuration management tools
  5. Administer and troubleshoot Linux based systems
  6. Troubleshoot problems across a wide array of services and functional areas
  7. Build and maintain operational tools for deployment, monitoring, and analysis of AWS, Azure Infrastructure and systems
  8. Perform infrastructure cost analysis and optimization
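As a rough sketch of the cost analysis in item 8, the snippet below pulls last month's AWS spend grouped by service through the Cost Explorer API. It assumes boto3 and credentials with ce:GetCostAndUsage permission; all names and values are illustrative.

```python
# Minimal sketch: last month's AWS spend per service via the Cost Explorer API.
from datetime import date, timedelta
import boto3

def monthly_cost_by_service() -> dict:
    ce = boto3.client("ce")
    today = date.today()
    start = (today.replace(day=1) - timedelta(days=1)).replace(day=1)  # first day of last month
    end = today.replace(day=1)                                         # exclusive end date
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    groups = resp["ResultsByTime"][0]["Groups"]
    return {g["Keys"][0]: float(g["Metrics"]["UnblendedCost"]["Amount"]) for g in groups}

# Example: print the five most expensive services
# for svc, amount in sorted(monthly_cost_by_service().items(), key=lambda kv: -kv[1])[:5]:
#     print(f"{svc}: ${amount:,.2f}")
```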
Kutumb
Posted by Dimpy Mehra
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Kutumb is the first and largest communities platform for Bharat. We are growing at an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.

 

More on this here: https://kutumbapp.com/why-join-us.html

 

We’re excited if you have:

  • Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
  • Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
  • Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
  • Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
  • Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
  • The drive and self-motivation to understand the intricate details of a complex infrastructure environment 
  • Using CI/CD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & Keptn)
  • Hands-on experience working with AWS 
  • Bonus points for knowledge of ETL pipelines and Big data architecture
  • Great problem-solving skills & takes pride in your work
  • Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
  • Abstracting all of the above into as simple an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
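For illustration, a minimal Python sketch in the spirit of the observability stack above: querying Prometheus's HTTP API for scrape targets that are currently down. The Prometheus URL is a placeholder for an in-cluster or port-forwarded endpoint; only the standard /api/v1/query endpoint is used.

```python
# Minimal sketch: list scrape targets reporting up == 0 via the Prometheus HTTP API.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical in-cluster address

def down_targets() -> list[str]:
    """Return the instance labels of all scrape targets currently reporting up == 0."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": "up == 0"})
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [r["metric"].get("instance", "<unknown>") for r in results]

if __name__ == "__main__":
    for instance in down_targets():
        print(f"target down: {instance}")
```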

 

What you’ll be doing:

  • Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
  • Demonstrate great communication skills in working with technical and non-technical audiences
  • Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem

 

Tools we use:

Kops, Argo, Prometheus / Loki / Grafana, Kubernetes, AWS, MySQL / PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK

 

What we offer:

  • High pace of learning
  • Opportunity to build the product from scratch
  • High autonomy and ownership
  • A great and ambitious team to work with
  • Opportunity to work on something that really matters
  • Top of the class market salary and meaningful ESOP ownership
6sense
Posted by Neha Singh
Remote only
10 - 15 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
CI/CD

Required qualifications and must-have skills

  • 5+ years of experience managing a team of 5+ infrastructure software engineers 

  • 5+ years of experience in building and scaling technical infrastructure 

  • 5+ years of experience in delivering software 

  • Experience leading by influence in multi-team, cross-functional projects 

  • Demonstrated experience recruiting and managing technical teams, including performance management and managing engineers 

  • Experience with cloud service providers such as AWS, GCP, or Azure 

  • Experience with containerization technologies such as Kubernetes and Docker 

Nice-to-have skills

  • Experience with Hadoop, Hive and Presto 

  • Application/infrastructure benchmarking and optimization 

  • Familiarity with modern CI/CD practices 

  • Familiarity with reliability best practices 

CoLearn
Posted by Saroj Sahoo
Remote only
5 - 8 yrs
₹30L - ₹50L / yr
Docker
Kubernetes
DevOps
Git
Linux/Unix

About the Company

  • 💰 Early-stage, ed-tech, funded, growing, growing fast
  • 🎯 Mission Driven: Make Indonesia competitive on a global scale
  • 🥅 Build the best educational content and technology to advance STEM education
  • 🥇 Students-First approach
  • 🇮🇩 🇮🇳 Teams in India and Indonesia

 

Skillset 🧗🏼‍♀️

  • You primarily identify as a DevOps/Infrastructure engineer and are comfortable working with systems and cloud-native services on AWS
  • You can design, implement, and maintain secure and scalable infrastructure delivering cloud-based services
  • You have experience operating and maintaining production systems in a Linux based public cloud environment
  • You are familiar with cloud-native concepts - Containers, Lambdas, Orchestration (ECS, Kubernetes)
  • You’re in love with system metrics and strive to help deliver improvements to systems all the time
  • You can think in terms of Infrastructure as Code to build tools for automating deployment, monitoring, and operations of the platform
  • You can be on-call once every few weeks to provide application support, incident management, and troubleshooting
  • You’re fairly comfortable with Git, the AWS CLI, Python, the Docker CLI; in general, all things CLI. Oh! Bash scripting too!
  • You have high integrity, and you are reliable

 

What you can expect from us 👌🏼

 

☮️ Mentorship, growth, great work culture

  • Mentorship and continuous improvement are a part of the team’s DNA. We have a battle-tested robust growth framework. You will have people to look up to and people looking up to you
  • We are a people-first, high-trust, high-autonomy team
  • We live in the TDD, Pair Programming, First Principles world

 

🌏 Remote done right

  • Distributed does not mean working in isolation, feeling alone, being buried in Zoom calls
  • Our leadership team has been WFH for 10+ years now and we know how remote teams work. This will be a place to belong
  • A good balance between deep focussed work and collaborative work ⚖️

 

🖥️ Friendly, humane interview process

  • 30-minute alignment check and screening call
  • A short take-home coding assignment, no more than 2-3 hours. Time is precious
  • Pair programming interview. Collaborate, work together. No sitting behind a desk and judging
  • In-depth engineering discussion around your skills and career so far
  • System design and architecture interview for seniors

 

What we ask from you👇🏼

  • Bring your software engineering — both individual brilliance and collaborative skills
  • Bring your good nature — we're building a team that supports each other
  • Be vested or interested in the company vision
8 years old IT
Agency job
Remote only
8 - 11 yrs
₹8L - ₹13L / yr
DevOps
Agile/Scrum
Scripting language
Python
Linux/Unix

Contract to hire
Total experience of 8 years, with 4 years relevant
• Experience with building and deploying software in the cloud, preferably on Google Cloud Platform (GCP)
• Sound knowledge of building infrastructure as code with Terraform
• Comfortable with test-driven development, testing frameworks, and building CI/CD pipelines with the version control software GitLab
• Strong containerisation skills with Docker, Kubernetes, and Helm
• Familiar with GitLab, systems integration, and BDD
• Solid networking skills, e.g. IP, DNS, VPN, HTTP/HTTPS
• Scripting experience (Bash, Python, etc.)
• Experience in Linux/Unix administration
• Experience with agile methods and practices (Scrum, Kanban, Continuous Integration, Pair Programming, TDD)
Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
Job Title: DevOps Engineer

Work Experience: 3-7 years

Qualification: B.E / M.Tech

Location: Bangalore, India


About Pramata

Pramata's unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or more applications on the Pramata cloud-based customer digitization platform. Pramata's customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.


How Pramata Works

Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing, and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely, and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible, and highly secure.


The opportunity - What you get to do

You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment and application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality, and performance in automated build and deployment management, release management, on-demand environment configuration and automation, configuration and change management, and production environment support.

- Application monitoring, performance management, and production support of mission-critical applications, including application and system uptime and remote diagnostics
- Security - ensure that the highly sensitive data from our customers is secure at all times
- Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
- High availability and disaster recovery - build and maintain systems that are designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
- Automate provisioning and integration tasks as required to deploy new code
- Monitoring - take proactive steps to monitor complex, interdependent systems to ensure that issues are identified and addressed in real time

Skills required:

- Excellent communicator with great interpersonal skills, driving clarity about intricate systems
- Hands-on experience with application infrastructure technologies such as Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
- Good understanding of software application builds, configuration management, and deployments
- Strong scripting skills in Shell, Ruby, Python, Perl, etc.
- A passion for automation
- Comfortable with collaboration, open communication, and reaching across functional borders
- Advanced problem-solving and task break-down ability

Additional Skills (good to have but not mandatory):

- In-depth understanding of and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud)
- Experience using configuration management tools such as Chef, Puppet, Capistrano, Ansible, etc.
- Ability to work under pressure and solve problems using an analytical approach; decisive, fast-moving, and with a positive attitude

Minimum Qualifications:

- Bachelor's degree in Computer Science or a related field
- Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
- Strong programming skills in Python, Shell, or Java
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS
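As a rough illustration of the proactive monitoring described above, the sketch below probes a set of HTTP health endpoints and reports status and latency; in practice the results would feed an alerting or metrics system rather than stdout. The URLs are placeholders.

```python
# Minimal sketch: probe HTTP health endpoints and report status and latency.
# The URLs are hypothetical placeholders.
import time
import requests

ENDPOINTS = [
    "https://app.example.com/health",   # hypothetical application health endpoint
    "https://api.example.com/status",   # hypothetical API status endpoint
]

def probe(url: str, timeout: float = 5.0) -> tuple[bool, float]:
    """Return (is_healthy, latency_seconds) for a single endpoint."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code == 200, time.monotonic() - start
    except requests.RequestException:
        return False, time.monotonic() - start

if __name__ == "__main__":
    for url in ENDPOINTS:
        ok, latency = probe(url)
        print(f"{url}: {'UP' if ok else 'DOWN'} ({latency * 1000:.0f} ms)")
```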
Adobe Systems
Noida, NCR (Delhi | Gurgaon | Noida)
11 - 17 yrs
₹40L - ₹80L / yr
DevOps
SRE
Amazon Web Services (AWS)
Adobe - An Award-Winning Employer

Adobe believes in hiring the very best. We are known for our vibrant, dynamic, and rewarding workplace where personal and professional fulfillment and company success go hand in hand. We take pride in creating exceptional work experiences, encouraging innovation, and being involved with our employees, customers, and communities. We invite you to discover what makes Adobe a place where exceptional people thrive. Click this link to experience A Day in the Life at Adobe: http://www.adobe.com/aboutadobe/careeropp/fma/dayinthelife/

About Technical Operations:

Adobe Technical Operations supports the Adobe Marketing Cloud's design, delivery, and operation. We're a global team of over 200 smart, passionate people. We work with Development and Product Management to balance scope, quality, and time to market for our industry-leading SaaS solutions. Our groups include Security, Networking, Storage, Data Center Operations, 24x7 NOC, Systems Engineering, and Application Development. We work with a wide variety of technologies - we are a collection of organic and acquired products and services. We focus on building services that Development and Operations can reuse to encourage speed, consistency, and value creation.

Responsibilities

• Develop solutions to maintain and optimize the availability and performance of services to ensure a fantastic, reliable experience for our customers
• Envision, design, and build the tools and solutions needed to keep services healthy and responsive
• Continuously improve the techniques and processes used in Operations to optimize costs and increase productivity
• Evaluate and adopt newer technologies emerging in the industry to keep the solution on the cutting edge
• Collaborate across teams - development, quality engineering, product management, program management, etc. - to foster a true DevOps culture and get the right systems and solutions in place for agile delivery of a growing portfolio of SaaS applications, product releases, and infrastructure optimizations
• Work effectively across multiple time zones to collaborate with peers in other geographies
• Handle escalations from different quarters - customers, client care, and engineering teams - resolve the issues, and communicate status effectively across the board
• Create a culture that supports innovation and creativity while delivering high volume in a predictable and reliable way
• Keep the team motivated to go beyond the expected in execution and thought leadership