Senior DevOps Engineer/DevOps Lead

at a rapidly growing fintech SaaS firm that propels business growth

Agency job
5 - 10 yrs
₹25L - ₹35L / yr
Bengaluru (Bangalore)
Skills
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
CI/CD
JIRA
Python
Terraform
Ansible
Puppet
Chef

What is the role?

As a DevOps Engineer, you will be responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.

Key Responsibilities

  • Set up, configure, and maintain Git repositories, Jenkins, UCD, and related tools for multi-cloud hosting environments.
  • Architect and maintain the server infrastructure in AWS; build highly resilient infrastructure following industry best practices.
  • Build Docker images and maintain Kubernetes clusters.
  • Develop and maintain automation scripts using Ansible or other available tools.
  • Maintain and monitor cloud Kubernetes clusters, patching them when necessary.
  • Work with cloud security tools to keep applications secure.
  • Participate in the software development lifecycle, specifically the infrastructure design, execution, and debugging required for successful implementation of integrated solutions within the portfolio.
  • Bring the necessary technical and professional expertise.
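As one illustration of the cluster-maintenance automation described above, the sketch below picks out nodes whose kubelet lags a target version from `kubectl get nodes -o json` output. This is a hypothetical example, not part of the role description: the sample payload is invented, and a real script would shell out to `kubectl` rather than use a canned string.

```python
import json

# Hypothetical, trimmed-down sample of `kubectl get nodes -o json` output.
SAMPLE = json.dumps({
    "items": [
        {"metadata": {"name": "node-a"},
         "status": {"nodeInfo": {"kubeletVersion": "v1.27.3"}}},
        {"metadata": {"name": "node-b"},
         "status": {"nodeInfo": {"kubeletVersion": "v1.28.1"}}},
    ]
})

def nodes_behind(kubectl_json: str, target: str) -> list:
    """Return names of nodes whose kubelet version lags the target version."""
    def key(v):
        # "v1.27.3" -> (1, 27, 3) for tuple comparison
        return tuple(int(p) for p in v.lstrip("v").split("."))
    data = json.loads(kubectl_json)
    return [n["metadata"]["name"] for n in data["items"]
            if key(n["status"]["nodeInfo"]["kubeletVersion"]) < key(target)]

print(nodes_behind(SAMPLE, "v1.28.1"))  # ['node-a'] — node-a needs patching
```

In practice the output would feed a drain-patch-uncordon loop rather than a print.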

What are we looking for?

  • 5 to 12 years of experience in the IT industry.
  • Expertise in implementing and managing DevOps CI/CD pipelines.
  • Experience with DevOps automation tools; well versed in DevOps frameworks and Agile.
  • Working knowledge of scripting with Shell, Python, Terraform, Ansible, Puppet, or Chef.
  • Experience with, and a good understanding of, a cloud platform such as AWS, Azure, or Google Cloud.
  • Knowledge of Docker and Kubernetes is required.
  • Proficient troubleshooting skills, with a proven ability to resolve complex technical issues.
  • Experience working with ticketing tools.
  • Knowledge of middleware technologies or databases is desirable.
  • Experience with Jira is a plus.

What can you look for?

A wholesome opportunity in a fast-paced environment that lets you juggle multiple concepts while maintaining quality, share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.

We are

The company is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products: Plum, Empuls, and Compass. It works with more than 2,000 clients across 10+ countries and serves over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.

Way forward

We look forward to connecting with you. We will allow around 3-5 days for applications to come in before screening them and lining up discussions with the hiring manager, and we will aim to keep the overall window for closing this requirement reasonable. Candidates will be kept informed of feedback and their application status.

Similar jobs

Improving
Hyderabad
8 - 12 yrs
₹25L - ₹35L / yr
Kubernetes
Amazon Web Services (AWS)
Prometheus
Docker
Jenkins

Work mode: Work from office (5 days a week)

Location: Hyderabad (onsite)

Experience: 7+ years

  • Hands-on Kubernetes (K8s) experience
  • Linux troubleshooting skills
  • Experience with on-prem servers and their management
  • Helm
  • Docker
  • Ingress and ingress controllers
  • Networking basics
  • Proficient communication


Must-Have Skills:

  • Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
  • Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
  • Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
  • Security and compliance knowledge for regulated industries.
  • Preferred: experience with GKE, RKE, Rook-Ceph, and certifications such as CKA or CKAD.

Who You Are

  • A Kubernetes expert who thrives on scalability, automation, and security.
  • Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
  • Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
  • A problem solver who simplifies complexity and drives cloud-native adoption.

What You’ll Do

  • Architect & automate Kubernetes solutions for air-gapped and multi-region clusters.
  • Optimize CI/CD pipelines & cloud-native deployments.
  • Work with open-source projects, selecting the right tools for the job.
  • Educate & guide teams on modern cloud-native infrastructure best practices.
  • Solve real-world scaling, security, and infrastructure automation challenges.

Why Join Us?

  • Work on high-impact Kubernetes projects in regulated industries.
  • Solve real-world automation & infrastructure challenges with cutting-edge tools.
  • Grow in a team that values learning, open-source contributions, and innovation.
LogiNext
Posted by Rakhi Daga
Mumbai
11 - 15 yrs
₹1L - ₹15L / yr
Microservices
Linux/Unix
Python
Shell Scripting
Amazon Web Services (AWS)
+22 more

Apply only via this link: https://loginext.hire.trakstar.com/jobs/fk025uh?source=

LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to lead the development and operations efforts for the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.

You have hands-on experience building secure, high-performing, and scalable infrastructure, experience automating and streamlining development operations and processes, and mastery in troubleshooting and resolving issues in dev, staging, and production environments.

 

Responsibilities:

  • Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
  • Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure 
  • Explore new tools to improve development operations to automate daily tasks
  • Ensure high availability and auto-failover with minimal or no manual intervention
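On the backup-and-restore point above: the achievable RPO (recovery point objective) is bounded by the largest gap between successive backups. A small illustrative Python helper, not from the listing, that audits a backup schedule against that idea:

```python
from datetime import datetime, timedelta

def worst_case_rpo(backup_times):
    """Worst-case data-loss window = largest gap between successive backups."""
    ordered = sorted(backup_times)
    return max(b - a for a, b in zip(ordered, ordered[1:]))

# Hypothetical backup timestamps for one day.
base = datetime(2024, 1, 1)
backups = [base, base + timedelta(hours=6), base + timedelta(hours=10)]

print(worst_case_rpo(backups))  # 6:00:00 — breaches a 4-hour RPO policy
```

A real audit would pull timestamps from the backup catalogue (e.g. snapshot metadata) and alert when the gap exceeds the policy.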


Requirements:

  • Bachelor’s degree in Computer Science, Information Technology or a related field
  • 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
  • Strong background in Linux/Unix Administration and Python/Shell Scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, Cloud Watch Monitoring, Nagios
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
  • Experience in query analysis, performance tuning, and database redesign
  • Experience in enterprise application development, maintenance and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment and decision-making skills.
  • Excellent leadership skills.
AJACKUS
Posted by Kaushik Vedpathak
Remote only
2 - 7 yrs
₹4L - ₹18L / yr
DevOps
MySQL
Kubernetes
Cloud Computing
Google Cloud Platform (GCP)
+2 more

Type, Location

Full Time @ Anywhere in India

 

Desired Experience

2+ years

 

Job Description

What You’ll Do

● Deploy, automate, and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, DigitalOcean, and Google Cloud Platform.

● Take charge of DevOps activities for CI/CD with the latest tech stacks.

● Acquire industry-recognized professional cloud certifications (AWS/Google) as a developer or architect, and devise multi-region technical solutions.

● Implement the DevOps philosophy and strategy across different domains in the organisation.

● Build automation at various levels, including code deployment, to streamline the release process.

● Be responsible for the architecture of cloud services.

● Monitor the infrastructure 24x7.

● Use programming/scripting in your day-to-day work.

● Bring shell experience - for example, PowerShell on Windows or Bash on *nix.

● Use a version control system, preferably Git.

● Be hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO).

● Handle scalability, high availability, and troubleshooting of web-scale applications.

● Use Infrastructure-as-Code tools like Terraform and CloudFormation.

● Work with CI/CD systems such as Jenkins and CircleCI.

● Use container technologies such as Docker, Kubernetes, and OpenShift.

● Work with monitoring and alerting systems, e.g. New Relic, AWS CloudWatch, Google Stackdriver, Graphite, Nagios/Icinga.
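As a toy illustration of the alerting systems listed above: most alerting tools avoid flapping by requiring a condition to hold for several consecutive samples before firing (Prometheus expresses this with a `for:` clause in an alert rule). A minimal Python sketch of that debounce logic, with invented sample data:

```python
def should_alert(samples, threshold, for_n):
    """Fire only when the last `for_n` samples all breach the threshold,
    mimicking the debounce a `for:` clause gives a Prometheus alert rule."""
    if len(samples) < for_n:
        return False
    return all(s > threshold for s in samples[-for_n:])

cpu = [42.0, 91.0, 95.0, 97.0]  # hypothetical CPU-usage samples (%)
print(should_alert(cpu, threshold=90.0, for_n=3))  # True: three breaches in a row
print(should_alert(cpu, threshold=90.0, for_n=4))  # False: first sample was healthy
```

The same structure generalizes to any metric; real systems evaluate this per time series on a scrape interval.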

 

What you bring to the table

● Hands-on experience with cloud compute services, cloud functions, networking, load balancing, and autoscaling.

● Hands-on with GCP/AWS compute and networking services, e.g. Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, networking (VPC, firewall, load balancer), Cloud SQL, and Datastore.

● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems

● Configuration management tools such as Ansible/Chef/Puppet

 

 

Bonus if you have…

● Basic understanding of networking (routing, switching, DNS) and storage

● Basic understanding of protocols such as UDP/TCP

● Basic understanding of cloud computing

● Basic understanding of cloud computing models like SaaS and PaaS

● Basic understanding of Git or any other source code repository

● Basic understanding of databases (SQL/NoSQL)

● Great problem-solving skills

● Good communication skills

● Adaptable and eager to learn

Startup-E-Learning
Agency job
via Merito by Gaurav Bhosle
Remote
7 - 14 yrs
₹30L - ₹45L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

Senior DevOps Engineer (8-12 yrs exp)

Job Description:
We are looking for an experienced and enthusiastic DevOps Engineer. As our new DevOps Engineer, you will be in charge of the specification and documentation of new project features. In addition, you will develop new features and write scripts for automation using Java/Bitbucket/Python/Bash.

Roles and Responsibilities:
• Deploy updates and fixes
• Utilize various open-source technologies
• Bring hands-on experience with automation tools like Docker, Jenkins, Puppet, etc.
• Build independent web-based tools, microservices, and solutions
• Write scripts and automation using Java/Bitbucket/Python/Bash
• Configure and manage data sources like MySQL, Mongo, Elasticsearch, Redis, etc.
• Understand how various systems work
• Manage code deployments, fixes, updates, and related processes
• Understand how IT operations are managed
• Work with CI and CD tools, and source control such as Git and SVN
• Bring experience with project management and workflow tools such as Agile, Redmine, Workfront, Scrum/Kanban/SAFe, etc.
• Build tools to reduce the occurrence of errors and improve customer experience
• Develop software to integrate with internal back-end systems
• Perform root cause analysis for production errors
• Design procedures for system troubleshooting and maintenance
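Root-cause analysis of production errors usually starts by grouping log lines by failure signature so the loudest failure surfaces first. A minimal, hypothetical Python sketch (the log format and component names are invented for illustration):

```python
import re
from collections import Counter

# Invented application log excerpt.
LOG = """\
2024-05-01T10:00:01 ERROR db: connection refused host=db-1
2024-05-01T10:00:03 INFO  api: request ok
2024-05-01T10:00:05 ERROR db: connection refused host=db-2
2024-05-01T10:00:09 ERROR cache: timeout after 5s
"""

def error_signatures(log):
    """Count ERROR lines by component so the dominant failure stands out."""
    pat = re.compile(r"ERROR (\w+):")
    return Counter(m.group(1)
                   for m in (pat.search(line) for line in log.splitlines())
                   if m)

print(error_signatures(LOG).most_common(1))  # [('db', 2)] — start with the database
```

Real pipelines do the same aggregation at scale with tools like the ELK stack, but the grouping idea is identical.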
Requirements:
• More than six years of experience in a DevOps Engineer role (or a similar role); experience in software development and infrastructure development is mandatory
• Bachelor's degree or higher in engineering or a related field
• Proficiency in deploying and maintaining web applications
• Ability to construct and execute network, server, and application status monitoring
• Knowledge of software automation production systems, including code deployment
• Working knowledge of software development methodologies
• Previous experience with high-performance and high-availability open-source web technologies
• Strong experience with Linux-based infrastructure, Linux/Unix administration, and AWS
• Strong communication skills and the ability to explain protocols and processes to the team and management
• Solid team player
Blue Sky Analytics
Posted by Balahun Khonglanoh
Remote only
1 - 4 yrs
Best in industry
Amazon Web Services (AWS)
DevOps
Amazon EC2
AWS Lambda
ECS
+1 more

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll figure out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that raises alerts for multiple microservices and pipelines. Come save the planet with us!


Your Role

  • Build applications at scale that go up and down on command.
  • Manage a cluster of microservices talking to each other.
  • Build pipelines for huge data ingestion, processing, and dissemination.
  • Optimize services for low cost and high efficiency.
  • Maintain a highly available and scalable PostgreSQL database cluster.
  • Maintain alerting and monitoring systems using Prometheus, Grafana, and Elasticsearch.
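On scaling "up and down on command": Kubernetes' Horizontal Pod Autoscaler computes the desired replica count as ceil(currentReplicas × currentMetric / targetMetric), clamped to configured bounds. An illustrative Python sketch of that rule (the bounds and sample numbers are made up):

```python
import math

def desired_replicas(current, current_metric, target_metric,
                     min_replicas=1, max_replicas=20):
    """Kubernetes HPA scaling rule: ceil(current * metric / target),
    clamped to the configured replica bounds."""
    want = math.ceil(current * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, want))

# Target: 80% average CPU per pod.
print(desired_replicas(4, 160.0, 80.0))  # 8: load doubled, so replicas double
print(desired_replicas(4, 20.0, 80.0))   # 1: load collapsed, scale to the floor
```

The real controller adds tolerances and stabilization windows on top of this formula, but the core arithmetic is as shown.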

Requirements

  • 1-4 years of work experience.
  • Strong emphasis on Infrastructure as Code - CloudFormation, Terraform, Ansible.
  • CI/CD concepts and implementation using CodePipeline and GitHub Actions.
  • Advanced command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
  • Advanced containerization - Docker, Kubernetes, ECS.
  • Experience with managed services like database cluster, distributed services on EC2.
  • Self-starters and curious folks who don't need to be micromanaged.
  • Passionate about Blue Sky Climate Action and working with data at scale.

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension-free.
  • Work machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly retreats: Yes, there's work - but then there's all the non-work fun, a.k.a. the retreat!
  • Yearly vacations: Take paid time off to rest and get ready for the next big assignment.
MTX
Posted by Sinchita S
Hyderabad
7 - 10 yrs
₹38L - ₹56L / yr
DevOps
CI/CD
Google Cloud Platform (GCP)
PostgreSQL
Jenkins
+7 more

MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX's very own artificial intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine: a collection of purpose-built artificial neural networks designed to leverage the power of machine learning. The Maverick platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.


Responsibilities:

  • Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
  • Troubleshoot technical or functional issues in a complex environment to provide timely resolution, with various applications and platforms that are global.
  • Bring experience on Google Cloud Platform.
  • Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
  • Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
  • Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
  • Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
  • Work with users to understand and gather their needs for our catalogue, then participate in the required development.
  • Manage several streams of work concurrently
  • Understand how various systems work
  • Understand how IT operations are managed


What you will bring:

  • 5 years of work experience as a DevOps Engineer.
  • Must possess ample knowledge and experience in system automation, deployment, and implementation.
  • Must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
  • Experience with the software development process and with tools and technologies like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git.
  • Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.

What we offer:


  • Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
    • Sum Insured: INR 5,00,000/- 
    • Maternity cover for up to two children
    • Inclusive of COVID-19 Coverage
    • Cashless & Reimbursement facility
    • Access to free online doctor consultation

  • Personal Accident Policy (Disability Insurance)
    • Sum Insured: INR 25,00,000/- per employee
    • Accidental death and permanent total disability are covered up to 100% of the sum insured
    • Permanent partial disability is covered as per the scale of benefits decided by the insurer
    • Temporary total disability is covered

  • An option of a Paytm Food Wallet (up to Rs. 2,500) as a tax-saver benefit
  • Monthly internet reimbursement of up to Rs. 1,000
  • Opportunity to pursue Executive Programs/ courses at top universities globally
  • Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others


A digital business enablement MNC
Agency job
via Exploro Solutions by Sapna Prabhudesai
Pune, Chennai
4 - 8 yrs
₹3L - ₹20L / yr
DevOps
Ansible
Kubernetes
Docker
Jenkins
+4 more

Minimum 4 years of experience

Skillsets:

  1. Build automation/CI: Jenkins
  2. Secure repositories: Artifactory, Nexus
  3. Build technologies: Maven, Gradle
  4. Development Languages: Python, Java, C#, Node, Angular, React/Redux
  5. SCM systems: Git, Github, Bitbucket
  6. Code Quality: Fisheye, Crucible, SonarQube
  7. Configuration Management: Packer, Ansible, Puppet, Chef
  8. Deployment: uDeploy, XLDeploy
  9. Containerization: Kubernetes, Docker, PCF, OpenShift
  10. Automation frameworks: Selenium, TestNG, Robot
  11. Work Management: JAMA, Jira
  12. Strong problem-solving skills; good verbal and written communication skills
  13. Good knowledge of Linux environments: Red Hat, etc.
  14. Good shell scripting skills
  15. Good to have: cloud technologies such as AWS, GCP, and Azure
Notice period - 15 to 30 days
Relevance Lab
Posted by Mohith Yadukumar
Remote only
5 - 16 yrs
₹10L - ₹45L / yr
DevOps
Kubernetes
Terraform
Docker
Amazon Web Services (AWS)
+5 more
  • 5+ years of hands-on experience designing, deploying, and managing core AWS services and infrastructure
  • Proficiency in scripting using Bash, Python, Ruby, Groovy, or similar languages
  • Experience in source control management, specifically with Git
  • Hands-on experience with Unix/Linux and Bash scripting
  • Experience building and managing Helm-based build-and-release CI/CD pipelines for Kubernetes platforms (EKS, OpenShift, GKE)
  • Strong experience with orchestration and configuration management tools such as Terraform, Ansible, or CloudFormation
  • Ability to debug and analyze issues leveraging tools like AppDynamics, New Relic, and Sumo Logic
  • Knowledge of Agile Methodologies and principles 
  • Good writing and documentation skills
  • Strong collaborator with the ability to work well with core teammates and our colleagues across STS
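Helm-based release pipelines like those described above typically tag each release with a bumped semantic version before publishing a chart or image. A small illustrative Python helper for that step (hypothetical, not tied to any particular pipeline):

```python
def bump(version, part):
    """Bump a semver string the way a release stage in a CI/CD pipeline
    might before tagging a Helm chart or container image."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(bump("1.4.9", "patch"))  # 1.4.10
print(bump("1.4.9", "minor"))  # 1.5.0
```

In a pipeline this would typically be followed by `git tag` and `helm package --version <new>` steps.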
Borgos Technologies
Posted by Anurag Mahanta
Bengaluru (Bangalore)
3 - 10 yrs
₹4L - ₹15L / yr
Kubernetes
Amazon Web Services (AWS)
DevOps
Jenkins
Hadoop
+5 more
• Work closely with the development team, technical lead, and solution architects within the Engineering group to plan ongoing feature development and product maintenance.
• Be familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implement tools and processes for deployment, monitoring, alerting, automation, and scalability, ensuring maximum availability of server infrastructure.
• Manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra.
• Troubleshoot multiple deployment servers, software installation, licensing management, etc.
• Plan, coordinate, and implement network security measures to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve system and network configurations, and determine the hardware or software requirements related to such changes.
Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
+7 more
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata
Pramata's unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications via the Pramata cloud-based customer digitization platform. Pramata's customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works
Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing, and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely, and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible, and highly secure.

The opportunity - What you get to do
You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment and application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality, and performance in:
  • Automated build and deployment management, release management, on-demand environment configuration and automation, and configuration and change management
  • Production environment support: application monitoring, performance management, and production support of mission-critical applications, including application and system uptime and remote diagnostics
  • Security: ensure that the highly sensitive data from our customers is secure at all times
  • Instrumenting applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
  • High availability and disaster recovery: build and maintain systems designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
  • Automating provisioning and integration tasks as required to deploy new code
  • Monitoring: taking proactive steps to monitor complex interdependent systems so that issues are identified and addressed in real time

Skills required:
  • Excellent communicator with great interpersonal skills, driving clarity about intricate systems
  • Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
  • Good understanding of software application builds, configuration management, and deployments
  • Strong scripting skills in Shell, Ruby, Python, Perl, etc., with a passion for automation
  • Comfortable with collaboration, open communication, and reaching across functional borders
  • Advanced problem-solving and task-breakdown ability

Additional skills (good to have but not mandatory):
  • In-depth understanding of and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud)
  • Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
  • Ability to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude

Minimum qualifications:
  • Bachelor's degree in Computer Science or a related field
  • Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
  • Strong programming skills in Python, Shell, or Java
  • Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
  • Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS