
Infrastructure Engineer - Database & Storage

at Tech AI startup in Bangalore

Agency job
5 - 8 yrs
₹12L - ₹22L / yr
Remote only
Skills
PostgreSQL
Windows Azure
Terraform
Helm

Infrastructure Engineer – Database & Storage


Responsibilities

  • Design and maintain PostgreSQL, OpenSearch, and Azure Blob/S3 clusters.
  • Implement schema registry, metadata catalog, and time-versioned storage.
  • Configure read replicas, backups, encryption-at-rest, and WORM (Write Once Read Many) compliance.
  • Optimize query execution, indexing, and replication latency.
  • Partner with DevOps on infrastructure as code and cross-region replication.
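
A minimal sketch of what the backup part of this work can look like: a grandfather-father-son (GFS) retention policy over daily snapshots, in plain Python. The function name and policy counts are illustrative assumptions, not a mandated tool.

```python
from datetime import date, timedelta

def backups_to_keep(snapshot_dates, daily=7, weekly=4, monthly=6):
    """Select daily snapshots to retain under a simple GFS policy:
    the last `daily` days, the last `weekly` Sunday snapshots,
    and the last `monthly` first-of-month snapshots."""
    keep = set()
    snaps = sorted(snapshot_dates, reverse=True)
    keep.update(snaps[:daily])                        # recent dailies
    sundays = [d for d in snaps if d.weekday() == 6]  # Sunday == 6
    keep.update(sundays[:weekly])                     # weeklies
    firsts = [d for d in snaps if d.day == 1]
    keep.update(firsts[:monthly])                     # monthlies
    return keep

# A year of daily snapshots ending 2024-06-30 (a Sunday).
today = date(2024, 6, 30)
snaps = [today - timedelta(days=i) for i in range(365)]
kept = backups_to_keep(snaps)
# Everything outside `kept` is eligible for pruning.
```

In practice the same selection would drive deletion of expired snapshots through the cloud provider's API or a `pg_basebackup` archive.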

Requirements

  • 6+ years of database / data-infrastructure administration.
  • Mastery of indexing, partitioning, query tuning, sharding.
  • Proven experience deploying cloud-native DB stacks with Terraform or Helm.
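
The indexing and query-tuning requirement is easy to demonstrate concretely. A standard-library sketch using SQLite (illustrative only; the same principle applies to PostgreSQL via `EXPLAIN`): an index turns a full table scan into an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
                 [(i % 100, "x") for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether a scan or an index search is used.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM events WHERE user_id = 42")
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan("SELECT * FROM events WHERE user_id = 42")
# before mentions a SCAN of the table; after mentions SEARCH ... USING INDEX
```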


Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

Similar jobs

Ride-hailing Industry
Agency job
via Peak Hire Solutions by Dharati Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
Python
Shell Scripting
Kubernetes
Terraform
+15 more

JOB DETAILS:

- Job Title: Lead DevOps Engineer

- Industry: Ride-hailing

- Experience: 6-9 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based or scalable app-based start-up, with experience handling large-scale production traffic.

2.   Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)

4.   Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs

5.   Candidate must have hands-on experience in database migration from scratch

6.   Must have a firm hold on the container orchestration tool Kubernetes

7.   Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

8.   Understanding of programming languages like Go/Python and Java

9.   Working knowledge of databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

10.   Working experience on a cloud platform: AWS

11. Candidate should have a minimum of 1.5 years' stability per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is a fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through ambiguity

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Strong communication and collaboration skills to break down silos
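
The 99.99% uptime target above translates into a concrete error budget; a quick plain-Python calculation:

```python
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Allowed downtime, in minutes, over a window for a given availability SLO."""
    return (1 - slo) * days * 24 * 60

# 99.99% over a 30-day month leaves only about 4.3 minutes of downtime,
# which is why alerting and automated rollback matter at this level.
monthly = error_budget_minutes(0.9999)       # ~4.32 minutes
yearly = error_budget_minutes(0.9999, 365)   # ~52.56 minutes
```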

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Understanding of programming languages like Go/Python and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

The company’s team handles everything – infra, tooling, and a fleet of self-managed databases:

● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

ZeMoSo Technologies
Remote only
4 - 8 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
Python
Shell Scripting
CI/CD
Jenkins
+2 more

Job Overview:

 

You will work with engineering and development teams to integrate and develop cloud solutions and virtualized deployments of a software-as-a-service product. This requires understanding the software system architecture as well as its performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. This position requires the deployment and management of multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.

 

Experience:

 

  • Experience working on billing and budgets for a GCP project - MUST
  • Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE
  • Experience in implementing the recommendations on GCP
  • Architect Certifications on GCP - MUST
  • Excellent communication skills (both verbal & written) - MUST
  • Excellent documentation skills on processes, steps and instructions - MUST
  • At least 2 years of experience on GCP.

 

Basic Qualifications:

  • Bachelor’s/Master’s Degree in Engineering OR Equivalent.
  • Extensive scripting or programming experience (Shell Script, Python).
  • Extensive experience working with CI/CD (e.g. Jenkins).
  • Extensive experience working with GCP, Azure, or Cloud Foundry.
  • Experience working with databases (PostgreSQL, Elasticsearch).
  • Must have a minimum of 2 years of GCP experience, along with GCP certification.
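
For the billing-and-budgets requirement, a small sketch of a budget check over a cost export. The CSV columns and the $1000 threshold are hypothetical, and this is not a GCP API call:

```python
import csv
import io

# Hypothetical cost export: one row per service.
export = """service,cost_usd
Compute Engine,812.40
Cloud Storage,120.10
BigQuery,310.55
"""

def spend_by_service(csv_text):
    """Parse the export into {service: cost}."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["service"]: float(row["cost_usd"]) for row in reader}

def over_budget(spend, budget_usd):
    """Return (exceeded?, total spend) against a budget threshold."""
    total = sum(spend.values())
    return total > budget_usd, total

spend = spend_by_service(export)
exceeded, total = over_budget(spend, budget_usd=1000.0)
# Total spend is 1243.05, so the hypothetical $1000 budget is exceeded.
```

A real setup would feed this from a scheduled billing export and page or notify when `exceeded` flips.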

 

Benefits :

  • Competitive salary.
  • Work from anywhere.
  • Learning and gaining experience rapidly.
  • Reimbursement for basic working set up at home.
  • Insurance (including top-up insurance for COVID).

 

Location :

Remote - work from anywhere.

Ideal joining preferences:
Immediate or 15 days

HappyFox
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹20L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+9 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, visit https://www.happyfox.com/

 

Responsibilities

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research, build and implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Implement consistent observability, deployment and IaC setups
  • Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
  • Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Lead infrastructure security audits

 

Requirements

  • At least 7 years of experience in handling/building Production environments in AWS.
  • At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Experience in security hardening of infrastructure, systems and services.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
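
One concrete slice of the security-hardening requirement above is auditing ingress rules for sensitive ports exposed to the world. A standard-library sketch; the rule dictionaries are a hypothetical shape, not the AWS API:

```python
import ipaddress

# Hypothetical ingress rules as plain dicts.
rules = [
    {"port": 443,  "cidr": "0.0.0.0/0"},   # public HTTPS: usually fine
    {"port": 22,   "cidr": "0.0.0.0/0"},   # SSH open to the world: flag it
    {"port": 5432, "cidr": "10.0.0.0/8"},  # Postgres, internal only: fine
]

SENSITIVE_PORTS = {22, 3389, 5432, 6379, 27017}

def world_open_sensitive(rules):
    """Return rules that expose a sensitive port to 0.0.0.0/0."""
    flagged = []
    for r in rules:
        net = ipaddress.ip_network(r["cidr"])
        # The full IPv4 space has 2**32 addresses; anything that large is "the world".
        if r["port"] in SENSITIVE_PORTS and net.num_addresses >= 2**32:
            flagged.append(r)
    return flagged

flagged = world_open_sensitive(rules)  # flags only the SSH rule
```

A real audit would pull the rules from the cloud API (e.g. security groups) and run a check like this in CI.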

 

 

Ashnik
Ashnik
Agency job
via InvokHR by Sandeepa Kasala
Remote only
8 - 15 yrs
₹10L - ₹20L / yr
Infrastructure
Automation
Python
Amazon Web Services (AWS)
Windows Azure
+3 more

Position- Cloud and Infrastructure Automation Consultant

Location- India(Pan India)-Work from Home

The position:

This exciting role in Ashnik’s consulting team brings a great opportunity to design and deploy automation solutions for Ashnik’s enterprise customers spread across SEA and India. This role takes the lead in consulting customers on automation of cloud and datacentre resources. You will work hands-on with your team, focusing on infrastructure solutions, and automate infrastructure deployments that are secure and compliant. You will provide implementation oversight of solutions to overcome technology and business challenges.

Responsibilities:

· To lead consultative discussions to identify challenges for the customers and suggest the right-fit open-source tools

· Independently determine the needs of the customer and create solution frameworks

· Design and develop moderately complex software solutions to meet needs

· Use a process-driven approach in designing and developing solutions.

· To create consulting work packages and detailed SOWs, and assist the sales team to position them to enterprise customers

· To be responsible for implementation of automation recipes (Ansible/CHEF) and scripts (Ruby, PowerShell, Python) as part of an automated installation/deployment process
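
Automation recipes of the Ansible/Chef kind rest on idempotence: diff the desired state against the current state and apply only the difference, so re-running is a no-op. A minimal Python sketch under that assumption (the flat key/value state shape is hypothetical):

```python
def plan_changes(current: dict, desired: dict):
    """Diff two flat key/value states the way a config-management run does."""
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    to_remove = [k for k in current if k not in desired]
    return to_set, to_remove

def apply_changes(current: dict, desired: dict) -> dict:
    """Apply only the computed diff, returning the new state."""
    to_set, to_remove = plan_changes(current, desired)
    new_state = {**current, **to_set}
    for k in to_remove:
        del new_state[k]
    return new_state

current = {"nginx": "1.24", "debug": "on"}
desired = {"nginx": "1.25"}
state = apply_changes(current, desired)  # {"nginx": "1.25"}
# Running the plan again against the new state is a no-op.
```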

Experience and skills required :

· 8 to 10 years of experience in IT infrastructure

· Proven technical skill in designing and delivering enterprise-level solutions involving integration of complex technologies

· 6+ years of experience with RHEL/windows system automation

· 4+ years of experience using Python and/or Bash scripting to solve and automate common system tasks

· Strong understanding and knowledge of networking architecture

· Experience with Sentinel Policy as Code

· Strong understanding of AWS and Azure infrastructure

· Experience deploying and utilizing automation tools such as Terraform, CloudFormation, CI/CD pipelines, Jenkins, Github Actions

· Experience with HashiCorp Configuration Language (HCL) for module & policy development

· Knowledge of cloud tools including CloudFormation, CloudWatch, Control Tower, CloudTrail and IAM is desirable

This role requires high degree of self-initiative, working with diversified teams and working with customers spread across Southeast Asia and India region. This role requires you to be pro-active in communicating with customers and internal teams about industry trends, technology development and creating thought leadership.

 

About Us

Ashnik is a leading enterprise open-source solutions company in Southeast Asia and India, enabling organizations to adopt open source for their digital transformation goals. Founded in 2009, it offers a full-fledged Open-Source Marketplace, Solutions, and Services – Consulting, Managed, Technical, Training. Over 200 leading enterprises so far have leveraged Ashnik’s offerings in the space of Database platforms, DevOps & Microservices, Kubernetes, Cloud, and Analytics.

As a team culture, Ashnik is a family for its team members. Each member brings in a different perspective, new ideas and diverse background. Yet we all together strive for one goal – to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration. Through an open platform of idea exchange, we create a vibrant environment for growth and excellence.


Package: up to ₹20L

Experience: 8 yrs

RaRa Now
Posted by N SHUBHANGINI
Remote only
2 - 8 yrs
₹7L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

About RaRa Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳

Future of eCommerce Logistics.

  • Data-driven logistics company that is bringing a same-day delivery revolution in Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology

About the Role

  • Build and maintain CI/CD tools and pipelines.
  • Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
  • Continuously improve code quality, product execution, and customer delight.
  • Communicate, collaborate and work effectively across distributed teams in a global environment.
  • Operate to strengthen teams across the product with a shared knowledge base
  • Contribute to improving team relatedness, and help build a culture of camaraderie.
  • Continuously refactor applications to ensure high-quality design
  • Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
  • Excellent bash, and scripting fundamentals and hands-on with scripting in programming languages such as Python, Ruby, Golang, etc.
  • Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
  • Working knowledge of the TCP/IP stack, internet routing, and load balancing
  • Basic understanding of cluster orchestrators and schedulers (Kubernetes)
  • Deep knowledge of Linux as a production environment and container technologies (e.g. Docker), Infrastructure as Code such as Terraform, and K8s administration at large scale.
  • Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
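
Troubleshooting larger distributed infrastructure, as described above, typically means retrying failed calls with exponential backoff rather than tight loops. A minimal standard-library sketch (the function name and defaults are illustrative):

```python
import random

def backoff_delays(base=0.5, cap=30.0, attempts=6, jitter=False, rng=random.random):
    """Exponential backoff schedule: base * 2^n per attempt, capped,
    with optional full jitter to avoid synchronized retry storms."""
    delays = []
    for n in range(attempts):
        d = min(cap, base * (2 ** n))
        delays.append(d * rng() if jitter else d)
    return delays

# Deterministic schedule: 0.5, 1.0, 2.0, 4.0, 8.0, 16.0 seconds.
schedule = backoff_delays()
```

Jitter matters at scale: if thousands of clients retry on the same schedule, the retries themselves become a thundering herd.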
Bootlabs Technologies Private Limited
Posted by Haritha Govindaraj
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
Required

• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.
o AWS
Networking: VPC, VPC Peering, Transit Gateway, RouteTables, SecurityGroups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.

o Azure
Networking: VNET, VNET Peering,
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, VirtualMachines, AzureFunctions

o GCP
Networking: VPC, VPC Peering, Firewall, Flowlogs, Routes, Static and External IP Addresses
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.

• Experience in any one of the CI/CD tools (Gitlab/Github/Jenkins) including runner setup, templating and configuration.

• Kubernetes (EKS/AKS/GKE) or Ansible experience: basics like pods, deployments, networking and service mesh; use of a package manager like Helm.

• Scripting experience (Bash/python), automation in pipelines when required, system service.

• Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code.

Optional
• Experience in any programming language is not required but is appreciated.
• Good experience in Git, SVN or any other code management tool is required.
• DevSecops tools like (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
• Observability tools (Opensource: Prometheus, Elasticsearch, OpenTelemetry; Paid: Datadog, 24/7, etc)
Hiring for one of the product based org for PAN India
Hiring for one of the product based org for PAN India
Agency job
via Natalie Consultants by Swati Bansal
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Pune, Ranchi, Patna, Kolkata, Gandhinagar, Ahmedabad, Indore, Lucknow, Bhopal, Jaipur, Jodhpur
2 - 6 yrs
₹3L - ₹10L / yr
DevOps
CI/CD
Docker
Kubernetes
Amazon Web Services (AWS)
+2 more
We are looking for a proficient, passionate, and skilled DevOps Specialist. You will have the opportunity to build an in-depth understanding of our client's business problems and then implement organizational strategies to resolve the same.

Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.

Benefits of working with OpsTree Solutions:

Opportunity to work on the latest cutting edge tools/technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
GoodWorker
Posted by Sunder E
Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Terraform
+1 more

Why you should join us

 

- You will join the mission to create positive impact on millions of people's lives

- You get to work on the latest technologies in a culture which encourages experimentation

- You get to work with super humans

- You get to work in an accelerated learning environment

 

 

What you will do

 

- You will provide deep technical expertise to your team in building future ready systems.

- You will help develop a robust roadmap for ensuring operational excellence

- You will setup infrastructure on AWS that will be represented as code

- You will work on several automation projects that provide great developer experience

- You will setup secure, fault tolerant, reliable and performant systems

- You will establish clean and optimised coding standards for your team that are well documented

- You will set up systems in a way that are easy to maintain and provide a great developer experience

- You will actively mentor and participate in knowledge sharing forums

- You will work in an exciting startup environment where you can be ambitious and try new things :)

 

 

You should apply if

 

- You have a strong foundation in Computer Science concepts and programming fundamentals

- You have been working on cloud infrastructure setup, especially on AWS, for 8+ years

- You have set up and maintained reliable systems that operate at high scale

- You have experience in hardening and securing cloud infrastructures

- You have a solid understanding of computer networking, network security and CDNs

- Extensive experience in AWS, Kubernetes and optionally Terraform

- Experience in building automation tools for code build and deployment (preferably in JS)

- You understand the hustle of a startup and are good with handling ambiguity

- You are curious, a quick learner and someone who loves to experiment

- You insist on highest standards of quality, maintainability and performance

- You work well in a team to enhance your impact

One Championship
Agency job
via Volks Consulting by Dhirender Gulair
Bengaluru (Bangalore)
4 - 10 yrs
₹30L - ₹35L / yr
DevOps
Kubernetes
Docker
Windows Azure
Ansible
+2 more
About the Role
As part of the engineering team, you would be expected to have
deep technology expertise with a passion for building highly scalable products.
This is a unique opportunity where you can impact the lives of people across 150+
countries!

Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploying and maintaining in-house/customer systems ensuring high availability,
performance and optimal cost.
• Automate build pipelines, ensuring the right architecture for CI/CD
• Work with engineering leaders to ensure cloud security
• Develop standard operating procedures for various facets of Infrastructure
services (CI/CD, Git Branching, SAST, Quality gates, Auto Scaling)
• Perform & automate regular backups of servers & databases. Ensure rollback and
restore capabilities are real-time and zero-downtime.
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps
engineers. Ensure industry standards are followed.

Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using
Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, AlertManager,
New Relic
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure
Pipeline or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust & Browserstack is a plus
Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
+7 more
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata

Pramata’s unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications through the Pramata cloud-based customer digitization platform. Pramata’s customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works

Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing and other systems, and delivers it in the context of a particular user’s role and responsibilities. This is done through Pramata’s unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth and ensures that this data remains consistent, accessible and highly secure.

The opportunity - What you get to do

You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality and performance in:

- Automated build and deployment management, release management, on-demand environment configuration & automation, and configuration and change management.
- Production environment support: application monitoring, performance management and production support of mission-critical applications, including application and system uptime and remote diagnostics.
- Security: ensure that the highly sensitive data from our customers is secure at all times.
- Instrumenting applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues.
- High availability and disaster recovery: build and maintain systems that are designed to provide 99.9% uptime, and ensure that disaster recovery mechanisms are in place.
- Automating provisioning and integration tasks as required to deploy new code.
- Monitoring: proactive steps to monitor complex interdependent systems to ensure that issues are being identified and addressed in real time.

Skills required:

- Excellent communicator with great interpersonal skills, driving clarity about intricate systems.
- Hands-on experience in application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis etc.
- Good understanding of software application builds, configuration management and deployments.
- Strong scripting skills (Shell, Ruby, Python, Perl etc.) and a passion for automation.
- Comfortable with collaboration, open communication and reaching across functional borders.
- Advanced problem-solving and task break-down ability.

Additional Skills (good to have but not mandatory):

- In-depth understanding and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud).
- Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
- Being able to work under pressure and solve problems using an analytical approach; decisive, fast moving, with a positive attitude.

Minimum Qualifications:

- Bachelor’s Degree in Computer Science or a related field.
- Background in technology operations for Linux-based applications with 2-4 years of experience in enterprise software.
- Strong programming skills in Python, Shell or Java.
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet.
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS.