DevOps Engineer

Posted by Madhavan I
5 - 10 yrs
₹20L - ₹40L / yr
Remote only
Skills
DevOps
Amazon Web Services (AWS)
Kubernetes
CI/CD
Python
Bash

Job Description: DevOps Engineer

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.


Role Summary:

We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.


Key Responsibilities:

• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
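The Kubernetes responsibility above is often scripted rather than hand-written. As a rough illustration (the service name, image, and replica count below are hypothetical, not LodgIQ's actual stack), a Deployment manifest can be generated programmatically and piped to `kubectl apply -f -`:

```python
import json

def deployment_manifest(name, image, replicas=3, port=8080):
    """Build a minimal Kubernetes Deployment as a plain dict.

    The dict serializes to JSON (which is valid YAML). All names
    here are illustrative placeholders.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }]
                },
            },
        },
    }

manifest = deployment_manifest("pricing-api", "registry.example.com/pricing-api:1.4.2")
print(json.dumps(manifest, indent=2))
```

Generating manifests from code keeps labels and selectors consistent, which a hand-edited YAML file can easily get wrong.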


Required Qualifications:

• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments

Preferred Qualifications:

• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with zero-downtime deployments, canary releases, and blue/green deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
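For readers unfamiliar with the canary-release pattern named in the qualifications: one common sketch (purely illustrative, not a description of any specific employer's setup) routes a fixed percentage of traffic to the new version, keyed deterministically on a stable identifier so a given user always lands on the same version:

```python
import zlib

def route_version(user_id, canary_percent):
    """Deterministically route a user to 'canary' or 'stable'.

    Hashing the user ID (rather than choosing at random) keeps each
    user's assignment sticky across requests, which matters for sessions.
    """
    bucket = zlib.crc32(user_id.encode()) % 100  # stable bucket in 0-99
    return "canary" if bucket < canary_percent else "stable"

# 0% canary sends everyone to stable; 100% sends everyone to canary.
assignments = {route_version(f"user-{i}", 10) for i in range(1000)}
```

Real rollouts would ramp `canary_percent` gradually while watching error rates, and roll back if the canary misbehaves.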


Tools and Technologies You’ll Use:

• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack


Why Join Us?

• Competitive salary and performance bonuses
• Remote-friendly work culture
• Opportunity to work on cutting-edge tech in AI and ML
• Collaborative, high-growth startup environment
• For more information, visit http://www.lodgiq.com

Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

About Hashone Careers

Founded: 2021
Type: Services
Size: 0-20
Stage: Profitable


Similar jobs

IT Services & Staffing Solutions Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
12 - 14 yrs
₹29L - ₹38L / yr
Amazon Web Services (AWS)
DevOps
Terraform
Troubleshooting
Amazon VPC
+16 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Hands-On AWS Cloud Engineering / DevOps Profile
  • Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
  • Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
  • Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
  • Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
  • Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
  • Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
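As a small taste of the VPC-design knowledge the networking requirement refers to, Python's standard `ipaddress` module can sanity-check that proposed subnets fit inside a VPC CIDR and don't overlap. The CIDR values below are made-up examples:

```python
import ipaddress

def validate_subnets(vpc_cidr, subnet_cidrs):
    """Return a list of problems found when carving subnets out of a VPC CIDR."""
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = [ipaddress.ip_network(c) for c in subnet_cidrs]
    problems = []
    for s in subnets:
        if not s.subnet_of(vpc):
            problems.append(f"{s} is outside {vpc}")
    # Pairwise overlap check between proposed subnets.
    for i, a in enumerate(subnets):
        for b in subnets[i + 1:]:
            if a.overlaps(b):
                problems.append(f"{a} overlaps {b}")
    return problems

issues = validate_subnets("10.0.0.0/16", ["10.0.1.0/24", "10.0.2.0/24", "10.1.0.0/24"])
```

A check like this is cheap to run in CI against Terraform variables before any plan is applied.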


ROLE & RESPONSIBILITIES:

We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.


KEY RESPONSIBILITIES:

  • Operate and support AWS production environments across multiple accounts
  • Manage infrastructure using Terraform and support CI/CD pipelines
  • Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
  • Build and manage Docker images and push to Amazon ECR
  • Monitor systems using CloudWatch and third-party tools; respond to incidents
  • Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
  • Assist with cost optimization, tagging, and governance standards
  • Automate operational tasks using Python, Lambda, and Systems Manager
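To make the tagging/governance and Python/Lambda automation bullets concrete, here is a hedged sketch of the kind of pure-Python compliance check that might run inside a Lambda. In a real deployment the resource inventory would come from the AWS APIs via boto3; here it is stubbed with dicts so the logic stands alone, and the required-tag policy is an invented example:

```python
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # example policy, not an AWS standard

def untagged_resources(resources):
    """Return (ARN, missing tags) pairs for resources failing the tag policy.

    `resources` mimics the shape of Resource Groups Tagging API results:
    a list of {"ResourceARN": ..., "Tags": {...}} dicts.
    """
    noncompliant = []
    for r in resources:
        missing = REQUIRED_TAGS - set(r.get("Tags", {}))
        if missing:
            noncompliant.append((r["ResourceARN"], sorted(missing)))
    return noncompliant

inventory = [
    {"ResourceARN": "arn:aws:ec2:us-east-1:111:instance/i-1",
     "Tags": {"Owner": "data", "Environment": "prod", "CostCenter": "42"}},
    {"ResourceARN": "arn:aws:ec2:us-east-1:111:instance/i-2",
     "Tags": {"Owner": "web"}},
]
report = untagged_resources(inventory)
```

The same report feeds cost-allocation dashboards: untagged resources are exactly the spend you cannot attribute.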


IDEAL CANDIDATE:

  • Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Experience with Terraform and Git-based workflows
  • Hands-on experience with Kubernetes / EKS
  • Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Scripting experience in Python or Bash
  • Understanding of monitoring, incident management, and cloud security basics


NICE TO HAVE:

  • AWS Associate-level certifications
  • Experience with Karpenter, Prometheus, New Relic
  • Exposure to FinOps and cost optimization practices
Cloud based testing platform - Product based startup
Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
4 - 9 yrs
₹40L - ₹50L / yr
Kubernetes
Docker
Java
DevOps
Amazon Web Services (AWS)
+3 more

Role – Sr. DevOps Engineer

Location - Bangalore

Experience - 5+ years

Responsibilities

  • Implementing various development, testing, and automation tools, and IT infrastructure
  • Planning the team structure and activities, and involvement in project management
  • Defining and setting development, test, release, update, and support processes for DevOps operation
  • Troubleshooting issues and fixing code bugs
  • Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste
  • Encouraging and building automated processes wherever possible
  • Incident management and root cause analysis
  • Selecting and deploying appropriate CI/CD tools
  • Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
  • Mentoring and guiding team members
  • Monitoring and measuring customer experience and KPIs
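One KPI that recurs across postings like this is availability against an SLA. A back-of-the-envelope helper (the 99.9% target is just an example figure, not this employer's commitment):

```python
def downtime_budget_minutes(sla_percent, period_days=30):
    """Minutes of allowed downtime per period under a given SLA."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

def availability(uptime_minutes, total_minutes):
    """Observed availability as a percentage."""
    return 100 * uptime_minutes / total_minutes

# A 99.9% SLA allows roughly 43 minutes of downtime in a 30-day month.
budget = downtime_budget_minutes(99.9)
```

Teams often track the remaining budget ("error budget") rather than raw uptime: once the budget is spent, risky releases pause.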

Requirements

  • 5-6 years of relevant experience in a DevOps role.
  • Good knowledge of cloud technologies such as AWS/Google Cloud.
  • Familiarity with container orchestration services, especially Kubernetes (mandatory), and good knowledge of Docker.
  • Experience administering and deploying development CI/CD tools such as Git, Jira, GitLab, or Jenkins.
  • Good knowledge of complex debugging mechanisms (especially the JVM) and Java programming experience (mandatory).
  • Significant experience with Windows and Linux operating system environments.
  • A team player with excellent communication skills.
Hybrid Cloud Environments
Agency job
via The Hub by Sridevi Viswanathan
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹12L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Cloudfront
Installation
What we do

We are a boutique IT services & solutions firm headquartered in the Bay Area with offices in India. Our offering includes custom-configured hybrid cloud solutions backed by our managed services. We combine best-in-class DevOps and IT infrastructure management practices to manage our clients' hybrid cloud environments.
In addition, we build and deploy our private cloud solutions using OpenStack to provide our clients with a secure, cost-effective, and scalable hybrid cloud solution. We work with start-ups as well as enterprise clients.
This is an exciting opportunity for an experienced Cloud Engineer to work on exciting projects, with an opportunity to expand their knowledge by working on adjacent technologies as well.

Must have skills

• Provisioning skills on IaaS cloud computing for platforms such as AWS, Azure, GCP.

• Strong working experience in AWS space with various AWS services and implementations (i.e. VPCs, SES, EC2, S3, Route 53, Cloud Front, etc.)

• Ability to design solutions based on client requirements.

• Some experience with various network LAN/WAN appliances (Cisco routers and ASA systems, Barracuda, Meraki, SilverPeak, Palo Alto, Fortinet, etc.)

• Understanding of networked storage (NFS / SMB / iSCSI / Storage GW / Windows Offline)

• Linux / Windows server installation, maintenance, monitoring, data backup and recovery, security, and administration.

• Good knowledge of TCP/IP protocol & internet technologies.

• Passion for innovation and problem solving, in a start-up environment.

• Good communication skills.

Good to have

• Remote Monitoring & Management.

• Familiarity with Kubernetes and Containers.

• Exposure to DevOps automation scripts & experience with tools like Git, bash scripting, PowerShell, AWS Cloud Formation, Ansible, Chef or Puppet will be a plus.

• Architect / Practitioner certification from OEM with hands-on capabilities.

What you will be working on

• Troubleshoot and handle L2/L3 tickets.

• Design and architect Enterprise Cloud systems and services.

• Design, Build and Maintain environments primarily in AWS using EC2, S3/Storage, CloudFront, VPC, ELB, Auto Scaling, Direct Connect, Route53, Firewall, etc.

• Build and deploy in GCP/ Azure as needed.

• Architect cloud solutions keeping performance, cost and BCP considerations in mind.

• Plan cloud migration projects as needed.

• Collaborate & work as part of a cohesive team.

• Help build our private cloud offering on OpenStack.
Anarock Technology
Posted by Arpita Saha
Bengaluru (Bangalore)
4 - 7 yrs
₹5L - ₹12L / yr
Docker
Terraform
Amazon Web Services (AWS)
Kubernetes
DevOps
+2 more

ApnaComplex is one of India’s largest and fastest-growing PropTech disruptors within the Society & Apartment Management business.  The SaaS based B2C platform is headquartered out of India’s tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 Societies, managing over 6 Lakh Households in over 80 Indian cities to effortlessly manage all aspects of running large complexes seamlessly.

ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.

If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.

 

Must have-

 

  • Knowledge of Docker
  • Knowledge of Terraform
  • Knowledge of AWS

 

Good to have -

  • Kubernetes
  • Scripting languages: PHP, Go, and Python
  • Web server knowledge
  • Logging and monitoring experience
  • Test, build, design, deployment, and the ability to maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
  • Build and maintain highly available production systems.
  • Know how to choose the tools and technologies that best fit the business needs.
  • Develop software to integrate with internal back-end systems.
  • Investigate and resolve technical issues.
  • Problem-solving attitude.
  • Ability to automate testing and deployment of code, and to monitor it.
  • Work in close coordination with the development and operations teams so that the application performs in line with customer expectations.
  • Lead and guide the team in identifying and implementing new technologies.

 

 

Skills that will help you build a success story with us

 

  • An ability to quickly understand and solve new problems
  • Strong interpersonal skills
  • Excellent data interpretation
  • Context-switching
  • Intrinsically motivated
  • A tactical and strategic track record for delivering research-driven results

 


ANAROCK Ethos - Values Over Value:

Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.

We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.

Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
+2 more

Position Summary

DevOps is a Department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is for a Cloud Engineering role for someone who also has experience with infrastructure migrations. It is a completely hands-on job, focused on migrating client workloads to the cloud and reporting to the Solution Architect/Team Lead. Alongside that, you are expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.

Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-premises or on IaaS, PaaS, and containers.

So, most of our DevOps work is currently planning, architecting and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of technical, commercial, and service elements related to cloud migration and infrastructure deployments.
  • The person selected for this position will ensure high customer satisfaction by delivering infrastructure and migration projects.
  • The candidate should expect to work across multiple projects in parallel and must have a fully flexible approach to working hours.
  • The candidate should stay current with the rapid technological advancements and developments taking place in the industry.
  • The candidate should also have know-how in Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.

Requirements:

  • Bachelor’s degree in computer science or equivalent qualification.
  • Total work experience of 6 to 8 Years.
  • Total migration experience of 4 to 6 Years.
  • Multiple Cloud Background (Azure/AWS/GCP)
  • Implementation knowledge of VMs and VNets
  • Know-how of Cloud Readiness and Assessment
  • Good Understanding of 6 R's of Migration.
  • Detailed understanding of the cloud offerings
  • Ability to Assess and perform discovery independently for any cloud migration.
  • Working Exp. on Containers and Kubernetes.
  • Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure
  • Understanding of vSphere and Hyper-V virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS Cloud formation/Terraform templates.
  • Working Experience of VPN/Express route/peering/Network Security Groups/Route Table/NAT Gateway, etc.
  • Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
  • High availability and disaster recovery implementations, taking RTO and RPO aspects into consideration.
  • Candidates with AWS/Azure/GCP Certifications will be preferred.
YourHRfolks
Posted by Pranit Visiyait
Remote, Jaipur
3 - 8 yrs
₹6L - ₹16L / yr
DevOps
Docker
Jenkins
Kubernetes
Terraform
+6 more

Job Location: Jaipur

Experience Required: Minimum 3 years

About the role:

As a DevOps Engineer for Punchh, you will work with our developer, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering on business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.

Responsibilities:

  • Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
  • Ensure availability, performance, security, and scalability of AWS production systems.
  • Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
  • Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
  • Write and maintain software that runs the infrastructure powering the Loyalty and Data platform for some of the world's largest brands.
  • Participate in a 24x7 on-call shift rotation for Level 2 and higher escalations.
  • Respond to incidents and write blameless RCAs/postmortems.
  • Implement and practice proper security controls and processes.
  • Provide recommendations for architecture and process improvements.
  • Define and deploy systems for metrics, logging, and monitoring on the platform.
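The "monitoring and measuring availability, latency" work usually boils down to percentile math over request timings. A minimal stdlib sketch (the sample numbers are invented), showing why tail percentiles matter more than averages:

```python
import statistics

def p95(latencies_ms):
    """95th-percentile latency using stdlib statistics.quantiles."""
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(latencies_ms, n=20)[18]

# Mostly fast requests with one slow outlier dominating the tail.
samples = [12, 15, 11, 14, 250, 13, 12, 16, 14, 13,
           15, 12, 11, 14, 13, 15, 12, 13, 14, 12]
tail = p95(samples)
```

Here the median is around 13 ms while the p95 is dragged above 200 ms by a single outlier, which is exactly the signal an SLO alert should catch.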

Must have:  

  • Minimum 3 years of experience in DevOps.
  • BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
  • Strong interpersonal skills.
  • Must have experience with CI/CD tooling such as Jenkins, CircleCI, or TravisCI.
  • Must have experience with Docker, Kubernetes, Amazon ECS, or Mesos.
  • Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy.
  • Proficient in shell scripting and, most importantly, knows when to stop scripting and start developing.
  • Experience creating highly automated infrastructures with configuration management tools like Terraform, CloudFormation, or Ansible.
  • In-depth knowledge of the Linux operating system and administration.
  • Production experience with a major cloud provider such as Amazon AWS.
  • Knowledge of web server technologies such as Nginx or Apache.
  • Knowledge of Redis, Memcached, or one of the many in-memory data stores.
  • Experience with various load balancing technologies such as Amazon ALB/ELB, HAProxy, or F5.
  • Comfortable with large-scale, highly available distributed systems.
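As a toy model of the load-balancing item above (real ALB/HAProxy behavior is far richer), this sketch shows the core idea of smooth weighted round-robin, the interleaving algorithm popularized by nginx:

```python
def weighted_round_robin(servers, rounds):
    """Smooth weighted round-robin selection.

    `servers` maps name -> weight. Higher-weight servers are picked
    proportionally more often, interleaved rather than bunched together.
    """
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    order = []
    for _ in range(rounds):
        # Each step: credit every server its weight, pick the richest,
        # then debit the winner by the total weight.
        for name, weight in servers.items():
            current[name] += weight
        chosen = max(current, key=current.get)
        current[chosen] -= total
        order.append(chosen)
    return order

sequence = weighted_round_robin({"a": 5, "b": 1}, rounds=6)
```

Over any window of 6 picks, "a" (weight 5) is chosen 5 times and "b" once, matching the configured weights.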

Good to have:  

  • Understanding of web standards (REST, SOAP APIs, OWASP, HTTP, TLS)
  • Production experience with HashiCorp products such as Vault or Consul
  • Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
  • Experience in a PCI environment
  • Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
  • Experience maintaining and scaling database applications
  • Knowledge of fundamental systems engineering principles such as the CAP theorem, concurrency control, etc.
  • Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
  • Understanding of infrastructure auditing and helping the organization control infrastructure costs.
  • Experience with Kafka, RabbitMQ, or another messaging bus.
Anzy
Posted by Mukesh Mishra
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹17L / yr
DevOps
Kubernetes
AWS CloudFormation
Python
Docker
+2 more
YOE: 1-3 years
Skills: Python, Docker or Ansible, AWS

➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar SaaS platforms.
➢ Work with developers to institute systems, policies, and workflows that allow for rollback of deployments; triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with postmortems, follow-up, and review of issues affecting production availability.
➢ Establish and enforce systems monitoring tools and standards.
➢ Establish and enforce risk assessment policies and standards.
➢ Establish and enforce escalation policies and standards.
Product-centric market leader in building loyalty solutions.
Agency job
Pune
2 - 10 yrs
₹4L - ₹25L / yr
DevOps
Kubernetes
Amazon Web Services (AWS)
Docker
Python
+3 more
1. Should have worked for at least 3 years as a DevOps/Cloud Engineer in an AWS cloud environment.
2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly.
3. Deep understanding of microservice design, and aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper).
4. Hands-on experience working with containers and their orchestration using Kubernetes.
5. Hands-on experience with Linux and Windows operating systems.
6. Has worked on NoSQL databases like Cassandra, Aerospike, Mongo, or Couchbase, and on centralized logging, monitoring, and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Has good knowledge of network security, security architecture, and secure SDLC practices.
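The centralized caching with Redis mentioned above has a simple in-process analogue. This hedged sketch mimics the TTL semantics a Redis SETEX/GET pair gives you (a real deployment would use a Redis client over the network, not a local dict; the fake clock is purely for demonstration):

```python
import time

class TTLCache:
    """Minimal expiring key-value store mimicking Redis SETEX/GET."""

    def __init__(self, clock=time.monotonic):
        self._store = {}     # key -> (value, expiry timestamp)
        self._clock = clock  # injectable clock, so tests need no sleeps

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[key]  # lazy expiry on read
            return default
        return value

# Deterministic demo with a fake clock instead of real sleeps.
now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.setex("session:42", ttl_seconds=60, value="alice")
hit = cache.get("session:42")
now[0] = 61.0
miss = cache.get("session:42")
```

Centralizing this store (Redis) rather than keeping it per-process is what lets multiple service instances share sessions and invalidate entries consistently.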

Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
+7 more
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata
Pramata's unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications via the Pramata cloud-based customer digitization platform. Pramata's customers include some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California, and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works
Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing, and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely, and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth, while ensuring that this data remains consistent, accessible, and highly secure.

The opportunity - What you get to do
You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment and application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality, and performance in:
  • Automated build and deployment management, release management, on-demand environment configuration and automation, and configuration and change management
  • Production environment support - application monitoring, performance management, and production support of mission-critical applications, including application and system uptime and remote diagnostics
  • Security - ensuring that the highly sensitive data from our customers is secure at all times
  • Instrumenting applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
  • High availability and disaster recovery - building and maintaining systems that are designed to provide 99.9% uptime and ensuring that disaster recovery mechanisms are in place
  • Automating provisioning and integration tasks as required to deploy new code
  • Monitoring - proactive steps to monitor complex, interdependent systems to ensure that issues are identified and addressed in real time

Skills required:
  • Excellent communicator with great interpersonal skills, driving clarity about intricate systems
  • Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
  • Good understanding of software application builds, configuration management, and deployments
  • Strong scripting skills in Shell, Ruby, Python, Perl, etc., with a passion for automation
  • Comfortable with collaboration, open communication, and reaching across functional borders
  • Advanced problem-solving and task break-down ability

Additional skills (good to have but not mandatory):
  • In-depth understanding of, and experience working with, any cloud platform (e.g., AWS, Azure, Google Cloud)
  • Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
  • Able to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude

Minimum Qualifications:
  • Bachelor's degree in Computer Science or a related field
  • Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
  • Strong programming skills in Python, Shell, or Java
  • Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
  • Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS
Directi
Posted by Richa Pancholy
Bengaluru (Bangalore)
2 - 8 yrs
₹10L - ₹40L / yr
Amazon Web Services (AWS)
Python
Linux/Unix
DevOps
MongoDB
+8 more
What is the job like?
We are looking for a talented individual to join our DevOps and Platforms Engineering team. You will play an important role in helping build and run our globally distributed infrastructure stack and platforms. Technologies you can expect to work on every day include Linux, AWS, MySQL/PostgreSQL, MongoDB, Hadoop/HBase, ElasticSearch, FreeSwitch, Jenkins, Nagios, and CFEngine, among others.

Responsibilities:
  • Troubleshoot and fix production outages and performance issues in our AWS/Linux infrastructure stack
  • Build automation tools for provisioning and managing our cloud infrastructure by leveraging the AWS API for EC2, S3, CloudFront, RDS, and Route53, among others
  • Contribute to enhancing and managing our continuous delivery pipeline
  • Proactively seek out opportunities to improve monitoring and alerting of our hosts and services, and implement them in a timely fashion
  • Code scripts and tools to collect and visualize metrics from Linux hosts and JVM applications
  • Enhance and maintain our log collection, processing, and visualization infrastructure
  • Automate systems configuration by writing policies and modules for configuration management tools
  • Write both frontend (HTML/CSS/JS) and backend code (Python, Ruby, Perl)
  • Participate in periodic on-call rotations for DevOps

Skills:
  • DevOps/system admin experience ranging between 3-4 years
  • In-depth Linux/Unix knowledge; good understanding of the various Linux kernel subsystems (memory, storage, network, etc.)
  • DNS, TCP/IP, routing, HA, and load balancing
  • Configuration management using tools like CFEngine, Puppet, or Chef
  • SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB, and HBase
  • Build and packaging tools like Jenkins and RPM/Yum
  • HA and load balancing using tools like the Elastic Load Balancer and HAProxy
  • Monitoring tools like Nagios, Pingdom, or similar
  • Log management tools like Logstash, Fluentd, syslog, Elasticsearch, or similar
  • Metrics collection tools like Ganglia, Graphite, OpenTSDB, or similar
  • Programming in a high-level language like Python or Ruby