DevOps Engineer

at US Based Fortune 200 Company

Bengaluru (Bangalore)
5 - 9 yrs
₹12L - ₹35L / yr
Full time
Skills
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Terraform
NACL
Virtual Network
DevOps and Security Engineer
Role Summary/Purpose:
As the DevOps & Security Engineer, you will be responsible for deploying, configuring, and managing a cloud-based and hybrid product/platform, and for providing operations support.

Essential Responsibilities:

Contribute to product development, APIs, database coding and back-end services (.NET, Java, Python, Angular, React)
Provide operations support (setup, configuration, management, troubleshooting) of IT platforms
Deploy, configure and manage a Platform-as-a-Service (PaaS) cloud platform
Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
DevOps-like programming: writing scripts and building operations/server instance/app/DB monitoring tools (see the sketch after this list)
Set up / manage continuous build and dev project management environment: Jenkins, Git, Jira
Designing secure networks, systems and application architectures
Collaborating with cross-functional teams to ensure secure product development
Disaster recovery, network forensics analysis, and pen-testing current solutions
Planning, researching and developing security policies, standards and procedures
Awareness training of the workforce on information security standards, policies and best practices
Installation and use of firewalls, data encryption and other security products and procedures
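
As an illustration of the scripting and monitoring-tool responsibilities listed above, here is a minimal sketch, assuming a hypothetical internal health endpoint and illustrative thresholds, of the kind of Python check such tooling might start from; it is not a prescribed implementation.

# Minimal health-check sketch: probes an HTTP endpoint and local disk usage.
# The URL and thresholds below are illustrative placeholders, not from this posting.
import shutil
import sys
import urllib.request

APP_URL = "https://example.internal/healthz"   # hypothetical endpoint
DISK_PATH = "/"
DISK_ALERT_PCT = 90                            # alert when disk is >90% full

def check_app(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def check_disk(path, alert_pct):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100 < alert_pct

if __name__ == "__main__":
    ok = check_app(APP_URL) and check_disk(DISK_PATH, DISK_ALERT_PCT)
    print("OK" if ok else "ALERT: health check failed")
    sys.exit(0 if ok else 1)

In practice a script like this would run from cron or a monitoring agent and feed an alerting channel rather than printing.
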
Qualifications/Requirements:
Minimum Bachelor’s Degree in Computer Science, Computer Engineering or in “STEM” Majors (Science, Technology, Engineering, and Math)
5+ years of professional experience
Desired Skills:
Experience with AWS, Microsoft Azure, or GCP environments
Experience with web hosting on Apache and IIS
Experience building and configuring servers (Win 2012, Linux RHEL and/or Ubuntu)
Continuous Build / Continuous Integration experience with Jenkins
Solid foundation within Windows and Linux file systems, configuration and setup, Bash shell, scripting
Strong interpersonal and analytical skills, combined with intellectual curiosity and a desire and ability to "get things done", are essential requirements
Technical certifications such as CISSP, MCSE, CCSP, CCNP, GIAC, CEH
Excellent understanding of networking principles including TCP/IP, WANs, LANs, and commonly used protocols/standards such as DHCP, DNS, (E)SMTP, HTTP(S), IPSec, TLS/SSL, PKI, Telnet, SNMP, POP, LDAP, SSH
Programming and scripting experience: Python, Java, .NET/C#, PowerShell, Salt/Puppet/Chef scripting
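
Tying the AWS and Python scripting items above to the security focus of this role (note the NACL and Virtual Network skills listed for the position), here is a minimal sketch of an AWS audit script; it assumes boto3 is installed and AWS credentials are configured, and the port list is an illustrative choice.

# Sketch: flag EC2 security groups that allow 0.0.0.0/0 on sensitive ports.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

SENSITIVE_PORTS = {22, 3389}   # illustrative choice: SSH and RDP

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if open_to_world and rule.get("FromPort") in SENSITIVE_PORTS:
            print(f"{sg['GroupId']} ({sg['GroupName']}): port {rule['FromPort']} open to 0.0.0.0/0")
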

Similar jobs

DevOps Engineer - SDE1/2/3

at Anarock Technology

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Jenkins
NOSQL Databases
Cassandra
MongoDB
Redis
Mumbai, Bengaluru (Bangalore), Gurugram
2 - 5 yrs
₹8L - ₹18L / yr

At Anarock Tech, we are building a modern technology platform with automated analytics and reporting tools. This offers timely solutions to our real estate clients, while delivering financially favorable and efficient results.

If it excites you to drive innovation, create industry-first solutions, build new capabilities from the ground up, and work with multiple new technologies, Anarock is the place for you.

 

 Key Job Responsibilities:

 

  • Deploy and maintain critical applications on cloud-native microservices architecture
  • Implement automation, effective monitoring, and infrastructure-as-code
  • Deploy and maintain CI/CD pipelines across multiple environments
  • Support and work alongside a cross-functional engineering team on the latest technologies
  • Iterate on best practices to increase the quality & velocity of deployments
  • Sustain and improve the process of knowledge sharing throughout the engineering team

 

Basic Qualifications

  • Experience maintaining and deploying highly-available, fault-tolerant systems at scale
  • A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
  • Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda)
  • Version control system experience (e.g. Git)
  • Experience implementing CI/CD (e.g. Jenkins, TravisCI)
  • Operational (e.g. HA/Backups) NoSQL experience (e.g. Cassandra, MongoDB, Redis) (see the backup sketch after this list)
  • Bachelor's or Master's degree in CS, or equivalent practical experience
  • Effective communication
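
As a small illustration of the automation and NoSQL backup items above, here is a minimal sketch of a scripted MongoDB backup; the mongodump CLI, the connection URI and the backup directory are all assumptions made for the example.

# Sketch: dated MongoDB dump via the mongodump CLI.
# Assumes mongodump is installed and reachable on PATH.
import datetime
import pathlib
import subprocess

MONGO_URI = "mongodb://localhost:27017"           # hypothetical URI
BACKUP_ROOT = pathlib.Path("/var/backups/mongo")  # hypothetical location

def run_backup():
    stamp = datetime.datetime.now().strftime("%Y-%m-%d")
    target = BACKUP_ROOT / stamp
    target.mkdir(parents=True, exist_ok=True)
    subprocess.run(["mongodump", "--uri", MONGO_URI, "--out", str(target)], check=True)
    return target

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")

A cron entry (and a companion restore test) would normally drive a script like this.
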

 

Skills that will help you build a success story with us

 

  • Worked in a startup environment with high levels of ownership
  • Experience with search techniques and solid foundation in search engines (Solr, Elasticsearch or others)
  • Experience in building highly scalable business applications, which involve implementing large complex business flows and dealing with huge amounts of data

 


Anarock Ethos - Values Over Value:

Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.

We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.

Job posted by
Arpita Saha

DevOps Engineer

at Vume Interactive

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Python
Ansible
Chef
Bengaluru (Bangalore)
2 - 10 yrs
₹3L - ₹20L / yr

Key Responsibilities:

  • Work with the development team to plan, execute and monitor deployments
  • Capacity planning for product deployments
  • Adopt best practices for deployment and monitoring systems
  • Ensure the SLAs for performance and uptime are met
  • Constantly monitor systems and suggest changes to improve performance and decrease costs (see the monitoring sketch after the competencies list below).
  • Ensure the highest standards of security


Key Competencies (Functional):

 

  • Proficiency in coding in at least one scripting language - Bash, Python, etc.
  • Has personally managed a fleet of servers (> 15)
  • Understands the different environments: production, deployment and staging
  • Has worked on microservice / service-oriented architecture systems
  • Has worked with automated deployment systems – Ansible / Chef / Puppet.
  • Can write MySQL queries
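
To make the monitoring competency above concrete, here is a minimal per-server sketch, assuming it runs on each host in the fleet (for example from cron or an Ansible ad-hoc command); the thresholds are illustrative placeholders, not recommendations.

# Sketch: per-host resource snapshot, emitted as JSON for a central collector.
import json
import os
import shutil
import socket

LOAD_THRESHOLD = 4.0     # 1-minute load average considered "hot" (illustrative)
DISK_THRESHOLD_PCT = 85  # % of root filesystem used (illustrative)

def snapshot():
    load1, _, _ = os.getloadavg()
    disk = shutil.disk_usage("/")
    return {
        "host": socket.gethostname(),
        "load1": round(load1, 2),
        "disk_used_pct": round(disk.used / disk.total * 100, 1),
    }

if __name__ == "__main__":
    snap = snapshot()
    snap["hot"] = snap["load1"] > LOAD_THRESHOLD or snap["disk_used_pct"] > DISK_THRESHOLD_PCT
    print(json.dumps(snap))
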
Job posted by
Shweta Jaiswal

DevOps Engineer

at mosaic wellness

Agency job
via Qrata
DevOps
Javascript
AWS Lambda
Mumbai
2 - 7 yrs
₹7L - ₹15L / yr
Role

We are looking for an experienced DevOps engineer who will help our team establish DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company. You will also help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices. This is a hybrid role, and the person would be expected to also do some application-level programming in their downtime.

Responsibilities

- Deployment, automation, management, and maintenance of production systems.
- Ensuring availability, performance, security, and scalability of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and
platforms.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform (see the sketch after this list).
- Manage the establishment and configuration of SaaS infrastructure in an agile way
by storing infrastructure as code and employing automated configuration
management tools with a goal to be able to re-provision environments at any point in
time.
- Be accountable for proper backup and disaster recovery procedures.
- Drive operational cost reductions through service optimizations and demand based
auto scaling.
- Have on call responsibilities.
- Perform root cause analysis for production errors
- Use open source technologies and tools to accomplish specific use cases encountered within the project.
- Use coding languages or scripting methodologies to solve a problem with a custom workflow.
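
As a sketch of the metrics/monitoring responsibility above, the following publishes a custom application metric to CloudWatch; boto3, configured credentials, and the namespace and metric names are assumptions made for illustration.

# Sketch: publish a custom CloudWatch metric (e.g. a queue depth sampled by a cron job).
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_queue_depth(depth):
    cloudwatch.put_metric_data(
        Namespace="MyApp/Operations",      # hypothetical namespace
        MetricData=[{
            "MetricName": "QueueDepth",    # hypothetical metric name
            "Value": float(depth),
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    publish_queue_depth(42)  # in practice the value would come from the real queue

An alarm on such a metric can then drive notifications or auto scaling, which is the usual next step in this kind of setup.
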

Requirements
- Systematic problem-solving approach, coupled with strong communication skills and a
sense of ownership and drive.
- Prior experience as a software developer in a couple of high level programming
languages.
- Extensive experience in any JavaScript-based framework, since we will be deploying services to Node.js on AWS Lambda (Serverless)
- Extensive experience with web servers such as Nginx/Apache
- Strong Linux system administration background.
- Ability to present and communicate the architecture in a visual form.
- Strong knowledge of AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, NAT
gateway, DynamoDB)
- Experience maintaining and deploying highly-available, fault-tolerant systems at scale (~1 lakh users a day)
- A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
- Expertise with Git
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Strong experience with databases such as MySQL and with NoSQL stores such as Elasticsearch, Redis and/or MongoDB.
- Stellar troubleshooting skills with the ability to spot issues before they become problems.
- Current with industry trends, IT ops and industry best practices, and able to identify the
ones we should implement.
- Time and project management skills, with the capability to prioritize and multitask as
needed.
Job posted by
Revathi Satish

DevOps Engineer

at MX Player

Founded 2011  •  Product  •  500-1000 employees  •  Profitable
DevOps
Docker
Kubernetes
Amazon Web Services (AWS)
Linux/Unix
Ansible
Python
Shell Scripting
Mumbai
3 - 8 yrs
Best in industry
MX Player is the world’s best video player with an install base of 500+ million worldwide and 350+ million in India. We are installed on every second smartphone in India.

We cater to a wide range of entertainment categories including video streaming, music streaming, games and short videos via our MX Player and MX Takatak apps which are our flagship products.

Both MX Player and MX Takatak iOS apps are frequently featured amongst the top 5 apps in the Entertainment category on the Indian App Store. These are built by a small team of engineers based in Mumbai.


Position: Sr Site Reliability Engineer + DevOps:
Experience: 3 to 8 Years.
 
As a DevOps Engineer, you will build and manage a secure cloud platform supporting multiple platforms, while ensuring seamless development, build, and deployment capabilities. You will be responsible for managing infrastructure, databases, and applications in different environments. You will work closely with developers and QA to ensure the systems we build are robust, can seamlessly scale, handle rapid growth, and limit exposure to single points of failure and security vulnerabilities.


Technical Skills:

Strong background in Linux/Unix Administration
Handling of production infrastructure
Experience with automation/configuration management using either Ansible, Salt, or an equivalent
Ability to use a wide variety of open source technologies and cloud services (experience with AWS is required). 
AWS Environment troubleshooting and setup.
Good exposure to containers, Kubernetes, and Docker
Good experience with Infrastructure as Code and build & deploy orchestrators such as Terraform.

Support database operations including installation, configuration, security, backup, and monitoring
Process automation using any scripting language such as Python, Ruby, or shell.
Support the automation requirements of continuous integration and continuous deployments, build and maintain an effective and secure CI/CD pipeline. Support deployment and release activities.
Monitor system logs and network traffic for unusual or suspicious activity (see the sketch after this list)
Configuration and maintenance of monitoring and alerting systems.
Work under pressure and provide support for systems running 24x7
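
As an illustration of the log-monitoring skill above, here is a minimal sketch that scans an auth log for repeated failed SSH logins; the log path and alert threshold are assumptions made for the example.

# Sketch: count failed SSH logins per source IP in an auth log.
# Log path and threshold are illustrative assumptions; adjust per distro.
import collections
import re

AUTH_LOG = "/var/log/auth.log"   # typical on Debian/Ubuntu
FAILED_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
ALERT_AFTER = 10                 # failures from one IP before flagging

failures = collections.Counter()
with open(AUTH_LOG, errors="ignore") as log:
    for line in log:
        match = FAILED_RE.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common():
    if count >= ALERT_AFTER:
        print(f"Suspicious: {count} failed SSH logins from {ip}")
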
Job posted by
Mittal Soni
DevOps
Kubernetes
Docker
Terraform
Reliability engineering
Amazon Web Services (AWS)
Ansible
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹17L / yr

Role : SRE

Experience : 4 - 8 Years

 

  • Experience in building, deploying and operating cloud solutions on Kubernetes
  • Strong expertise administering and scaling Kubernetes on bare metal; CKA preferred
  • Expertise in K8s interfaces (CNI, CSI, CRI) and service meshes
  • Hands-on experience in DevOps or automation development
  • Demonstrable knowledge of TCP/IP, Linux operating system internals, filesystems, disk/storage technologies and storage protocols.
  • Experience working with Helm Charts and building out Infrastructure As Code (IaC)
  • Experience in writing software to automate orchestration tasks at scale; we commonly use Python, Go, and Shell scripting
  • Knowledge of systems (Linux, GNU tooling), networking (OSI model, DNS, routing) and virtualization vs containerization
  • Expertise in CI/CD tooling for cloud-based applications specifically Terraform / CloudFormation, Jenkins and Git
  • Experience architecting CNF orchestration with Kubernetes
  • Strong understanding of the principles of 12-factor apps and modern containerized microservices
  • Plan for reliability by designing systems to work across our multi-region and multi-cloud environments
  • Experience developing and using Application & Integration stacks/tools such as Kafka, Spring Cloud, Apache Camel, Kubernetes, Docker, Redis, Knative, and NoSQL
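
For the orchestration-automation item above ("we commonly use Python, Go, and Shell scripting"), here is a minimal sketch using the official Kubernetes Python client to flag pods that are not healthy; the client library and a reachable kubeconfig are assumptions.

# Sketch: list pods that are not Running/Succeeded across all namespaces.
# Assumes the `kubernetes` Python client is installed and a kubeconfig is available.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()   # use config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            yield pod.metadata.namespace, pod.metadata.name, pod.status.phase

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
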
Job posted by
RAKESH RANJAN

DevOps Engineer

at Wheelseye Technology India Pvt Ltd.

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Kubernetes
DevOps
Ansible
Docker
ECS
Amazon Web Services (AWS)
EKS
NCR (Delhi | Gurgaon | Noida)
3 - 8 yrs
₹15L - ₹30L / yr
About WheelsEye
Logistics is complex: layered with multiple stakeholders, unorganized, completely offline, and deep with trivial as well as deep-rooted problems. Though the industry contributes 14% to GDP, not much focus has been put on solving its problems, and not much progress has been made.
The fleet owner sits at the centre of the supply chain, responsible for the most complex logistics implementation yet the least enabled. WheelsEye is a logistics company, rebuilding logistics infrastructure around fleet owners.
Currently, Wheelseye is offering technology to empower truck fleet owners. Our software helps automate operations, secure fleet, save costs, improve on-time performance and streamline their business.
We are a young and energetic team of IIT graduates, comprising alumni of Shuttl, Snapdeal, Ola, Zomato, ClearTax and Blackbuck, with rich industry and technology experience. It is our constant endeavour to create and evolve data-driven software solutions that conquer logistics business problems and inspire smarter data-driven decision making.

What’s exciting about WheelsEye?
● Work on a real Indian problem of scale
● Impact lives of 5.5 cr fleet owners, drivers and their families in a meaningful way
● Different from current market players, heavily focused and built around truck owners
● Problem solving and learning oriented organization
● Audacious goals, high speed and action orientation
● Opportunity to scale the organization across country
● Opportunity to build and execute the culture
● Contribute to and become a part of the action plan for building the tech, finance and service infrastructure for logistics industry
● It’s Tough!

Key Responsibilities:
● At least 3 years of experience working as a DevOps Engineer and should have vast experience in systems automation, orchestration, deployment, and implementation.
● Ideal candidate must have experience working with tools such as MySQL, Git, Python, Shell scripting, and MongoDB.
● Demonstrate experience in scaling distributed data systems, for example, Hadoop, Elasticsearch, Cassandra, among others.
● Keen understanding of monitoring solutions for all layers of web infrastructure.
● Experience working with monitoring tools such as Nagios, Grafana.
● The candidate must be skilled in the configuration, maintenance, and securing of Linux systems as well as skill in scripting languages such as Shell and Ruby.
● The candidate will also need skills in infrastructure automation tools, for example Chef and Ansible.
● Ability to lead a team of 3-4 people and design their career paths.

Founders:
• Anshul Mimani (Ex-Shuttl | IIT Kharagpur)
• Manish Somani (Ex-Shuttl | IIT Roorkee)
Job posted by
Aishwarya Priyam

DevOps Engineer

at Crisp Analytics

Founded 2015  •  Products & Services  •  20-100 employees  •  Profitable
DevOps
Amazon Web Services (AWS)
Network
Docker
Jenkins
Kubernetes
Mumbai, Noida, NCR (Delhi | Gurgaon | Noida)
1 - 4 yrs
₹5L - ₹9L / yr

DevOps Engineer

 

The DevOps team is one of the core technology teams of Lumiq.ai and is responsible for managing network activities, automating Cloud setups and application deployments. The team also interacts with our customers to work out solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how you can use various technologies to improve user experience, then Lumiq is the place of opportunities.

 

Job Description

 

  • Explore the newest innovations in scalable and distributed systems.
  • Help design the architecture of the project, solutions to existing problems, and future improvements.
  • Make the cloud infrastructure and services smart by implementing automation and trigger-based solutions (see the sketch after this list).
  • Interact with Data Engineers and Application Engineers to create continuous integration and deployment frameworks and pipelines.
  • Play around with large clusters on different clouds to tune your jobs or to learn.
  • Research new technologies, prove concepts, and plan how to integrate or update.
  • Be part of discussions of other projects to learn or to help.
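
As one hedged example of the trigger-based automation mentioned above, here is a minimal AWS Lambda handler that tags an EC2 instance when an EventBridge "instance running" state-change event fires; the tag key/value and the event wiring are assumptions made for the example, not a prescribed setup.

# Sketch: Lambda handler for an EventBridge rule on EC2 "running" state changes.
# Tags newly started instances so they can be attributed in cost reports.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # EC2 state-change events carry the instance id in event["detail"]
    instance_id = event["detail"]["instance-id"]
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "started-by-automation", "Value": "true"}],  # hypothetical tag
    )
    return {"tagged": instance_id}
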

Requirements

  • 2+ years of experience as a DevOps Engineer.
  • You understand everything from actual (physical) networking to software-defined networking.
  • You like containers and open source orchestration systems like Kubernetes and Mesos.
  • Experience securing systems by creating robust access policies and enforcing network restrictions.
  • Knowledge of how applications work, which is essential for designing distributed systems.
  • Experience contributing to open source projects and discussing shortcomings or problems with the community on several occasions.
  • You understand that provisioning a Virtual Machine is not DevOps.
  • You know you are not a SysAdmin but a DevOps Engineer - the person behind developing operations that let the system run efficiently and scalably.
  • Exposure to Private Cloud, Subnets, VPNs, Peering and Load Balancers, and experience working with them.
  • You check logs before raising an alarm about an error.
  • Multiple screens make you more efficient.
  • You are a doer who doesn't say the word impossible.
  • You understand the value of documenting your work.
  • You understand the Big Data ecosystem and how you can leverage the cloud for it.
  • You know these buddies - #airflow, #aws, #azure, #gcloud, #docker, #kubernetes, #mesos, #acs

 

Job posted by
Sneha Pandey

Director-SRE/Devops

at www.thecatalystiq.com

Founded 2019  •  Product  •  20-100 employees  •  Bootstrapped
IBM Director
DevOps
Docker
Kubernetes
Linux/Unix
Troubleshooting
Mumbai
5 - 15 yrs
₹25L - ₹35L / yr

Your Role:

    • Serve as a primary point of contact responsible for the overall health, performance, and capacity of one or more of our Internet-facing services
    • Gain deep knowledge of our complex applications
    • Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth
    • Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment
    • Work closely with development teams to ensure that platforms are designed with "operability" in mind.
    • Function well in a fast-paced, rapidly-changing environment
    • Should be able to lead a team of smart engineers
    • Should be able to strategically guide the team to greater automation adoption

Must Have:

    • Experience Building/managing DevOps/SRE teams
    • Strong in troubleshooting/debugging Systems, Network and Applications
    • Strong in Unix/Linux operating systems and Networking
    • Working knowledge of Open source technologies in Monitoring, Deployment and incident management

Good to Have:

      • Minimum 3+ years of team management experience
      • Experience in Containers and orchestration layers like Kubernetes, Mesos/Marathon
      • Proven experience in programming & diagnostics in any languages like Go, Python, Java
      • Experience in NoSQL/SQL technologies like Cassandra/MySQL/CouchBase etc.
      • Experience in BigData technologies like Kafka/Hadoop/Airflow/Spark
      • Is a die-hard sports fan
 
 
 
Job posted by
Uneza Maqbool

DevOps Engineer

at Fission Labs

Founded 2008  •  Services  •  100-1000 employees  •  Profitable
DevOps
Azure
Microsoft Windows Azure
CI/CD
Terraform
Docker
Kubernetes
Ansible
Hyderabad
4 - 8 yrs
₹8L - ₹20L / yr

“The only way to do great work is to love what you do.” Are you someone who embraces challenges or is interested in doing something out of the box? Fission Labs leverages cutting-edge technologies to foster innovation and provides scalable solutions that identify customers’ core business needs. We manage end-to-end product development by adopting new technologies such as AI, ML, Salesforce, Big Data, Cloud, and so on. We provide scalable solutions to some of the best Silicon Valley start-ups and enterprises.

Why explore a career with Fission Labs

  • You will work on cutting edge technologies
  • You will have high responsibility and a steep learning curve
  • You will be working with entrepreneurial-minded & innovative people
  • You will be challenged & your work will be recognized
  • You will learn from true innovators and from scratch
  • You’ll be instilled with the value of hard work, ownership, and self-sustainability

Benefits:

  • Start-up spirit (a good ten years in, yet we maintain it)
  • Excellent opportunity to work with highly experienced professionals
  • Flexibility
  • Flat hierarchy
  • Unlimited Career growth

Roles and Responsibilities

Provide infrastructure management while interacting and collaborating with the Engineering Team to support end-user issues & requests related to systems & applications

Should be able to deploy, troubleshoot, automate, maintain and constantly improve the systems that will keep the backend infrastructure running smoothly.

Solid understanding of Continuous Integration and Continuous Delivery best practices.

Hands-on knowledge of IaC (Infrastructure as Code) and configuration management tools such as Terraform, Ansible, Packer, Chef, etc.

Outstanding expertise in developing and automating a tech stack on Azure technologies (e.g. Virtual Machines, Virtual Networks, Load Balancers, Storage Accounts, Azure DevOps, Azure Active Directory, Monitor, Advisor, Security Centre, SQL Databases, FunctionApp)

Implement containerized solutions using Docker and Azure Kubernetes Service.

Should be able to analyse and resolve complex infrastructure and application deployment issues.

Comfortable supporting our technology stack, which collects, stores and processes over 1 billion API calls per day and supports business-critical solutions for one of the world’s fastest-growing mobile companies

Manage and improve microservices infrastructure

 

Technical Attributes

Overall 4 to 8 years of experience, with a minimum of 2+ years in Azure administration

Experience administering an Azure environment using Terraform or ARM templates.

Experience with Azure CLI and PowerShell.

Advanced automation experience using Terraform (or an equivalent technology), Bash, Python, YAML, and PowerShell scripting is a must, along with administration and monitoring experience.

Experience in designing and architecting Highly Available, Scalable, Fault-tolerant, and Secure Enterprise systems.

Strong grasp of automation using scripting languages, and the ability to automate various deployment processes using them

Experience in public cloud development (Azure) or private clouds (OpenStack, VMware)

Experience in Docker and a container orchestration platform like Kubernetes, Mesos/Marathon, or Docker Swarm.

Experience configuring monitoring solutions such as Application Insights.

Experience with setting up, configuring and using Azure DevOps or any CI tools, and building CI/CD pipelines.

Good understanding of distributed system architecture patterns.
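
As a small illustration of the Azure automation described in the attributes above, here is a minimal sketch using the Azure SDK for Python to list VMs and their power state; the SDK packages (azure-identity, azure-mgmt-compute), the credential flow, and the subscription-ID environment variable are assumptions made for the example.

# Sketch: list Azure VMs and their power state with the Azure SDK for Python.
# Assumes azure-identity and azure-mgmt-compute are installed and that
# DefaultAzureCredential can authenticate (e.g. via `az login`).
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]   # assumed environment variable
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for vm in compute.virtual_machines.list_all():
    resource_group = vm.id.split("/")[4]                # resource group from the VM id
    view = compute.virtual_machines.instance_view(resource_group, vm.name)
    states = [s.code for s in view.statuses if s.code.startswith("PowerState")]
    print(f"{vm.name}: {states[0] if states else 'unknown'}")
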

 

Good to Have:

  • Experience with setting up Disaster Recovery for applications, multi-region deployments.
  • Experience setting up a microservices architecture.
  • Event-driven systems architecture, knowledge of Azure Service Bus is good to have.
  • Experience with working on designing systems and infrastructure for Compliance. (Understanding of any regulatory law e.g. HIPAA, GDPR, PCI/DSS etc. would be a plus.)
  • Experience with Visual Studio Team System.
  • Should be comfortable working in an Agile setup, particularly Scrum.
  • Certifications are a plus.
Job posted by
Avipsa Panda

DevOps Engineer

at Entropik Technologies Pvt. Ltd.

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
DevOps
Amazon Web Services (AWS)
Docker
Python
Continuous Integration
Jenkins
Bengaluru (Bangalore)
4 - 8 yrs
₹8L - ₹16L / yr
At Entropik Technologies, we build systems that measure and analyze human emotions at an unprecedented scale, with accuracy, speed, and mission-critical availability. We work with some of the leading brands and agencies across the globe, who use our platform to improve overall customer experience and understand their consumers’ behaviour. If you are excited about the opportunity to learn and work on affective computing systems, enjoy streamlining and automating routine tasks, and want to work on leading-edge software deployments, come challenge yourself at Entropik Technologies.

Responsibilities

  • Design, implement and support the CI/CD pipeline
  • Participate in the design phase of latency-driven, high-scale systems
  • Write scripts to monitor systems and automate routine tasks
  • Maintain our infrastructure across multiple technologies to ensure "ZERO" downtime
  • Design and develop tooling to assist development teams
  • Experiment with new tools and/or processes to improve team routines and communication
  • Troubleshoot issues across the entire stack (diagnose software, application, and network)
  • Document current and future configuration processes and policies
  • Take ownership of existing systems including all tools, technologies, and licenses used by Entropik
  • Work with product management to ensure DevOps is aligned to the overall vision of the company and can scale on demand
  • Build, support and maintain all automated test environment build and code deployment scripts using a mixture of the following: Jenkins, Bitbucket, Git, Gradle, OpenShift, Artifactory, cloud virtualized services, JBoss, Tomcat, Chef

Requirements

  • Minimum of 3 - 5 years of experience in software development and DevOps, specifically managing AWS services such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail and other services provided by AWS
  • Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain & optimize existing infrastructure
  • Conceptualize, architect and build automated deployment pipelines in a CI/CD environment like Jenkins
  • Conceptualize, architect and build a containerized infrastructure using Docker, Mesosphere or similar SaaS platforms
  • Conceptualize, architect and build a secured network utilizing VPCs, with inputs from the security team
  • Work with developers & QA to institute a policy of Continuous Integration with automated testing
  • Architect, build and manage dashboards to provide visibility into delivery, and production application functional and performance status
  • Work with developers to institute systems, policies, and workflows which allow for rollback of deployments
  • Triage release of applications to the production environment on a daily basis
  • Interface with developers and triage SQL queries that need to be executed in production environments
  • Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production
  • Assist the developers and on-calls for other teams with post-incident follow-up and review of issues affecting production availability
  • Minimum of 2 years’ experience in Ansible; must have written playbooks to automate the provisioning of AWS infrastructure as well as the automation of routine maintenance tasks
  • Must have prior experience automating deployments to production and lower environments
  • Experience with APM tools like New Relic and log management tools

Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis, and Elasticsearch clusters, and several other AWS resources like EC2, S3, CloudFront, Route 53 and SNS.

Essential Functions

  • System architecture, process design and implementation
  • Minimum of 2 years’ scripting experience in Ruby/Python (preferable) and shell
  • Web application deployment systems and Continuous Integration tools (Ansible)
  • Establishing and enforcing network security policy (AWS VPC, Security Groups) & ACLs
  • Establishing and enforcing systems monitoring tools and standards
  • Establishing and enforcing risk assessment policies and standards
  • Management of big data solutions (Hadoop, Spark) and large messaging infrastructures (Kafka, RabbitMQ)
  • Innovative, creative mindset - an out-of-the-box thinker
  • Positive, “can do” attitude, not afraid of a challenge
  • Excellent English skills
Job posted by
Anshul Chawla